Weird A.I.
by B. Lindquist,
in IEEE Annals of the History of Computing, vol. 48, no. 1, pp. 66-72, Jan.-March 2026.
https://dspace.mit.edu/bitstream/handle/1721.1/165227/Lindquist%20-%20Weird%20AI.pdf
"What, exactly, are we unleashing?"
... When Frank Rosenblatt unveiled the perceptron to the public in 1958, he insisted on a place for the non-rational in machine learning. He acknowledged that symbolic logic had led to powerful advances in control systems and digital computing. But he also saw the limits of logic. Unlike traditional machines, he wrote, the brain—and by extension, his perceptron—operated “intuitively, rather than analytically.” Perhaps predictably, his insistence on foregrounding computational intuition over reasoning didn’t endear him to critics. The famous cybernetician Stafford Beer dismissed him in 1958 for his “lack of logical distinctness.” Indeed, “connectionism”—an early name for research into artificial neural networks like Rosenblatt’s—imagined thought not as logical or distinct, but rather as a cascade of associations. The term itself evokes this shift: cognition as connection. As Margaret Boden puts it, connectionism “puts computational flesh onto the associative bones sketched by Freud,” adding that the power of association “suggests how haunting images can arise, whether neurotic or poetic.”
Much has changed since the early days of connectionism—big data, for one—but Boden’s picture still holds. At least Geoffrey Hinton, the figure most associated with driving neural nets deep and harnessing big data, seems to think so. In a 2018 interview, Hinton remarked that neural networks remain “much closer to Freud.” By this, he meant that humans, and now machines, possess only a “thin film of consciousness and deliberate reasoning, with all this seething stuff underneath.” That seething underlayer, he explained, disregards the tidy rules of conscious thought and operates instead by way of loose analogy.
Clearly, computing no longer plays by the old rules. And, historically, neural networks never did. Yet even as neural networks take center stage, we continue to analyze them using older archetypes that focus on rigid logic. We still trot out familiar touchstones—algorithms, centralized storage, information processing—as if computing still boiled down to if/then commands with outcomes fixed in advance. But today’s A.I. systems no longer follow explicit lists of instructions. Their reasoning is emergent, far closer to the parallel distributed processing (PDP) vision than to midcentury sequential logic. To understand the past and present of artificial neural networks, we need to move beyond established models and account for how emotion, and even irrationality, have been designed into, and emerged from, these systems.
In this regard, specialists who study and write about computing lag behind popular culture. Sentient machines and rebel robots have long been staples of science fiction, but scholars have traditionally distanced themselves from these depictions in favor of emphasizing control, war, and logic as the field’s foundations. To be sure, some historians have explored the limits of self-perceived rationality and highlighted the countercultural elements of computing. But these studies have usually been published in isolation. When viewed in combination, however, their scattered themes cohere into something more: They challenge the conception of the computer as an Enlightenment machine, and they disrupt the presumed bond between computation and cold rationality. Disrupting this alliance has the potential to restore core humanistic concerns and expand our audience.