Celebrating 80 Years: Dreaming AI’s Mind-Bending Potential

Artificial intelligence, especially generative AI, dominated all things tech in 2023, generating a banner year on Wall Street, new applications and regulations, doomsday scenarios, and bated-breath expectations. In short, a flood of hallucinations, humanity’s favorite mode of escapism, especially popular among people who can afford to indulge in imagining a different, better, more intelligible world of their own creation.

The particular genre of fabricated, distorted, invented reality that claims to be based on “science” got its start eighty years ago this month. Neurophysiologist Warren S. McCulloch and logician Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the December 1943 issue of The Bulletin of Mathematical Biophysics. It later became the inspiration for the development of computer-based “artificial neural networks” and their popular description as “mimicking the brain.”

Scientists today know much more about the brain than they did in 1943, but “we’re still in the dark about how it works,” according to the Allen Institute. Writing a paper presenting, in the words of McCulloch’s biographer Tara Abraham, “a theoretical account of the logical relations between idealized neurons, with purported implications for how the central nervous system functioned as a whole,” however, did not require any empirical knowledge.

McCulloch and Pitts needed to make “certain theoretical presuppositions,” specifically that “the activity of a neuron is an all-or-none process” and that “the structure of the net does not change with time.” As Abraham notes, “While McCulloch and Pitts admitted that this was an abstraction, they emphasized that their goal was not to present a factual description of neurons, but rather to design ‘fictitious nets’ composed of neurons whose connections and thresholds are unaltered.”
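Those idealizations can be sketched in a few lines of code. This is a minimal, hypothetical rendering (the function name and the example weights and thresholds are mine, not the paper’s): a unit with fixed weights and a fixed threshold that either fires or does not, which is enough to realize simple logical operations.

```python
def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (1) if and only if the weighted sum of binary inputs
    reaches the threshold -- the all-or-none idealization."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable (fixed, unlearned) weights and thresholds, such units
# compute logical functions -- illustrative choices, not the paper's:
def AND(x, y):
    return mcculloch_pitts_unit([x, y], [1, 1], threshold=2)

def OR(x, y):
    return mcculloch_pitts_unit([x, y], [1, 1], threshold=1)
```

Note that nothing in this sketch learns or changes: the “structure of the net,” as the paper assumed, is frozen by design.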

McCulloch and Pitts’ theory, especially its implications that sensory input was going straight to the brain where it was processed by their presumed digital (on and off, ones and zeroes) neurons, was tested by experiments on frogs conducted by their friend and colleague, Jerome Lettvin. Together with McCulloch, Pitts and the biologist Humberto Maturana, Lettvin subjected the frogs to various visual experiences and recorded the information the eye sent to the brain. “To everyone’s surprise,” writes Amanda Gefter, “instead of the brain computing information digital neuron by digital neuron using the exacting implement of mathematical logic, messy, analog processes in the eye were doing at least part of the interpretive work.”

The McCulloch and Pitts theory was the inspiration for “connectionism,” the specific variant of artificial intelligence dominant today (now called “deep learning”), whose aficionados have finally succeeded in turning it into real-world applications. The development and embellishment of the McCulloch and Pitts hallucination about neurons firing or not firing continued in 1949, when psychologist Donald Hebb advanced a theory of how neural networks could learn. Hebb’s theory is often summarized as “neurons that fire together wire together”: synapses, the connections between neurons, strengthen over time with the repeated reactivation of one neuron by another, or weaken in the absence of such reactivation.
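The “fire together, wire together” rule has a simple mathematical core, sketched below under my own illustrative assumptions (the learning rate and the decay term for weakening are hypothetical choices, not Hebb’s): a synaptic weight grows when pre- and postsynaptic neurons are co-active, and drifts back toward zero when they are not.

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """One Hebbian step: strengthen the synapse when both neurons
    fire together (pre * post == 1), otherwise let it decay slightly."""
    return w + lr * pre * post - decay * w * (1 - pre * post)

# Repeated co-activation strengthens the connection...
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre=1, post=1)

# ...while silence lets it weaken again.
for _ in range(5):
    w = hebbian_update(w, pre=0, post=0)
```

This is the ancestor of the weight updates in today’s artificial neural networks, though modern deep learning trains weights by gradient descent on an error signal, not by Hebb’s local rule.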

Today’s AI aficionados are bothered neither by the absence of facts nor by the presence of experimental facts that contradict the theory they rely on. In the 2017 paper “Neuroscience-Inspired Artificial Intelligence,” the authors, led by Demis Hassabis (co-founder of DeepMind, currently leading Google’s AI work), wrote that McCulloch, Pitts, and Hebb “opened up the field of artificial neural network research, and they continue to provide the foundation for contemporary research on deep learning.”

The subsequent evolution of modern computers—pushed forward by the human intelligence of the engineers developing them—added more and more functionality, all the way to today’s efficient text and image processing. Artificial intelligence is what computers do and what computer engineers invent without any understanding of how our brains work. Hallucinations about “artificial general intelligence,” or AGI, may motivate some of those engineers, but they contribute nothing to their success in steadily expanding what computers can do.