In her 2021 book, Atlas of AI, Kate Crawford opens her introduction with an account of “Clever Hans”. Anyone working in Data Science today might be familiar with the story, as it serves as a cautionary tale about unconscious cues and “the observer effect” that is commonly taught to students. But Crawford uses the story more artfully to illustrate some important points that challenge our popular understanding of Artificial Intelligence.
To cut a long story short, Clever Hans was a horse that became famous in the early 1900s because his trainer, Wilhelm von Osten, claimed to have taught him to perform amazing feats of addition, multiplication, division, and even spelling. Hans became a celebrity sensation, and his trainer toured Germany to great fanfare and acclaim. Crowds flocked to marvel at the horse that could do math. It was widely proclaimed that Clever Hans had the mental ability of a fourteen-year-old human, and hundreds of thousands of people around the world believed it, hook, line, and sinker.
Except that upon closer inspection by scientists and researchers, it came to light that Clever Hans was really just a very keen student of human emotion and gesticulation. His trainer, it was revealed, was unconsciously feeding the horse cues through his facial expressions and body posture, cues that triggered the response Hans had been conditioned to give. By all accounts, poor Wilhelm von Osten didn’t even know his sensational stage act was a fraud.
If you’re interested, here you can read a full account of the story, written by a psychology researcher in 1919.
Myths and legends
Kate Crawford used the Clever Hans case to highlight two pervasive myths of the burgeoning AI industry:
That nonhuman (computer or animal) minds are analogous to human minds.
That intelligence is something that can exist outside of human sentience, that it’s possible for intelligence to emerge in the absence of our physical senses and contextual understanding of the world.
Before skeptics decided to investigate Clever Hans, people were all too ready to believe they were witnessing a horse that could add, subtract, and multiply as well as a human child. They believed what they saw and had no reason to think critically about what their own eyes were showing them. And the crowds included plenty of reputable people: politicians, military generals, mathematicians, and journalists.
And we shouldn’t be surprised: anthropomorphism — ascribing human qualities to a nonhuman entity — is an innate tendency of the human mind. Anthropomorphism stems from an even more ancient predisposition for animism — believing that inanimate objects can have a soul or are capable of thinking and feeling. Together, these beliefs constitute the sixth of Matthew Hutson’s ‘7 Laws of Magical Thinking’. History is full of examples of animals made human, from the Lion Man figurine carved 40,000 years ago to the Mad Hatter and his friends to Mickey Mouse and Winnie the Pooh. We are preconditioned to accept the idea of animals or inanimate objects taking on human form or exhibiting human behaviors, even while the areas of our brain responsible for critical thinking may be sounding the alarm. We sometimes suspend our disbelief and ignore logic purely for the entertainment value it brings, or because, on some primal level, we want to believe.
So it doesn’t require much effort to understand why the spectacle of magical computers that seemingly act and ‘think’ like humans has so captured the imagination of its creators and anyone targeted by pro-AI marketing. We are inherently biased to think such things are possible, if not inevitable.
But just like in the case of Clever Hans, it’s important that we separate wishful thinking from objective reality if we want to live in a rational world. The truth is, at the time of writing, even the most sophisticated Multimodal Large Language Models are not anywhere near analogous to the human mind, and even so-called emergent capabilities, though surprising, do not equate to intelligence as we have defined it.
Computers all the way down?
The idea that LLMs and neural nets work in a similar fashion to the human mind is arguably a lot more fiction than fact. It is premised on the brain-computer metaphor of mind, which reduces human cognition to computation: our brain is just an organic computer, running on nothing mystical, spiritual, or particularly distinct from anything a future thinking machine might replicate. Proponents of this school of thought include Stephen Wolfram, who goes as far as to say that the entire universe is a computer and we are merely computational beings within that supercomputer.
The brain-computer theory narrows the gap between humanity and artificial intelligence and makes it seem quite reasonable that computer scientists will achieve Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI) within a certain time frame — a ‘singularity’ event that is closer or further away depending on who you ask. Because an artificial neural network is a program constructed to mimic the architecture of our own biological one, it seems quite logical to set the two side by side and imagine we could draw an evolutionary line between them.
Nobody before the 1940s really considered the human brain in relation to computer systems, for the simple fact that complex computer systems did not exist. But once computers emerged, the analogy proved tantalizing. It is often incorrectly stated that Alan Turing himself was convinced that humans are merely organic computers, but this is due to a misinterpretation of his work. In his unfinished book ‘The Computer and the Brain’, published posthumously in 1958, the mathematician John von Neumann stated that the human nervous system was “prima facie digital.” But the idea didn’t really catch fire until 2012, when the writer and futurist Ray Kurzweil rekindled von Neumann’s ideas in his book ‘How to Create a Mind: The Secret of Human Thought Revealed,’ where he basically told everyone they have a microprocessor inside their skull. Kurzweil is not a scientist; he is a speculator drawing interesting conclusions without any experimentation or evidence.
I’m guessing the average tech bro who becomes the founder of a successful startup reads more science fiction and futurism than hard science, so it’s no surprise to learn that many of them have cited Kurzweil as an influence.
Hard science strongly disputes the idea of the brain as a biological computer, and the theory is all but debunked. It would be easy, especially for anyone working in the tech industry, to make a leap of faith toward computation as the correct model, if only because a neat and tidy analogy is much more comforting and easier to process than acknowledging that we are made of different stuff that is still essentially a total mystery to modern science. Nobody likes the idea of a black box that isn’t running any visible software yet manages to control the human operating system (there, even I’m falling into the trap now), but it seems the brain isn’t going to give up its secrets so easily.
This is bad news for AI snake oil salesmen like Sam Altman, Bill Gates, or Jensen Huang, who need you to believe that brains are basically computers, that LLMs are an evolutionary step on the path to AGI, and that they are mere steps away from achieving their dream — because this is the narrative that justifies the extreme valuations of their AI projects and the popularity of overpriced shovels. Without true believers, they would find it harder to parade the horse in front of venture capitalists and governments to secure more funding for their possibly superfluous (and perhaps not even profitable) vision.
Gotta have faith
Every time a tech industry leader hints that AGI is possible, or suggests that it’ll happen in a few years if only they can secure enough data and processing power, they are inviting you to participate in a kind of belief system that stretches all the way back to ancient animism. They are pulling on strings in your subconscious mind that you might not even know are there. Do any of us really believe they are doing this for the greater good of our species? These people are monster capitalists and we’d do well to remember that, just as we should take note when AI industry leaders joke about annihilation.
AI companies heighten the attraction by promoting features that make their chatbots seem more lifelike, such as giving ChatGPT a voice that sounds uncannily like Scarlett Johansson’s, or producing high-production-value but cringeworthy marketing videos touting its conversational abilities.
While watching such demos it is easy to imagine these chatbots as being possessed by a real personality, but it’s an illusion: there is no sentience, or even a thought process as we know it, behind the intriguingly realistic responses produced by the machine. There’s even less intelligence behind the scenes than in the case of Clever Hans — at least Hans was clever enough to learn to pick up on visual cues from his master. An LLM needs to be fed gigantic mountains of text and trained to predict which word is statistically most likely to come next, with humans then explicitly fine-tuning its output to match our patterns of speech. There’s nothing ‘learned’ in the human sense.
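To make the ‘statistical probabilities’ point concrete, here is a minimal sketch in Python. The toy corpus is invented purely for illustration, and a bigram counter is a vast simplification of a transformer-based LLM, but the core move of sampling a statistically likely next word is the same in spirit:

```python
from collections import Counter, defaultdict
import random

# Toy corpus, invented purely for illustration.
corpus = ("the horse taps its hoof and the crowd cheers "
          "and the horse taps its hoof again").split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def sample_next(word):
    """Draw a continuation, weighted by how often it followed `word` in the corpus."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# 'Generate' text: every step is a weighted dice roll over observed counts.
word = "the"
sentence = [word]
for _ in range(8):
    if not following[word]:  # dead end: this word never appeared mid-corpus
        break
    word = sample_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

Run it a few times and the same starting word yields different word salads; the variation comes from weighted random draws, not from anything resembling thought.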
Some of the most common ideas being explored by AI startups at the time of writing are:
Personal Assistants
AI therapists
AI tutors
All of these rely on customers buying into anthropomorphization, and all of them open doors to risks that society is ill-prepared for. Some of the risks identified so far include:
Gives the user the false impression that it is reasoning about something when it is really just making stochastic calculations (see the sketch after this list).
Can give the impression that it is using empathy, or feeling emotions that it cannot really feel.
Seems so credible that false statements, inaccurate answers, and ‘hallucinations’ are accepted as truth or fact.
Carries inherent biases that can be absorbed by the user, whose mind or worldview is re-shaped by them.
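As a minimal illustration of the first risk above, the sketch below uses invented scores to show how a chatbot’s ‘answer’ can be nothing more than a weighted dice roll; a real model assigns scores like these to every possible next token:

```python
import math
import random

# Invented scores for three candidate answers, for illustration only.
scores = {"yes": 2.1, "no": 1.9, "maybe": 1.2}

def sample_answer(scores, temperature=1.0):
    """Softmax the scores into probabilities, then draw one answer at random."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

# The same 'question' can come back with different answers on different runs.
print([sample_answer(scores) for _ in range(5)])
```

There is no deliberation between one run and the next, only a different roll of the dice.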
Oops
It doesn’t take a genius to realize some of the ways these risks could manifest in something as sensitive and critical as therapy or schooling. What are the long-term effects, for individuals and for society, of a suicidal patient being given the wrong therapeutic advice, or of a student being misled by inaccurate answers or ideologies that stem from some inherent bias in the training model? One could argue that any human being in these roles could make the same mistakes, or act nefariously, but the difference is that AI systems can cause harm on a much greater scale, with unprecedented efficiency and with far fewer guardrails and consequences for the damage they could cause, compared to one human acting alone.
Human-like machines are the perfect toy for a generation that grew up watching mostly endearing android robots on Star Wars and Star Trek (we’ll conveniently forget about Terminator and Cybermen). It’s hard not to be excited by the prospect of being able to tell ourselves “We finally live in the future”. That’s why all the current startups will soon be surpassed by companies building physical robots for the home and workplace. But technology is moving far more rapidly than our ability to intellectualize and interrogate our fears and concerns or define new laws to protect us from unforeseen side effects.
There is much more to say on this topic and adjacent to it, but for now I just want to underscore that there are reasons why we are all drawn to humanoid machines. The people who stand to gain most from their introduction into our lives are exploiting those reasons, hoping we will be so entertained and delighted by the spectacle of artificial life that our attention is diverted away from the dangers, and away from the lack of regulation and ethical restraint shown by those same corporations. As the AI hype cycle continues and more products hit the market before any law can be passed to regulate them, be on your guard and don’t fall too deeply into the illusion. This isn’t a harmless sideshow, and unlike the unwitting Wilhelm von Osten, the horse trainers (AI leaders) of today are conscious of the trick they are playing on society. The prize of AGI is no longer a scientific or even a non-profit endeavor — it’s a plan to overwhelm our senses with magical sleight of hand so that billionaires win more control over our lives and widen the wealth gap.
If you understand what I’m saying, tap your hoof nine times.