How I Learned to Stop Worrying and Love the AI
A much too lengthy critique of yet another techbro CEO manifesto.
I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
A singular obsession
Dario Amodei (CEO of Anthropic) has a vision of the future. He comes right out and says he wants to “avoid ‘sci-fi’ baggage”, but the title of his latest essay is “Machines of Loving Grace” — a nod to Richard Brautigan’s 1967 poem “All Watched Over by Machines of Loving Grace”. Brautigan’s poem envisions a utopian future where humankind and machines have physically and spiritually merged, like a Ray Kurzweil wet dream. So we are clearly in science fiction territory. In fact, Brautigan’s ‘cybernetic forest’ and ‘cybernetic ecology’ prefigure the idea of the technological singularity, popularized decades later by Kurzweil and rendered as science fiction on the big screen in 2014. Anyone unfamiliar with 1960s poetry might be forgiven for thinking the title evokes something they learned in catechism or madrasah, so I’m afraid religious connotations might also be unavoidable.
I’ll give Amodei props for writing a much more intelligent and nuanced essay than the immature, badly reasoned manifesto one of his industry peers published last month. But in my opinion, much of what’s on display here is the same kind of pseudo-religious techno-utopianism that has become the default philosophy of big tech leaders, and the excuse they often give for harming the environment and society on the way to achieving their grandiose goals.
I invite you to judge Amodei’s essay for yourself, but be careful not to miss the (cybernetic) forest for the trees: he’s mixing a lot of ideas and metaphors without providing any real evidence or rationale for the predictions and assumptions he makes about the future of the tech his company and others are currently pursuing. Some of this we can put down to basic market forces: he’s the CEO of an AI company, so of course he has a vested interest in putting forth an optimistic view of the role artificial intelligence will play in our future. But I can’t help sensing a strong undercurrent of what AI ethics researcher Timnit Gebru referred to as the TESCREAL fantasy. All the big tech leaders nowadays seem to subscribe to various pages torn from the same playbook. See also: Marc Andreessen’s Techno-Optimist Manifesto.
So in my opinion, this new piece from Amodei is more of the same preaching from the same pulpit. It’s more style than substance.
Manhattan Project 2.0
I want to highlight a few reactions I had when reading. First of all, I find it hard to skip over the fact that when talking about “powerful AI” (his preferred term over AGI or ASI: none of these techbros can agree on what to call it), Amodei brings our attention to the existential danger of the very systems he is portraying as our salvation. He says: “Many of the implications of powerful AI are adversarial or dangerous”. Indeed, in earlier publications, Anthropic have outlined different threat levels that various evolutionary stages of AI might pose, including:
“ASL-3 is the point at which AI models become operationally useful for catastrophic misuse.”
“ASL-4 represents an escalation of the catastrophic misuse risks from ASL-3, and also adds a new risk: concerns about autonomous AI systems that escape human control and pose a significant threat to society. Roughly, ASL-4 will be triggered when either AI systems become capable of autonomy at a near-human level, or become the main source in the world of at least one serious global security threat, such as bioweapons.”
If you’re the CEO of a company actively pursuing ‘powerful AI’ and you’re telling me there is a chance of “catastrophic misuse” if things go wrong, I’d really like to know why YOU get to make the decision that this is a technological innovation we want to pursue.
Seriously, Dario, who elected you to take on such an enormous risk? I’d ask the same question of Sam Altman, Mark Zuckerberg, Elon Musk, Sundar Pichai and any other CEO who runs a company with enough greenbacks to fill Michigan Stadium and access to enough compute to build their own foundation model. Besides your status, wealth, and corporate power, what makes you qualified to explore fringe science that could blow up in all our faces? What governance exists to make sure your experiments don’t run off the rails and kill us all?
At some point, all of these tech leaders have referred to the chance of humanity being oppressed or annihilated by thinking machines that are not aligned with human desires and laws. In other words, they acknowledge that they could be on the verge of creating something as dangerous as nuclear weapons, yet they are free to act alone as private enterprises, with no government oversight or public transparency. How is this normal or acceptable? Shouldn’t they be classified as potential weapons manufacturers and a national security threat? Might a rogue program dreaming of electric sheep in Silicon Valley be more of a threat than a state like Iran enriching uranium? As far as I know, it’s not legal for a private citizen to build bombs or chemical agents in their garage, yet these techbros are essentially doing just that.
Is there a doctor in the house?
The first main section of the essay talks about how AI will somehow usher in a new age of extended lifespans and biomedical breakthroughs that will improve the quality of our lives. This is classic Extropianism, and something many billionaires are obsessed with. We already know a lot about what makes us sick and how to stay as healthy as possible: good eating habits, regular exercise, and the more existential challenge of removing harmful toxins and pollutants from the environment. But instead of focusing on any of that, people of a TESCREAL persuasion prefer to believe that the elixir of life will be created through technological innovation. Here, Amodei postulates that a future superintelligence, smarter than any PhD-possessing human, will act as a:
virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run).
As far as crazy health ideas from techbros go, I suppose this is less cringe-worthy or horrifying than flatulence-causing liquid meals or vampiric blood transfusions from young people. But the idea of a machine that can control a research science lab and direct humans in their work seems particularly far-fetched. First, it is predicated on the idea that we are anywhere near birthing machines that can think and reason for themselves in a way that aligns with human thinking, which apparently we are not. Second, enabling any single machine to manifest physically and remotely control complex lab equipment and networked research computer systems around the world would require infrastructure and standards that don’t exist.
Realistically, such an undertaking would require a ‘New Deal’ level of collaboration between every single industry on earth
Realistically, such an undertaking would require a ‘New Deal’ level of collaboration between every single industry on earth to make it so that a single machine intelligence can traverse millions of software and hardware systems using a streamlined set of protocols and networks. At the moment, we can’t even build a simple AI agent to book a flight, because there are no standard APIs or other ways for an LLM to predictably navigate a web-based user interface! We have SUCH a long way to go before Dario’s vision looks like anything more than a fairy tale.
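To make that concrete, here is a minimal sketch of what ‘booking a flight’ looks like for a software agent today. Everything in it (the site, the selectors, the field names) is hypothetical; the point is that, absent a standard API, an agent is reduced to scraping whatever markup an airline happens to ship, and that markup can change without notice:

```python
# A minimal sketch, not a real integration: the URL, query parameters,
# and CSS selectors below are all hypothetical stand-ins.
import requests
from bs4 import BeautifulSoup

def find_flights(origin: str, dest: str, date: str) -> list[dict]:
    # There is no agreed-upon machine-readable endpoint, so the "agent"
    # has to fetch the same human-facing page a browser would render.
    html = requests.get(
        "https://airline.example/search",  # hypothetical site
        params={"from": origin, "to": dest, "date": date},
        timeout=10,
    ).text
    soup = BeautifulSoup(html, "html.parser")
    # It then depends on CSS classes the airline never promised to keep
    # stable. Any redesign, A/B test, or JavaScript-rendered widget
    # silently breaks this scraper, and there is no contract to appeal to.
    return [
        {
            "flight": row.select_one(".flight-number").get_text(strip=True),
            "price": row.select_one(".price").get_text(strip=True),
        }
        for row in soup.select(".result-row")  # hypothetical selectors
    ]
```

Multiply that fragility by every airline, hospital lab, and research instrument on earth, and you get a sense of the coordination problem being waved away here.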
I would give him the benefit of the doubt and accept that he may be thinking of these things existing many decades into the future, but in the essay’s introduction he did write of the advent of powerful AI in terms of “I think it could come as early as 2026”. So I feel I’m justified in looking at his words through a critical lens.
Ironically, he says of the biological sciences: “I think their rate of discovery could be increased by 10x or more if there were a lot more talented, creative researchers”, and yet he somehow misses the entire point: we could hire and train more human beings to do this type of work today, rather than waiting for a robot unicorn to save us tomorrow. Why wait until we crack the code on simulating this level of intelligence, expertise, and instinct in machines when we already have plenty of human beings in the world who could perform these tasks right now, if only they were given the right funding and support? It’s a ludicrous proposition that only a myopic, tech-obsessed longtermist could make.
We could cure many cancers quicker if we diverted funds away from war machines to cure machines
In reality, the biggest constraint on medical breakthroughs in the present day continues to be financing and political will. We could cure many cancers quicker if we diverted funds away from war machines to cure machines. As one of my followers on LinkedIn recently pointed out, medical researchers would love to get their hands on more compute so that their deep-learning experiments can be conducted more efficiently. Here’s an outlandish idea: how about tech companies stop hoarding all the GPUs and using up all the compute on their plagiarizing, unethical LLMs and dreams of the future, and instead divert some of it back to the people who can put it to good use in the here and now?
It’s not even clear how any of the landmark biomedical solutions Amodei lists in his essay — ranging from diet pills to longevity breakthroughs to gene editing and body modifications — could be made any better or faster once we find ourselves in the presence of “powerful AI”. He points to various treatments and cures that already exist, invented by humans, but then simply states that “the rate of these advances will be similarly accelerated by AI”. Really? Without offering any actual evidence, or even outlining a theory that directly implicates the use of AI in any of these scientific endeavors, he expects us to accept it as a given that humanity on its own won’t be able to solve any more of these problems until intelligent machines come along to save us. Way to underestimate the abilities of your entire species, Dario!
Don’t get me wrong, I’m as frustrated as anyone with the slow pace of biomedical progress. I wish we had a cure for cancer yesterday. But where is the evidence that AI will speed up this process and achieve results that armies of PhDs have not, in a fraction of the time we’d expect?
Hobby horse
In the section on ‘Neuroscience and mind’, Amodei gets caught in a trap that many techbros in this nascent space fall into, namely the idea that the artificial neural net of a large language model is anything like the human mind. He isn’t the first to be enamored of this idea, and he won’t be the last. As I’ve written about before, computer-brain theories of mind are not new, and they are tantalizing for a number of reasons, some of which go all the way back to our ancient beliefs. It’s easy to climb up on this hobby horse and get very distracted thinking about how everything we know so far about the inner workings of the human brain might map to the internal machinations of a computer. But this is an analogy that probably points to a correlation that exists only in our imagination.
Just as with his conclusions regarding biomedical research, he suggests that AI can “accelerate” our understanding of the human brain and invent cures for mental illness. One of the examples he cites is AlphaFold, which, criticism notwithstanding, is an existing technology that uses deep-learning algorithms that would not necessarily improve if the machine had the ability to think and reason like a human.
Similarly, he cites recent uses of machine learning in computational neuroconstructivism — the study of how human neural systems interact. But again, why would the ability to see patterns in the firing of neurons be improved by a system that can think like a human? He doesn’t draw any clear line from point A to point B. He also conjectures that AI might one day be responsible for “uncovering the real causes and dynamics of complex diseases like psychosis or mood disorders”, without offering any tangible way that the mere statistical study of neurons firing at a microscopic level would shed light on these conditions. We already know they are powerfully affected by ultra-nuanced, real-world antagonists that would be extremely difficult to reduce to computational algorithms: congenital disorders, psychological traumas, and environmental triggers like chemical agents, viral infections, and toxins.
The idea that an AI can explain the origins of mental illness simply through pattern recognition just doesn’t add up.
I can see an argument for using the close monitoring of brain activity to produce better drugs that inhibit or help to control certain behaviors, but the idea that an AI can explain the origins of mental illness simply through pattern recognition just doesn’t add up. And if AI is to be credited with curing mental illness, then the supposition must be that mental illness can only be cured by psychotherapeutic drugs, which I’m guessing is not a terribly popular notion in the psychology sector. Of course, AI therapists are already on the scene and our man Dario might believe that patients will prefer talking to a machine that is just good at pretending to empathize, but I’m quite certain the long-term impact of such practices will lean negative and the side effects won’t be pretty.
Of course, no techno-utopian essay would be complete without a mention of “mind uploading”, and this author does not let us down. Although he says it is likely “outside the 5-10 year window we are discussing”, he’s definitely a believer. This means we can check off the ‘transhumanism’ box on our TESCREAL bingo card.
The poor you will always have with you
It’s funny that Dario should begin his section on ‘Economic development and poverty’ by mentioning the low GDP of sub-Saharan Africa. Many big US tech companies have recently started building data centers in Africa that, just like the data centers that power LLM training in the US, use up vast amounts of power and fresh water. You can imagine how the extremely extractive nature of data centers is felt even more acutely in an area of drought and economic impoverishment such as Lagos.
To his credit, he does acknowledge that “I am not as confident that AI can address inequality and economic growth”, which is an equivocal way of saying “I think AI will make economic inequality worse, worse than it’s ever been”. He brings up the socialist calculation debate as something we should not touch with a barge pole, let alone an AI, because, like all good capitalist-loving technocrats, it must cause him great anxiety to consider that a super-intelligent AI could, and maybe would, see the simple logic in fairly distributing money and resources rather than allowing one percent of the population to hoard ninety percent of it.
He goes on to say that AI companies have a “moral imperative” to try to eradicate poverty, even though it is a very messy problem that is apparently caused primarily by corruption, and definitely not by big tech companies buying up all the land, not paying their fair share of taxes, using all the fresh water, and laying off hundreds of thousands of employees.
His plan for how to tackle poverty includes:
“AI-driven health benefits” — it is not clear what he means, but he cites things like anti-malaria drugs and making sure they are doled out to all countries that need them.
“AI-enabled economic decisions” powered by “AI finance ministers and central bankers” to replace the human institutions we have now — presumably on the basis that algorithms won’t themselves be fraudulent or biased (a very naive belief, given the history of algorithmic bias).
An “AI-driven (...) green revolution” — no mention of how AI will magically accelerate the use of renewable energy or sustainability practices.
AI-powered mitigations to climate change — maybe a transhuman Johnny Depp will use nanobots in rainwater to clean carbon from the atmosphere? I think I saw that in a movie once.
How soon before we see Dario nailing ‘wanted’ posters to lampposts and buildings like the warrants for ‘Ned Ludd’, only this time targeting people like Brian Merchant, Paris Marx, or Ed Zitron?
He rambles on for a bit more about wealth inequality without offering any tangible ways that AI could make a positive difference, then concludes this section with a nice little rant about people who are at risk of “opting out of AI-enabled benefits”, whom he tars as ‘anti-technology’ Luddites. This is akin to a Christian saying you will go to hell if you don’t accept Jesus into your life. This article isn’t about my stance on religion; I’m simply pointing out that what Dario is saying here amounts to a religious attitude toward tech, in keeping with the idea that he is a fully paid-up member of the cult of the singularity. He even says that such “Luddites” might be a threat to democracy. How soon before we see Dario nailing ‘wanted’ posters to lampposts and buildings like the warrants for ‘Ned Ludd’, only this time targeting people like Brian Merchant, Paris Marx, or Ed Zitron?
Peace sells, but who’s buying?
In the fourth section, entitled ‘Peace and governance’, Amodei hits us with a real doozy:
Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace
So, for all the talk of how AI is going to create a modern utopia where all humans will thrive, he’s reluctantly accepting that “technology may actually advantage authoritarianism”. He admits that “AI seems likely to enable much better propaganda and surveillance” and that whichever government wins the AGI race will “use AI to achieve robust military superiority”. All this seems pretty bleak! His suggested solution is something akin to Eisenhower’s ‘Atoms for Peace’ — but just as in the case of nuclear proliferation, who gets to decide who is in the club and who isn’t? Who has the right to determine which nation-states can be trusted with a new superpower and which ones can’t? This is a problem that can’t be solved by throwing more technology at it; it has to do with the fundamental nature of human beings and our evolutionary propensity for resolving conflict with extreme prejudice and violence.
Another idea he has is that, to protect democracy, we will have to militarize our “superior AI to win the information war”. That sounds like the current war on reality that started around 2016, but on steroids. Such a war may be inevitable now that we exist in a world of deepfakes, but it is an appalling price to pay for creating something whose benefits aren’t even very apparent.
I’ll leave it to others to critique his governance philosophy in greater detail; suffice it to say that Dario tries his best to explain “the vision of AI as a guarantor of liberty, individual rights, and equality” by thinking about how AI could augment law practices or help calculate an aggregate democratic consensus on certain issues. These applications are perhaps interesting on a practical level and worth exploring, but, tellingly, he doesn’t touch on how AI itself should or could be governed or regulated. And he never convinces us that AI can somehow win the day against the autocratic nightmare he outlined at the beginning. He sets AI up to fail, then offers a Pyrrhic victory, with consequences that would leave our society in ruins.
Snow Crash
Something I can relate to as a writer is that when you come to the end of a large piece of writing you really just want to wrap it up and get back to watching YouTube and eating tacos. So it’s no surprise to me that this utopian essay on AI ends with rough-hewn and almost nonsensical sentences like:
Most people are not the best in the world at anything, and it doesn’t seem to bother them particularly much.
At least he’s showing that he’s only human. He also admits that:
I spend plenty of time playing video games, swimming, walking around outside, and talking to friends, all of which generates zero economic value.
Which surely makes him vulnerable to the coming wave of automation layoffs predicted by his own Chief of Staff. Because from a capitalistic point of view, the dream of AI is a dream of 24/7 productivity and maximum profitability, which of course rules out humans, since we need to do things like eat, shit, and sleep (and play video games).
Somehow, Amodei tries to reconcile the threat of AI replacing humans in most jobs with the idea that even though “AI will become so broadly effective and so cheap” that it does ninety or one hundred percent of the labor, it just won’t matter very much anymore, because the post-AI world will be full of people who no longer want for anything and can spend their days in deep research or at leisure. But don’t forget: he earlier stated that AI is likely to bring about a world ruled by authoritarian values, with the possibility of AI-powered military conflicts on the horizon, and he’s not very confident that AI can end poverty. So there are a lot of loose ends to this conversation. He offers the same solutions we’ve heard from Sam Altman: universal basic income, and the even more bizarre idea of “universal compute” — supposedly, in the future we will value our internet time as much as we value air or water, and an economic system will be built on that need to plug in to the datasphere.
This might make more sense if Zuckerberg hadn’t completely failed to create the Metaverse, which was meant to be a highly addictive, immersive virtual world like the ‘Oasis’ depicted in ‘Ready Player One’ that would have zuckered us all into a trance where we spend our Worldcoin tokens on skins for our virtual avatars while billionaires build hotels on Mars (or something). But in light of that failure, I don’t understand how access to TikTok and Reddit is going to be the driving force of our economic future. Seems like the Zuck did not come through with his critical piece of the puzzle. Now there will be nothing for the AI-displaced workers of the world to do. What will we even spend our UBI on that will still benefit the techbros of the one percent? The other singularity cult members must be so mad at him.
To conclude: what Dario Amodei has written in this essay is neither a plausible nor a desirable vision of the future. At most, it paints a very vague picture of some of the benefits that AI could provide, while giving us more reason to worry about all the ways nefarious human beings can, and probably will, use AI for ill. And even when he tries to summarize the advantages and capabilities of “powerful AI”, he fails to illustrate why it matters that the machine develops human-level intelligence, since machine learning and predictive algorithms are already doing most of what he has envisioned for the future without any need for actual reasoning or creative problem-solving.
This doesn’t surprise me in the least, because as anyone who has been following me for a while will know, I’m quite certain that generative AI systems, and LLMs in general, are more of a toy than a solution to anything; at best, a half-baked solution in search of a problem to solve. The problem with toys, though, is that they sometimes look like weapons, or can be wielded as if they were, and that can have real consequences for both the bearer and the target.