I have spent so much time critiquing generative AI and its progenitors that people frequently come to me and say, “Careful Jim, don’t confuse genAI with predictive AI and machine learning — which are very useful and harmless technologies that shouldn’t be painted with the same brush.” But that’s not true. Long before the advent of ChatGPT, other forms of AI and algorithmic programming were causing all manner of harm. There is no get-out-of-jail-free card for using AI that isn’t part of a large language model or diffusion model. Any technology can, and probably will, be abused or neglected to the point of becoming dangerous, and we shouldn’t gloss over that or become complacent about the risks and ethical concerns that are ever-present. We must evaluate the purpose and consequences of each piece of code on its own terms.
Since it’s Halloween, I figured I would write about one of the monsters of the tech world…
Responsible US citizens, workers, and parents should be alarmed and sickened at the way tech billionaires like Mark Zuckerberg comport themselves in our society — reaping the rewards of wealth, lionized by adoring fanbois, while their companies wreak havoc on cultures around the world. The positive tech media attention and investor love bestowed upon Zuckerberg amount to a whitewashing of the blood on his hands.
Note: this is a long read and will get cut off at some point if you’re reading in your email. Please open up the desktop site or app to read the whole thing. Thanks.
Line goes up
If you work in tech, marketing, or advertising, you may be especially inclined to see Mark Zuckerberg as a cultural hero, or at least as an impressive and successful businessman. If you’re an investor and own shares of Meta, you might not care what the company does so long as it continues to bring you juicy fat returns dripping with dividends: the company’s financial chart has gone up and to the right for most of its existence as a public company.
Except for that downhill slump in 2022. Remember when Facebook died? Following a surprise total rebranding of the company from “Facebook” to “Meta” in October 2021, it took large hedge fund managers and other investors about a year to realize that the “Metaverse” was mostly dead on arrival and that “web3” was probably bullshit. This was a mini hype bubble that swelled and deflated all in the space of one year. Quite extraordinary when you consider that Facebook spent somewhere on the order of $46B building what amounted to a shittier version of Roblox, with the added obstacle of having to learn convoluted and obscure technologies like blockchain and crypto wallets.
(Just for a laugh, check out McKinsey’s pathetically childlike rendition of the metaverse and their continued belief that it’s going to generate $5 trillion by 2030, along with great quotes like “79% of consumers active on the metaverse have made a purchase”. That would be 79% of only 900 monthly active users. They have a very long way to go.)
But never fear, good old Mark breathed new life into the company in 2023 by rolling out an innovative new cost-cutting feature called The Year of Efficiency, which made the graph go up and to the right again in no time at all. He just had to adjust his security settings a little bit after unfriending a few people. He then surfed the generative AI wave all the way to the bank, even though Facebook as a business is still dead, Horizon Worlds is still worse than Second Life, and nothing has really changed.
Meta morphosis
Hypothesis: one of the reasons for changing the company name to Meta at the end of 2021 had nothing to do with the future of online community. It was simply a classic Public Relations 101 move: rebrand when your company name is being trashed everywhere. Remember when Philip Morris renamed itself Altria Group? You catch my drift. Of course, I’m far from alone in drawing this conclusion. I hope this essay goes some way toward ensuring Meta is still associated with the crimes of its past (although so far I’ve yet to see many Substack articles come up in a Google search; why is that?)
More recently, of course, Zuckerberg has had the same epiphany (or professional advice) about rebranding his own self-image, and the media can’t get enough of it. Quite honestly, the kids and techbros who idolize him today would not believe it if you told them he started out as a much cheaper-looking simulacrum before becoming this 2024 looksmaxxing model, complete with freshly mewed jawline, a gold chain, and sunglasses at night.

Why am I mocking Zuckerberg’s look? Isn’t that a cheap shot for an article that is supposed to be about more serious things? Absolutely. But lately I’ve gotten sick of seeing all the media coverage of his “new look”, along with viral memes about his ongoing spat with Elon Musk. Just like the Facebook renaming, all this personal rebranding is by design: Zuckerberg’s public relations team knows how to use it as a fog machine in our collective memory, obscuring the rearview mirror of his past transgressions, until we think of him as just another kooky celebrity rather than an actual boogeyman. Besides, things get a bit dark later, and I wanted to make you laugh before I make you cry.
The problem with the celebrification of tech CEOs and founders is that it endows them with superhero or god-like status. This has a chilling effect on any seriously critical study of them as fallible and flawed human beings. The seemingly self-perpetuating multimedia reality distortion field works around the clock to distance companies like Meta from culpability and censure. As Kara Swisher says, “History gets rewritten as hagiography.” (Burn Book, 2024)

Trouble at home
Don’t be fooled, though. Zuckerberg is still the same guy who started Facemash from his dorm room on the Harvard University campus with the aim of assigning ratings to women who would never date him. This is the same guy who once said that Holocaust-denial content should not be removed from Facebook, the same guy who called his user base “dumb fucks”, and the same guy who made his original mentor sad in 2019. He’s also the same dude called to testify before the Senate in 2018.
On his first field trip to Capitol Hill, Mark was mostly questioned by very old white men who barely understood how the internet works. The hearing that week was almost entirely focused on how Facebook had failed to disclose and mitigate the Russian influence campaign that helped elect Donald Trump and how they also failed to curb Cambridge Analytica. As if that isn’t a bad enough indictment of Facebook already, another topic did come up that I don’t recall getting much mass media attention here in the US: the country of Myanmar.
If, like me, you are bordering on geographically illiterate, I got you:
New ventures abroad
To cut a long history short, a bloody conflict has been raging in Myanmar (formerly Burma) for many decades between the majority Buddhist population and a smaller enclave of Rohingya Muslims who live in northern Rakhine State. The Rohingya are denied citizenship and are constantly persecuted and treated as non-humans by Buddhist overlords high on ethnic nationalism.
Today, Myanmar is governed by a brutal military junta and is in a state of civil war, as it practically always has been since it was taken over by a military socialist coup in 1962. But back in 2011, it went through a brief renaissance of liberalism and restoration. During this time, it eschewed its former isolationism and held its first “democratic” elections in twenty years (in quotes because, in hindsight, it was only a faux democracy that didn’t last long). It also opened up to modernization, including granting telecom companies the right to come in and set up new communication networks. Before long, the long arm of capitalism took hold of the country, resulting in the kind of land grab you might expect when a bunch of companies suddenly have access to a virgin market of 56 million people.
In 2013, as part of their internet.org initiative, Meta worked with the newly established telecom companies to build out Myanmar’s internet infrastructure. Then they secured agreements with Chinese phone manufacturers to pre-load their “Free Basics” app package onto all the phones that would soon flood Myanmar. “Free Basics” of course included a stripped-down version of Facebook. In conjunction, Meta subsidized the mobile data charges for anyone in the country who used their phone to access Facebook. This strategy greatly eased Facebook’s entry into Myanmar’s culture, where it quickly took hold and became the dominant source of news and messaging in the country.
Of course, none of this was altruistic. And as always, there’s no such thing as a free lunch. By being the default app on phones at the dawn of Myanmar’s new internet age, the idea was to gain an automatic foothold and first-mover advantage in a poor but developing nation, then sell ads to their globally minded partners and affiliates. Apparently, this made them a lot of money, despite the fact that the average wage in Myanmar was just $300 per month.
[Purely hypothetical aside: I do wonder if Marietje Schaake would agree that part of Meta’s plan in Myanmar might have been to play an outsized role in the future governance of what they at one time thought was a burgeoning democratic republic. After all, Meta’s earliest VC investor wants to start his own nation state.]
Radicalizing flywheel
So where is all this historical narrative leading? You may already know, but as stated earlier in this essay, I think it’s important to re-associate the newly minted Meta with the crimes of Facebook, if only for posterity.
In his 2022 book The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World (a book I recommend for a multitude of reasons), Max Fisher does an excellent job of detailing many of Meta’s sins. He details how introducing a platform like Facebook to a society that was just one spark away from catching fire again was a very, very bad idea, and how the algorithms that drive Facebook users toward certain types of content became, over time, a kind of flywheel for radicalizing the population, especially the Buddhist majority.
In 2012, a Buddhist monk called Ashin Wirathu, who once described himself as “the Burmese Bin Laden”, was released from prison after serving roughly nine years for his role in the nationalist 969 movement, and he wasted no time getting right back to spurring a nationalist uprising in his country. By 2014, Wirathu had grown a very large following on Facebook, and he shared a post alleging that a Buddhist girl had been raped by a group of Rohingya men. It didn’t matter that the accusation was false or that the incident never even happened. Wirathu’s followers were enraged and organized attacks on Muslim villages, burning many to the ground and killing dozens of men, women, and children in what they regarded as totally justified anger in service of retribution. Some 130,000 people were displaced by the violence and forced into open-air detention camps.
Ashin Wirathu was someone whom the Facebook algorithm had singled out and elevated to national infamy. Not by conscious design, but through mysterious mathematical algorithms that were poorly supervised and totally mismanaged by Meta. Between 2012 and 2017, Facebook’s recommendation engine funneled millions of user sessions to Wirathu’s Facebook page, building him an audience he never could have achieved in his wildest dreams were it not for the lines of programming code that decide who sees what on the platform.
It gets worse
Ashin Wirathu may have been the loudest voice on Facebook inciting violence, but he certainly wasn’t alone. Between 2012 and 2017, the algorithm amplified thousands of personal accounts, pages, and private groups dedicated to the othering and dehumanization of the Rohingya population and calling for them to be ethnically cleansed. This in a country of 56 million where 46 million mobile SIM cards had been purchased, and where Facebook dominated all news and communication.
In August of 2017, a Rohingya militia fought back by ambushing some military installations in the north of the region, killing 12 Buddhist soldiers. The response was a massive and disproportionate military campaign of genocidal collective punishment (remind you of anything?) that left hundreds of Rohingya villages in ruins, 700,000 Muslims displaced, and thousands dead and left to rot in mass graves (this is hard to read, fair warning).
Hand-wringing
By the time Zuckerberg was called to face the Senate in 2018, the situation in Myanmar was horrific, yet according to the transcript he only had to face one question about it. Senator Leahy asked (while presenting Zuckerberg with an example of related hate speech found on Facebook):
You know, six months ago, I asked your general counsel about Facebook's role as a breeding ground for hate speech against Rohingya refugees. Recently, U.N. investigators blamed Facebook for playing a role in inciting possible genocide in Myanmar. And there has been genocide there.
You say you use A.I. to find this. This is the type of content I'm referring to. It calls for the death of a Muslim journalist. Now, that threat went straight through your detection systems, it spread very quickly, and then it took attempt after attempt after attempt, and the involvement of civil society groups, to get you to remove it.
Why couldn't it be removed within 24 hours?
To which Zuckerberg replied that Meta would:
Hire more Burmese-language content reviewers.
Work with “civil society in Myanmar” to “identify specific hate figures so we can take down their accounts.”
Stand up a product team to do “specific product changes in Myanmar” that would “prevent this from happening.”
Digital MSG
All of this was a deflection from the more critical questions of how a genocidal instigator like Ashin Wirathu became so influential on Facebook in the first place and how so many people had been exposed to his content, and it offered no concrete commitment to curing the platform rather than treating the symptoms of a platform gone awry. Not to mention, the question regarding Meta’s AI moderation capabilities was ignored completely.
The real problem is at the core of Facebook: the content algorithm. It’s almost the entire reason Meta exists, and it’s central to their business model. The average Facebook user doesn’t even know the algorithm is there, but it’s ever-present. You think that by clicking on certain posts or videos you’re exercising your agency and free will, but behind the scenes, that algorithm is learning, planning, and training you to consume more and more content. The more content you consume, the more ads you’ll see, which translates to profit for Meta. It’s a mind-meld money machine.
It does this by presenting you with content that will capture your attention, based not so much on your previous likes and interests as on a complex set of calculations that have morphed over time into a recommendation engine on steroids. You might think a program that knows what you should watch next would be a good thing, especially for infinite channel-clickers like myself. But as Fisher and others have explained, these algorithms have become — for reasons not fully understood even by the data scientists who made them — machines of addiction that curate the most controversial and inflammatory content they can find, serving it to you so gradually and subtly that you probably won’t even be aware that everything you read and watch on the platform is laced with the social media equivalent of MSG.
You may start out merely curious about a certain topic and wanting to learn more about it, but the algorithm has a way of funneling you down more and more rabbit holes until you end up somewhere unexpected: looking at content that is disturbing yet just convincing enough that you might be hooked and decide to keep going.
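To make that incentive concrete, here is a deliberately toy sketch in Python. It is emphatically not Meta’s actual code; every name and weight below is hypothetical. It only illustrates the dynamic that Fisher and Haugen describe: score posts by predicted engagement, rank by score, and let the feedback loop do the rest.

```python
# A toy engagement-optimized feed ranker. Hypothetical and vastly
# simplified; real systems use machine-learned models with thousands
# of signals. The point is the incentive, not the implementation.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    impressions: int  # times the post was shown
    clicks: int       # times it was clicked
    shares: int       # shares matter most: they recruit new viewers

def engagement_score(post: Post) -> float:
    """Score a post by how well it captures attention."""
    ctr = post.clicks / max(post.impressions, 1)  # click-through rate
    return ctr * (1 + post.shares)  # hypothetical virality boost

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-scoring posts go to the top of everyone's feed...
    return sorted(posts, key=engagement_score, reverse=True)

# ...which earns them more impressions, clicks, and shares, which
# raises their scores further. Nothing in this loop asks whether a
# post is true or safe; whatever drives clicks wins by default.
```

Notice that nothing in that loop evaluates truth or harm. If inflammatory content reliably wins clicks and shares, the ranking rewards it automatically.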
Misadventures in mismanagement
In 2021, a military junta staged a coup against the Myanmar government, overthrew it, and then proceeded to wage all-out warfare against the Rohingya people, with even more villages burned to the ground and tens of thousands more dead.
Later that year, Meta was the subject of yet another Senate hearing after former Meta employee and whistleblower Frances Haugen came forward with documents exposing Meta’s internally acknowledged lack of control over what happens on their platform. This time, the main topics under scrutiny were teenage mental health and the election disinformation leading up to the January 6th incident, but Haugen did explain how the Facebook algorithm works and how machine learning is used to design a better and better content-recommendation mousetrap for the user, leading to the proliferation of the most engaging content — which happens to be hate speech and disinformation. She shared a 2019 Facebook internal memo in which they admitted:
We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.
Haugen talked about how Facebook engineers have bolted so many variations and upgrades onto the algorithm over the years that it’s not really one core algorithm anymore but an unwieldy, many-headed hydra. They’ve basically lost control of it as it grows in size and complexity. What’s more, she revealed that, contrary to Zuckerberg’s promises in the 2018 Senate hearing, Meta didn’t spend any time or money hiring more Burmese-language experts and didn’t prioritize any kind of mitigation in Myanmar. They weren’t even operating like a globally optimized company:
87% of the spending on combating misinformation at Facebook is spent on English content when only 9% of users are English speakers.
In other words, they’ve known all about the problem for years and have done nothing to deal with it. Granted, it’s a gnarly problem that would be hard to fix. But even if Meta did have the ability to “fix” their algorithms, I don’t think they would, because the algorithms work. Keeping eyeballs stuck on the app while a user doomscrolls their black mirror is Meta’s whole bread and butter. If they reverse-engineered all their machine learning to find out why it favors hate speech and disinformation and then stripped those parts out, who knows, perhaps that would result in far fewer eyeballs getting stuck and a sharp drop in the number of dollars landing in Zuck’s bank account. They’re not going to mess with it.
By all accounts, Meta seems to have gotten away with it, and they likely won’t be held accountable. A class action lawsuit filed on behalf of the Rohingya people was dismissed, and in a comprehensive write-up of the whole situation, the Systemic Justice Journal explained:
No existing legal mechanisms could plausibly be used to hold Facebook accountable, reflecting a longstanding gap in international law when it comes to punishing corporate malfeasance.
Damn these corporations and the systems that give them carte blanche to do things that would land any ordinary individual in jail. Amnesty International has called for reparations, but I won’t hold my breath.
In her testimony, Haugen referenced ethnic conflicts in Ethiopia where Facebook almost certainly played a role in stirring up the population through virulent hoaxes and memes that incited violence. In The Chaos Machine, Max Fisher cites similar evidence of social unrest and hate groups activated by the Facebook platform in Sri Lanka, Mexico, Nigeria, Germany, and Austria. As for the lack of responsibility Zuckerberg displays when confronted with these problems, Fisher concludes:
Some combination of ideology, greed, and the technological opacity of complex machine-learning blinds executives from seeing their creations in their entirety. The machines are, in the ways that matter, essentially ungoverned.
This is a worldwide problem that continues to proliferate even as I write this essay in 2024. Most recently, the Facebook propaganda flywheel worked its dark magic in the north of England, inciting anti-immigration riots.
Net negative
The recommendation algorithm isn’t the only way people become radicalized or brainwashed on Facebook. It’s just the method I wanted to highlight because it is so pertinent to how the Myanmar situation seems to have unraveled, according to people much smarter than me. Of course, there is also the possibility that the level of hate speech and misinformation seen on the platform is made exponentially worse through social amplification: users, of their own accord, choosing to share information with others or invite others into private groups. Not everyone got pulled into the machine by invisible hands. But that doesn’t mean the invisible hands are not there, or that they didn’t light the first match.
Likewise, genocidal conflicts are obviously not the only type of harm Meta bears some responsibility for, but I think it’s the most extreme example and one that I wanted to highlight. Meta’s sins have already filled several books that have been published over the past decade. And yet that financial chart keeps going up and to the right.
What I’ve been wondering about recently: is Facebook more like a gun manufacturer or more like a cigarette company?
From my research, it seems there are a number of lawmakers, pundits, and journalists who say that Facebook is really more like Colt, the manufacturer of the AR-15 rifle. Although AR-15s are commonly the weapon of choice in horrendous acts of domestic terrorism in America, it’s possible to argue that Colt bears zero responsibility because they have no control over how their products are used and do not provide instructions on how to commit crimes with them. Personally, I still can’t fathom what a company like Colt is doing to better the world, but okay, let’s say that argument holds some logic, at least.
On the other hand, it’s easier for me to think of Facebook/Meta as Philip Morris/Altria, and not just because of the strategic name change. Altria knew for a long time that their product was addictive and did harm, that it was designed a certain way to maximize customers at grave risk to their health, and they did nothing to remedy the problem, because the problem itself was key to their success. In the same way, Meta has known all along that its algorithms are harmful, has attempted to hide this fact, and has seemingly continued to do very little to mitigate the harm it causes around the world. And again, the algorithm is the thing that makes Facebook so addictive and makes Meta so profitable. So I think the comparison fits.
This means that despite Zuckerberg’s lofty and idealistic goals of bringing the societies of the world together, he has achieved the exact opposite, and as far as I can tell, his business is a net negative for the human race.