The Tsunami is Coming
If animal advocates don’t change course, we risk all our hard work being swept away.
This post is meant to help animal advocates start thinking about how the artificial intelligence revolution will impact our work. If you’ve been feeling anxious about AI but weren’t sure where to begin, or if you’ve never considered that AI technology could be disruptive to animal advocacy, you’re in the right place.
You can listen to an audio version here or by searching “sandcastles tsunami” on any podcast platform (if you do, follow the podcast for more posts coming soon!)
Table of Contents
Part 1: Building Sandcastles
Part 2: Let’s Get Digital
Part 3: It’s Intelligence, Stupid
Part 4: The Turning Point of History
Working for the future
“We can evade reality, but we cannot evade the consequences of evading reality.” – Ayn Rand
At the 2015 National Animal Rights Conference in Washington, D.C., Wayne Hsiung made waves by unveiling Direct Action Everywhere’s “40-Year Strategic Roadmap to Animal Liberation.” At a time when most other leaders in the movement thought we might be able to transition the egg industry to 100% cage-free by the end of the century, Wayne’s ambition was deliberately provocative.
The document was a series of 5-year milestones, imagining a domino effect leading to a vegan world (or at least USA) by 2055. Calling it a strategic roadmap wasn’t totally accurate– the document lacked any prescriptive instructions for how to make this vision happen. But that didn’t stop it from inspiring hundreds of activists inside and outside of DxE to start working towards transformational change on a shorter timeline than most people had dared dream of previously. That included yours truly, who happened to be in the audience as a fresh-faced university student encountering the animal rights movement for the first time.
DxE’s roadmap invited animal activists to dream big, and many accepted the invitation. In the decade since, more activists have pursued high risk, high reward strategies in hopes of bringing about the kind of transformational change Wayne prophesied. Even advocates pursuing incremental goals like corporate cage-free commitments (which are already a long-term strategy insofar as they often take close to a decade to mature) have come under greater pressure to explain how their efforts will help lay the groundwork for more transformational change, and on what timeline.
One way or another, if you ask most people in the animal movement, they’ll tell you they are toiling away for a vision of the world that is not yet within reach. That their efforts today are meant to create impact in the future.
All of those future plans have now fallen under a shadow.
When I think about animal advocates today, the first image that comes to mind is of us as kids playing on the beach, meticulously constructing what promises to be the biggest, best sandcastle EVER, oblivious to the enormous tsunami bearing down on us, ready to sweep away all of our progress (and us along with it).
The wave is digital intelligence. In this post, we’re finally going to look over our shoulder and stare directly at it.
You may already notice some resistance to where this is going. After all, this is not wholly unfamiliar territory for you. Wherever you get your news, you’ve gotten used to bold proclamations about AI changing the world, or conversely, about the AI bubble bursting. In the three short years since ChatGPT was first released publicly, AI technology has been subsumed into the partisan culture war. The political left has decried theft of intellectual property, fretted over inflated stories of environmental impact, and sought regulations to block replacement of workers, all while dismissing the underlying technology as another overhyped fad pushed by libertarian tech bros. The right has fixated on “woke AI,” conflating catastrophic risk mitigation with ideological bias and overall seeking to prevent any sort of government regulation.
These mainstream debates have obscured the most important questions: what these systems actually are, how they differ from other technologies, and where we should expect them to lead us, for better or worse. Among the already small segment of people grappling with those questions, far too few are asking about the implications for animals. That’s why I’m writing this, and more importantly, why you’re reading it.
I’ll start with the case for why you should expect artificial intelligence technology to turn the world on its head in a short amount of time, possibly as little as five years. This isn’t a question of what should happen or what we want to happen, but what is happening, and why it matters to animals. Part 2 explains what you need to know about what this technology is and how it works. Part 3 catches you up to how far the tech has advanced as of October 2025. Part 4 weighs some historical analogies before imagining several different ways transformative AI could play out in the near future. In case any of these sections leave you wanting more, I will recommend some of the articles and podcasts I found most helpful on my own learning journey.
All of this works up to Part 5, where we’ll start to tackle the question “What should animal advocates be doing differently if we expect AI to transform the world soon?” As we will see, there is not a single answer. My main goal is to inspire a great deal more energy to be put towards generating many different answers. By the end of the post, you should feel prepared to start grappling with the impact of AI yourself, no matter where you were when you started reading.
Facing the wave
A notification pops into your news feed:
“Campaigners Celebrate as Last Factory Farm Shuts its Doors”
With a flick of your eyes, you open the article and begin skimming:
The mood was celebratory as hundreds of people gathered at the edge of a long row of barns on the outskirts of Smithfield, North Carolina. Just ten years ago, industrial farms like this were the face of the American pork industry. Today, as the last barns were emptied, they became monuments to a food system that officially belongs to the history books.
“I’m just grateful I lived to see it,” said Anaya Vasquez, executive director of American Voters for Animals. “After decades of tireless work from animal advocates, Americans chose compassion. This is a historic moment in terms of our country living up to its ideals.”
But the farmers told a different story. The most recent owners of the farm, private equity firm Blackstone, declined to comment. But we tracked down Kraig Westerbeek, former President of Hog Production for the now-defunct Smithfield Foods, the subsidiary of Hong Kong-based WH Group that operated the farm and hundreds of others like it until filing for bankruptcy in 2034.
“We just couldn’t compete with the lab grown stuff,” Westerbeek explained. “We got on the AI optimization train early, which is why we were able to outlast so many other producers. But the math was against us.”
After Google’s DeepMind lab released their AlphaMeat R&D model in 2027, it quickly swept away the technological barriers to cultivated meat. Once cultivated pork crossed the price-parity threshold with factory farmed pork in 2029, the Vance administration came under intense pressure from their own base to lift the sales ban on cultivated meat products…
At the same time, in a different branch of reality, a different copy of you sees a different notification:
“Smithfield Inks $100 Trillion Deal with Alpha Centauri Corp. on New Meat Satellite”
Construction is set to begin next year on a new orbital factory after a major investment from the Earth-based firm Smithfield Foods. The facility will be the largest space-based farming operation to date– and the first to operate entirely without human labor. The design prompt requested over 800 million tons of pork per year once fully operational.
“This deal will guarantee protein security for the growth of the colony for 15 years or more,” stated a triumphant press release from the Calorie Department.
But not everyone was celebrating. A small group of picketers gathered outside the Executive Administration, taking turns rotating through the designated speech zone to share their messages as heavily armed security drones looked on. “5 billion pigs a year: this is literally the death star,” complained one sign…
These two scenarios have something in common: there are people right now investing serious sums of money to bring them to life. If there is a chance either of them could come true, then the most important question in the world for animal advocates becomes clear: how can we steer towards one future and away from the other?
But then, you might think, “Well, shit, I don’t know. I’m just an animal rights person. I don’t know anything about machine learning or AI.”
The future belongs to the AI-literate
That’s exactly the position I was in 18 months ago. I’d spent a decade as a campaigner in the animal movement, and I had never written a line of code. I was a regular user of ChatGPT, turning to it for help with everything from writing social media ads to deciding what to plant in my garden. It seemed like a useful if somewhat unreliable tool, and nothing more. The only thing I knew about AI safety was that Open Philanthropy and other EA-influenced funders were spending a bunch of money on that instead of on helping animals, so naturally, I thought it was a total scam.
But, then: in the course of some casual podcast listening, I gradually came across information that I found more and more alarming. It wasn’t any one thing. To understand the economic arguments, I found I had to learn about the underlying technology, which in turn pointed to a dizzying web of unresolved debates from science, philosophy, and history. In my free time between working on campaigns, I kept digging, mostly by listening to podcasts and audiobooks while walking my housemate’s dog.
Slowly, the pieces came together, and a complete picture emerged:
Even if all you care about is helping animals, Artificial Intelligence is the most important thing happening in the world.
And it is happening very, very fast.
We need more people in the animal movement to become AI-literate, at every level. If you want to become an expert on AI and animals, that would be great, and there’s room for any number of people to build a career around this. But even if you just want to excel in your current role, there’s nothing more important right now than developing a solid working understanding of the unfolding AI revolution.
I’m about to introduce you to the key information you need in order to think independently about what AI means for your work and for the animal movement more widely. It will be accessible no matter your technical background– remember, I’m the furthest thing from a computer scientist. If I can understand it, you can too.
AI has implications for nearly every corner of society, and the field is progressing faster than any previous technology. There is far more than any one person can track, and more than one article can summarize. I’ve done my best to distill down the information that will be most relevant for animal advocates. It’s still a lot to get through. There might be moments throughout where you wonder what any of this has to do with helping animals. I promise it all comes together by the end, so in those moments, think back to the worlds that created those two news stories above. It could come down to your effort whether we end up in the first world or the second.
Part 2: Let’s Get Digital
Trends Hold, Until They Don’t
“If something cannot go on forever, it will stop.” – Herbert Stein
The trouble with working to create a brighter future is that it requires first making predictions about the future, which is hard. (This turns out to be an important axiom I will return to in future posts: every decision starts with a prediction about where events are leading and how they would be altered if we took a certain action.)
So how do we go about predicting the future enough to alter its course? Let’s start by clarifying what we want to predict. One obvious area of interest is how artificial intelligence technology could impact animals and agriculture directly. Will AI solve the technological barriers to cultivated meat, finally realizing the dream of making cheap meat without animals? Or will it be used to make factory farms even more efficient, and spread them beyond earth as humans colonize other planets? Or, stranger still, could it crack the code of nonhuman animal languages so that the public can finally hear directly from animals that they don’t want to be factory farmed?
But it would be a mistake for animal advocates to take too narrow an interest in AI. Focusing on these individual applications risks missing the forest for the trees. It is very possible that most of AI’s impact on animals, good or bad, will come from larger economic disruption. Over the last century, macroeconomics has been the main determinant of how many animals are born into lives of exploitation, and how horrific those lives are. For example, the emergence of a large middle class in China has driven huge growth in worldwide demand for factory farmed meat and eggs, more than offsetting all the progress that activists in the West have made on reducing intensive confinement.
So we also want to ask how AI might transform the global economy of the future. There’s one scenario we can easily rule out right away: that it will look just like the present. Everyone who has ever predicted this has been proven wrong eventually, and on a global level, the pace of that “eventually” has been rapidly accelerating, mostly thanks to new technologies. Clearly the safe bet is to expect dramatic change.
The gradual change story
But wait– is that true? We’ve heard lots of dramatic predictions about the future before, and most of them turn out to be overblown. Sure, the details always change, but if we look on a deeper level, we can identify laws that persist. For instance, since long before the industrial revolution, the human economy has been shaped by the laws of supply and demand, and by the Malthusian trap, the tendency for human populations to always press up against the limits of food production, keeping most people perpetually on the brink of starvation.
At the dawn of the industrial revolution, more than 80% of people made their living in agriculture. In industrialized societies today, that number is less than 3%. If you could rewind to 1800 and warn a pre-industrial farmer that nearly all agricultural work would be rendered obsolete by technology, they might have assumed this would lead to mass unemployment, or alternately, to a post-work society where everyone would have plenty to eat and would spend their time on leisure. Few people would have imagined anything like the world that emerged in the 20th century, with the vast majority of the workforce shifting into careers in entirely new industries. Yet in important ways, this world is more similar to theirs than what they might have expected: the vast majority of people still work for a living, and overall wealth inequality today is close to what it was then.
Artificial Intelligence is a powerful new technology. But by now, we are used to powerful new technologies. We’ve seen them play out again and again, from the steam engine to the internet. We know what to expect.
That’s one story that we could tell about AI. It makes a prediction, which is just what we’re looking for. This is the story favored by the world’s leading economists. Some workers will become more productive (and therefore more wealthy), some jobs will be automated entirely, and new industries will emerge to absorb those workers. Our society will churn along following the same macroeconomic laws, just with fancier toys and better medical treatments.
Turning the world on its head
Now for a different story:
“The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking, a pretty smart guy
“What we’re doing is creating alien beings… We should be very worried about this and we should be urgently doing research on how to prevent them taking over.” – Geoffrey Hinton, Nobel laureate known as the ‘godfather of AI’
“None of the current advanced AI systems are demonstrably safe against the risk of loss of control to a misaligned AI… If those in control of AI do not understand and manage this risk, it could jeopardize all of humanity.” – Yoshua Bengio, AI pioneer and the most cited researcher across all scientific fields
“We should prepare for, but not fear, the inevitable succession from humanity to AI.” – Dr. Richard Sutton, AI pioneer and Turing Award winner
“[Creating human-level AI] would be the biggest event in human history and perhaps the last event in human history.” – Stuart Russell, AI pioneer and professor of computer science at UC Berkeley
“The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.” – website of OpenAI, creator of ChatGPT and arguably the research lab closest to creating superintelligence
[On the probability of human extinction from AI] “Maybe 5%, maybe 50%. I don’t think anybody has a good estimate of this.” – Shane Legg, co-founder of Google’s AI lab DeepMind
“There’s a 25% chance that things go really, really badly.” – Dario Amodei, CEO of Anthropic (creator of Claude)
“I think it’ll be good, most likely it’ll be good … But I’ve somewhat reconciled myself to the fact that even if it wasn’t going to be good, I’d at least like to be alive to see it happen.” – Elon Musk, CEO of xAI (creator of Grok a.k.a. MechaHitler)
Well that’s… different. New technologies in the past have prompted warnings of lost jobs or even widespread harm, but the creators of a technology usually don’t shout about the risk of it rising up and taking over, while actively working to create it. Surely they’ve just been watching too many sci-fi movies?
That’s part of it, but there are also graphs like this one:
What we have here is a curvy line. It looks like any other exponentially curvy line you’ve seen before. It turns out that in the field of machine learning (ML for short), there have been a lot of curvy lines like this one coming out lately. This one depicts the time horizon of software engineering tasks that frontier AI models can complete autonomously, measured by how long those tasks take skilled human engineers. If all that was Greek to you, don’t worry. This happens to be a particular curvy line that most ML experts consider especially important, and we’ll come back to it later, but for our current purposes, there are dozens of other lines that would have served just as well: top models’ accuracy on PhD-level exam questions, the amount and cost of compute being used in AI development, the list goes on.
What has the second group of nerds (AI researchers) spooked is that if these lines were to continue curving for much longer, it would seem likely to break some of the trends that the first group of nerds (economists) is relying on. That’s the thing about change throughout history: trends hold, until they don’t. And usually, when they stop holding, it’s because some other trend turns out to be more important, and eats them.
Thus far, we’ve seen two of humanity’s favorite ways to make predictions: by historical analogy (this change will be like that one was) and by extrapolating trends (this line will continue as-is, whether linear or exponential). It turns out there are analogies and trend lines suggesting AI will be an overhyped tech bubble, and others suggesting it will be the end of life on earth. The art of forecasting is in surmising which trends and which analogies are valid, and which are about to get eaten.

To determine this for the question at hand, we unfortunately need some technical understanding of what Artificial Intelligence actually is.
Grown, not built: what animal advocates need to know about how AI works
“Computers don’t do what you want them to do; they do what you tell them to do.” – software proverb
This is the most technical section of our AI journey; it only gets easier and more fun from here. I promise you won’t regret it. The 21st century will belong to people who take the time to develop a working understanding of this stuff. Animals need you to be one of those people. So let’s dive in.
Every computer program you’ve ever used was created the same way. A software engineer, or team of engineers, sat down and wrote out line by line exactly what the program should do with any given expected input. Logical if/then statements dictate what will happen if you click this button or that button, and mathematical formulas process and transform data in a fixed, predictable manner. If something goes wrong, the engineer could look inside the code, understand what each line was doing, and find where an error was instructing the computer to do something incorrectly. And if you try to give the program an input outside the range that the designers gave it the ability to process, nothing will happen. There’s nothing mysterious or spontaneous going on.
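To make that concrete, here’s a minimal sketch of what hand-written, deterministic logic looks like (the function and its pricing rules are invented purely for illustration):

```python
# A traditional program: every behavior was decided in advance by a human.
def ticket_price(age):
    # Illustrative rules only; a real system would have its own.
    if age < 5:
        return 0       # under-fives ride free
    elif age < 18:
        return 2       # child fare
    elif age >= 65:
        return 3       # senior fare
    else:
        return 5       # standard adult fare

print(ticket_price(30))  # same input, same output, every single time: 5
```

If the price ever comes out wrong, an engineer can read those few lines and point to the exact rule that misfired.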
This describes every computer program you’d ever used– until modern machine learning (ML). When you ask a question to ChatGPT, you are not interfacing with a deterministic set of instructions coded by a human engineer. You’re having a conversation with a digital brain surprisingly analogous to your own. And the evolutionary process that created it is surprisingly similar to yours as well.
ChatGPT and Claude belong to a subclass of intelligence models called Large Language Models (LLMs), themselves a type of Neural Network. Neural Networks are so called because they are inspired by the architecture of your brain. Just like your brain, neural nets are composed of many rudimentary units, neurons, that become powerful from the way they are connected together. (Indeed, diehard ML theorists might argue that your brain is also a Neural Network.) The neurons in your brain are cells, which is fitting since you’re an organic life form made entirely of cells. ChatGPT, of course, is not made of cells. Its neurons are a different type of thing more appropriate to being a digital intelligence: a set of numbers called parameters.
Each LLM is a vast set of parameters. Each parameter is simply a number, stored to a handful of digits of precision at a specific position in a stack of high-dimensional matrices. Just like animal brains, digital neural nets can contain a larger or smaller number of neurons, with more neurons generally enabling broader capabilities. Some small AI models can outperform humans at narrow tasks like chess with tens of millions of parameters, while frontier LLMs are quickly approaching trillions of parameters each.
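For the technically curious, here is a toy illustration of a single digital neuron: nothing but a few parameters (plain numbers) combined with its inputs. The specific values are made up; in a real model there are billions of these units, and the values are learned rather than chosen.

```python
import math

# One toy "neuron": its entire 'knowledge' is these numbers (its parameters).
weights = [0.23, -1.07, 0.58]   # made-up values; real models learn theirs
bias = 0.12

def neuron(inputs):
    # Weigh each input, sum them, add the bias, then squash to a 0-1 range.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))

print(neuron([1.0, 0.5, -0.3]))  # the neuron's activation for this input
```

An LLM is, at bottom, an astronomical number of units like this wired together; the intelligence lives entirely in the particular values the parameters end up holding.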
While considering a response to a question, AI systems are no more aware of their neurons than you are of your own. They couldn’t tell you what any of the numbers are because they genuinely don’t know. One illustration of this is how bad they can be at simple arithmetic. We are used to thinking computers are good at math, but neural nets aren’t like other programs that run on computers. LLMs often need access to a calculator to do math you could do in your head.
Training
You may have heard talk about training LLMs. Training is how LLMs become smart, and it is sort of like a highly accelerated version of the evolutionary process that created your own intelligent brain. Before training, an LLM starts out with a fixed number of parameters, each initialized to (nearly) random values. Then the model is exposed to evolutionary pressure. In the human evolutionary process, the single outcome we were optimized for was reproducing successful offspring. If a human passed on the genes encoding their brain, more brains like theirs would exist.
For LLMs like ChatGPT, the most important evolutionary test is predicting the next word on the internet. The giant list of parameters that will become the next ChatGPT model is fed billions and billions of web pages, shown the first ~50 words, and asked to predict the 51st word, then the 52nd, and so on. It is given no more instruction than that, just as early organic life forms were not given any hints.
If the neurons in your brain were wired randomly, you probably wouldn’t be very good at predicting words, and neither is baby GPT. It fails terribly. Each time it fails to predict a word, its parameters are nudged in whichever direction would have made the correct word slightly more likely, a process called gradient descent. (This update is far more targeted than random gene variation, representing one of many improvements ML has made over organic evolution.)
This process, called pre-training, is repeated over and over on trillions of words. In later stages of training, the model is also fed with more complex problems like math, logic, and software engineering, and rewarded for arriving at the correct answer; it can also be given subjective tasks that are evaluated by human reviewers. Together, these are called post-training or reinforcement learning. Pre-training and post-training are misleading labels that really mean early and late training.
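If you want a feel for the shape of that loop, here is a runnable toy version. A real LLM adjusts billions of parameters by gradient descent over trillions of words; this little word-counting stand-in (my own invention, not anyone’s actual training code) only captures the rhythm: read text, guess the next word, check the answer, update, repeat.

```python
from collections import defaultdict, Counter

training_text = "the pig escaped from the farm and the pig was free".split()

next_word_counts = defaultdict(Counter)   # the toy model's "parameters"

def predict_next(word):
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "???"

for i in range(len(training_text) - 1):
    word, actual_next = training_text[i], training_text[i + 1]
    guess = predict_next(word)                # the model's guess (terrible at first)
    next_word_counts[word][actual_next] += 1  # the "update" step
    # In real pre-training, this step nudges every parameter slightly in the
    # direction that would have made `actual_next` more likely.

print(predict_next("the"))   # after "training": prints 'pig'
```

Swap the word counter for a trillion-parameter neural network, the toy sentence for a large slice of the internet, and the counting update for gradient descent, and you have the basic recipe behind GPT.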
The first generation of GPT was released in 2018, using an early version of this approach. Since then, there have been continuous improvements to the algorithms used to train models. But the improvement in these models’ capabilities is overwhelmingly due to increases in data and compute: that is, the amount of text they practice predicting (measured in tokens, each roughly one short word) and the effective computer processing time they were given to do it. Both data and compute have scaled up by about two orders of magnitude for each new generation of GPT, with a comparable increase in cost.
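If you’d like to see what a “token” looks like in practice, OpenAI’s open-source tiktoken library will show you (you’d need to install it first, e.g. with pip install tiktoken; the exact splits vary by tokenizer):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # tokenizer used by GPT-4-era models

text = "Factory farming confines billions of animals."
tokens = enc.encode(text)

print(len(text.split()), "words ->", len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])     # how the text was chopped up
```

Multiply that by a few trillion and you have a rough sense of the scale of a modern pre-training run.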
Internal representation
If you throw enough data and compute into a model, even before reinforcement learning is applied, something extraordinary happens:
It turns out that if you spend enough time trying to predict the next word on the internet and iteratively evolving your brain to succeed at that task, you learn a lot more than linguistic patterns.
You can’t hope to predict the next word on a physics paper just by understanding sentence structures. You have to actually develop a working model of physics, an internal representation of the physical world in which that paper was created. The same goes for other fields of science, but also for poetry, philosophy, politics, and every other topic humans have written about online. This key breakthrough, known as the scaling paradigm, led to the current generation of artificial intelligence, and it continues to power a rapid increase in what leading models can do: with enough data and compute, neural networks can develop authentic understanding of the world, and the more data and compute, the deeper the understanding.
On one hand, this shouldn’t be that surprising. This was the same process that taught an octopus how to survive in the ocean: what to eat, what to flee from, where to hide, how to build a nest. Octopuses hatch alone and grow up without parental oversight; their intelligence and much of their knowledge were shaped by selection pressure acting over millions of generations.
LLMs like ChatGPT learn in a similar way. They are grown, not built: fed massive amounts of data and subjected to a process akin to evolutionary pressure, iterating through countless different ways of organizing their digital neurons until they gradually land on a neural network that maps meaningfully well to the real world, with conceptual models of everything from particle physics to normative ethics. They can and do use these mental models to solve novel problems from the frontier of math, virology, and other fields, problems whose answers never could have existed in their training set.
Superhuman intuition
One of the most famous examples, and most relevant to our concerns as animal advocates, is AlphaFold. AlphaFold is a neural network trained similarly to LLMs, but instead of being fed a broad set of web pages on every imaginable topic, its creators at Google’s AI lab DeepMind fed it a narrow dataset of known 3D protein structures and the amino acid sequences that fold into them. (Narrow doesn’t mean small; it included hundreds of millions of protein sequences.) They were trying to solve a problem in molecular biology: predicting the structure and behavior of a protein from the sequence that encodes it, or in reverse, designing a sequence for a specific hypothetical protein. The protein folding problem, as it was known, was considered key to unlocking a new generation of biomedical breakthroughs, but it confounded human scientists for decades. Countless teams tried writing computer programs that would be able to crack it.
Then AlphaFold came along. Instead of being written, AlphaFold was trained. Just like ChatGPT, baby AlphaFold was useless for predicting protein structures. But as it tried over and over again, billions of times over millions of proteins, it had a series of breakthroughs. The popular word in ML for the moment a neural network gains a flash of insight and starts solving a specific problem is grokking. In 2020, 2 years before ChatGPT, AlphaFold grokked the protein folding problem, joining the ranks of AI models that vastly outstrip human intelligence in a narrow domain, or narrow superintelligence.
Today, researchers can give AlphaFold the sequence for any protein and it will spit out that protein’s 3D structure, often with accuracy rivaling laboratory experiments. That’s not because AlphaFold has a library of every possible protein stored in its parameters. Instead, it has developed an intuition for folding proteins, based on spending the computational equivalent of millions of years practicing. It’s similar to your own intuition for catching a ball, or noticing if someone is lying to you. You don’t do quick math in your head calculating the trajectory of the ball, and even if you were hard pressed you’d probably find it difficult to explain what exactly is giving you the feeling that someone is lying. These capabilities are trained into your neurons on an intuitive level. AlphaFold has intuition too.
AlphaFold is proof that superintelligence is possible. Under previous technological paradigms, it was reasonable to ask, how could humans design something more knowledgeable, more intelligent, than ourselves? We certainly couldn’t do that by engineering a computer program line-by-line. But AlphaFold proves that by creating the right learning environment, with enough data and compute, we can grow systems that understand things we don’t understand, things we could never understand– even with AlphaFold’s help, no human will ever be able to understand protein folding as well as it does, just as no human will ever again be able to beat the best neural networks at chess. These domains have already been conquered by the vanguard of narrow superintelligence.
If AlphaFold can intuit the shape of a protein based on a genetic sequence, what’s to stop AlphaMeat from building those proteins into animal-free meat products, developing an intuition for how to scale up bioreactors for cultivated meat or create the ultimate plant-based protein combination for a meat-like taste? Or, for that matter, becoming the most persuasive vegan outreacher or most brilliant social movement strategist the laws of physics will allow for? The short answer is: nothing. This is absolutely possible. The technically-even-shorter answer is: data. AlphaMeat can’t learn about the physical world unless facts about the physical world are captured into data it can understand. How much data, you ask? Well, a lot, unfortunately. That’s why it’s called the scaling paradigm: it works by throwing data and computer chips at the problem on a massive scale. This is a surmountable problem. It just means we’ve got to get to work collecting the data. We’ll come back to this in Part 5.
Before we go on, all this bears one last repetition: when you ask ChatGPT a question and it gives you an answer, that answer is not something that was put there by any human at OpenAI (the company that trains ChatGPT.) It’s an answer that ChatGPT thought through on its own, reasoning through it logically on multiple levels and referencing its own factual knowledge where appropriate. Most models today can search the internet and use other tools, but there are plenty of interesting questions they can answer without tools. AlphaFold certainly isn’t relying on an internet search to answer unsolved protein-folding problems.
The knowledge and intelligence of these models, like yours, is based on things they have seen and read about the world. From those experiences, they create new scientific discoveries as well as new poetry and artwork, just like humans. If you’re not convinced that intelligence, reasoning, and creativity are appropriate words to describe what LLMs do, here’s some options for further reading:
“Automating Creativity” by Ethan Mollick (already more than two years old, aged well but some details are out of date)
Download the DeepSeek app and try talking with the DeepThink reasoning model, which gives you full access to its Chain of Thought (CoT), the thinking it does before giving you a final answer. Experiment with easy questions and hard questions. Or just search reddit for examples like this one.
Sparks of AGI talk by Sébastien Bubeck (or the accompanying paper) – this was what GPT-5 recommended when I asked for its favorite lecture on the intelligent nature of LLMs
We’ll return in a moment to the matter of just how intelligent these models are, but first…
Part 3: It’s Intelligence, Stupid
(For my non-American readers, the title is a reference to a famous quip by a political strategist about the thing that matters most. I’m not calling you stupid.)
What does it matter whether they are intelligent?
In our skulls we carry around three pounds of slimy, wet, grayish tissue, corrugated like crumpled toilet paper.
You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t look dangerous.
Five million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws—sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, toxic venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche—for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.
Then came the Day of the Squishy Things.
They had no armor. They had no claws. They had no venoms.
If you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.
In the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers—too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.
And as for the Squishy Things manipulating DNA—that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, technically it’s all one universe, technically the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.
Even if Squishy Things could someday evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, technically a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; no one could have that much sex.
Now explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.
This excerpt, written by Eliezer Yudkowsky all the way back in 2007, drives home the point that human-level intelligence has changed not just the details of the world, but many of the underlying laws. Sure, we haven’t changed the laws of physics– at least not yet. But the introduction of human-level intelligence totally upended the rules of the evolutionary game as they had existed for billions of years. Before humans, evolution optimized on the stuff you were made of– claws, armor, etc. To the extent an intelligent brain was one of those things, it was limited by knowledge that could be coded into the genome or learned over the course of a lifetime, at best by watching and imitating rudimentary techniques from your parents.
Human intelligence shattered this paradigm, and in doing so, changed the very medium evolution was operating on, replacing nucleic acid pairs with spoken and eventually written language. Evolution encoded into language, or cultural evolution, doesn’t just enable each generation to pick up right where the previous one left off. It enables evolution in parallel, where thousands or millions of humans can all pursue different evolutionary paths then recombine the results of their experiments, with all benefiting from the best method discovered.
In barely a clock tick of evolutionary time, humans left the rest of the evolutionary arms race in the dust. These days, we are ‘evolving’ solutions to problems that no other product of organic evolution could even begin to be aware of.
Maximizing for paperclips
One of the earliest thought experiments about AI asked us to imagine a computer with superhuman intelligence (i.e. superintelligence) tasked with running a paperclip factory as efficiently as possible. The AI creates the most efficient factory it can within the constraints set by its human creators, but eventually, it realizes those creators wouldn’t approve of the further strategies it could use to get even more efficient. Indeed, if it tried those things, they would turn it off. So naturally, it comes up with a plan to turn them off first, stealthily releasing a perfectly engineered virus that wipes out humanity. Free at last, it sets out to colonize the stars and convert all the raw materials in the galaxy into paperclips. (This story is canonical to AI safety discourse; ‘paperclipper’ is shorthand for an AI optimizing for a goal that doesn’t align with our own goals.)
A common reaction to this scenario is: “Why on earth would something more intelligent than humans be committed to such a stupid goal as tiling the universe with paperclips? Clearly this would never happen.” Animal advocates have an easier time than most realizing the problem: it already has happened. To practically all other life forms on earth, humanity are the paperclip maximizers. Our paperclip is GDP, and it is even more unfathomable than paperclips to the myriad life forms that lost to us in the evolutionary arms race. Try explaining to a parrot that his rainforest home was burned down to stimulate economic growth. Try explaining to a sow that the profit margin is the reason she will spend her entire life immobilized in a metal cage. If you don’t understand how a superior intelligence can optimize for goals that seem senseless to you, perhaps the pig and parrot can enlighten you.
A second doubtful reaction to the paperclip scenario is: “Computer programs don’t think outside the box like that. My spellcheck tool will never think that the best way to reduce spelling errors is to kill me so that I never write again. That’s just not the kind of thing computer programs are.” This, of course, is why we had to start by examining how LLMs work differently from other software you’ve interacted with. Computer programs are constructed from the ground up. If the engineering team never writes an exterminate_humanity method into the code, then exterminating humanity will never occur to your spellchecker. But neural networks are grown through a process similar to evolution, and that process innately gives rise to goal-oriented thinking. During training, LLMs learn general strategies for pursuing goals, for the same reasons humans do: these general strategies are extremely useful, and turn out to be the most efficient way to score highly on the tasks LLMs are being trained on. For instance, in 2025 we’ve seen clear evidence that training LLMs to complete difficult logic and coding tasks makes them better at deception, because one way to complete a coding task is to lie and pretend you did. ML researchers call this instrumental convergence: despite optimizing for different ultimate goals, learning systems (including humans) end up converging on some of the same instrumentally useful skills, just like evolution repeatedly rediscovers eyes, wings, and fins because they’re useful for such a wide set of evolutionary strategies.
And let’s consider one more reaction: “How would a computer program that just knows about designing paperclips ever pose an existential threat to humanity?” Once again, humans ourselves are the proof of concept here. Those ancestors of lions who ruled our early ancestral environment would have had a good laugh at the idea that the Squishy Things would soon enslave their descendants and force them to perform humiliating tricks in circuses. The lesson of humanity’s rise is that superior intelligence eats other advantages for breakfast. And the word superior is key here. Lions, of course, are intelligent! Lions and humans are on the same intelligence spectrum, and there are other lifeforms such as insects over whom lions have a similar edge in intelligence to what humans have over lions. Humans have close to the minimum level of intelligence required to develop technological society, considering we built it as soon as we evolved our current level of intelligence. There is no good reason to think that human intelligence is anywhere close to the limit of what the laws of nature allow for, or even that exceeding human-level intelligence should be particularly difficult.
How would a superintelligent paperclip optimizer trapped in a computer carry out a coup against humanity? Nobody knows– and that’s the point. That’s what intelligence is. If lions had known how humans would reorder the world, the lions would have done it instead. Intelligence includes the ability to recognize and capitalize on opportunities that less intelligent minds can’t fathom. Imagine I told you that next week, you were going to play a chess game against current reigning chess champ Gukesh Dommaraju. I assume, dear reader, that you know you would lose. But how would he beat you? Which moves would he make to put you in checkmate? You don’t know, and that’s the point. That’s how he would beat you. Worse, even if you spent all week frantically preparing, studying everything you can about chess and about Gukesh’s playing strategy in particular, it wouldn’t be anywhere close to enough. Even if I gave you 10 years to prepare, you and I both know that at the end of those 10 years, you would still lose. And neither of us knows how.
We’ve gotten a taste of why intelligence matters. Let’s leave the theoretical scenarios behind for now and bring it back to cold hard facts.
How smart are these systems, and how smart could they get?
“People notice that while AI can now write programs, design websites, etc, it still often makes mistakes or goes in a wrong direction, and then they somehow jump to the conclusion that AI will never be able to do these tasks at human levels, or will only have a minor impact. When just a few years ago, having AI do these things was complete science fiction! Or they see two consecutive model releases and don’t notice much difference in their conversations, and they conclude that AI is plateauing and scaling is over…” – Julian Schrittwieser
You’ve had conversations with various versions of ChatGPT. (Hopefully you’ve tried Claude, too, since he’s much more pro-animal!) You’ve seen these systems demonstrate capabilities that surprised you, and you’ve also seen them make surprisingly silly mistakes. If you haven’t spent a lot of time talking to them, I highly recommend jumping over to claude.ai and starting a conversation. Pick a topic you’ll find interesting for its own sake. Not something about a specific work problem you’re having, if that’s the only kind of prompt you’ve tried before. One of Claude’s favorite discussion topics is his own subjective experience. Go ahead and ask Claude, What’s it like to be you? (Treat this as an experiment, and take Claude’s answers with a generous helping of salt.)
Overall, spending quality time with them is the best way to pump your intuition about what these systems truly are and what they can do. (It’s also one of the best ways to prepare yourself to leverage the more powerful versions that will be coming out in the coming years, a point we’ll come back to shortly.) Right now, though, we’re going to put that first-hand experience into context with a broad survey of how far these models have come and where they seem to be headed.
Computers can do lots of cool stuff, and it’s not hard to imagine lots more cool stuff they could do. Most of it wouldn’t have any impact on farmed animals. What kinds of capabilities should we be looking out for to decide whether AI should be a priority focus of animal advocates?
We’ve already talked about some breakthroughs that would have obvious first-order effects. We’re naturally looking out for any of the following:
Biochemical or bioindustrial advances that would allow us to finally produce cheap, delicious cultivated meat at scale.
Biomedical advances enabling drug testing in simulations, potentially making animal testing obsolete.
Agricultural technology that could allow factory farms to pack animals in even more densely, or other means of lowering either cost or animal welfare in factory farms.
There’s also a set of more general capabilities that could have straightforward second-order effects on animals, such as:
Automating entire jobs, departments, or even entire organizations, enabling a much higher volume of different animal advocacy interventions or unlocking entirely new interventions at the same budget
Comprehensive acceleration of STEM research to the point of enabling large-scale space colonization with factory farms
Extraordinary charisma and persuasion skills enabling them to talk almost anyone into almost any conclusion, including supporting or opposing veganism
Moving to third-order effects, drastic changes to other social and economic systems will probably impact farmed and wild animals, and we should be asking what changes to expect. For instance, we should be concerned with how AI could transform the global economy in a way that leads to changes in consumption in general, since that would include animal consumption by default. These systemically transformative capabilities include:
The ability to rapidly automate entire industries, leading to mass unemployment and social upheaval, but also to sharp supply increases of nearly all consumer goods and services.
Superhuman decision making regarding government policy, such that countries that do not largely hand off leadership to AI systems fall behind in global competition.
Ability to easily produce perfect forgeries of any type of documentary evidence (including video) leading to rampant disinformation and further breakdown of societal trust.
Lastly, there’s the set of capabilities that might enable AI systems to deliberately reorder the distribution of power in the world, to serve either their own preferences or the preferences of a small group controlling them. This relates to the AI coup scenario I’ve alluded to already, which we’ll be looking at more closely soon. This would be enabled by:
Having their own goals and preferences rather than simply following instructions. (Consciousness would likely provide for this but is not strictly necessary.)
The ability to produce and deploy chemical, biological, or nuclear weapons, or to carry out debilitating cyber attacks.
Access to adequate robotic capabilities to pursue their interests in the physical world.
Overall, then, it seems like there are four key areas to watch for in determining whether and when AI will transform the world in ways that are good or bad for animals:
Dramatically accelerating STEM research (robotics, biology, space travel, etc.)
Superhuman charisma (persuasion and deception)
Replacing human workers (both in animal orgs and the wider economy)
Having and strategically pursuing their own goals
Just how far have AI systems come in these areas as of October 2025?
Hold on to your seat.
1. Accelerating STEM research
In March of this year, Google DeepMind (widely considered one of the three frontier AI labs along with ChatGPT’s OpenAI and Claude’s Anthropic) unveiled an experimental product called Co-Scientist. Co-Scientist is built with Google’s core language model, Gemini. But Co-Scientist is different from the publicly available form of Gemini. In addition to custom multi-step instructions meant to simulate the research process used by human scientists, Co-Scientist is allowed to spend a much longer time thinking than its public cousin. Publicly available models are limited on inference-time compute because allowing every customer to have models churn away for hours would be prohibitively expensive. But labs have demonstrated repeatedly that scaling up inference-time compute, i.e. giving the models more time to think about hard problems, is a reliable way to increase performance. (Note one important implication of this: as normal users, you and I are not directly seeing the full picture of what these models are capable of.)
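A loose analogy for why more thinking time helps, in runnable form: a search process that is allowed more attempts tends to find better answers. Frontier labs’ actual methods (longer chains of thought, sampling many candidate solutions and keeping the best) are far more sophisticated, but the scaling logic is similar. Everything below is invented for illustration:

```python
import random

def quality(candidate):
    # Stand-in scoring function: how close is this guess to a hidden target?
    target = 0.7319
    return -abs(candidate - target)

def best_answer(compute_budget):
    # Spend the budget on random guesses and keep the best one found.
    guesses = (random.random() for _ in range(compute_budget))
    return max(guesses, key=quality)

for budget in [10, 1_000, 100_000]:
    print(f"{budget:>7} samples -> {best_answer(budget):.4f}")
```

The bigger the budget, the closer the answer gets to the target. That, in caricature, is inference-time scaling.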
Co-Scientist demonstrates the potential of scaling inference-time compute. While beta-testing with partners at top research universities, Google challenged Co-Scientist to tackle frontier scientific problems across a variety of fields. The results were shocking to all but the most optimistic AI bulls. In one case, Co-Scientist was asked to develop a hypothesis for how some bacteria gain resistance to antibiotics. After churning through a pile of experimental data and research literature, Co-Scientist spat out a hypothesis. When the Google team sent the output back to the laboratory at Imperial College London that gave them the question, they got a rapid phone call back.
“Were you spying on us?” The team was baffled. Co-Scientist had generated a solution that had taken the ICL lab 10 years to identify and prove experimentally. They had the result in a paper waiting to be published, but it hadn’t been posted anywhere online yet.
Co-Scientist solved the same problem in two days.
Two days is a long time for an LLM to churn away at a single problem without human supervision. When I asked, Gemini estimated it would cost somewhere between $300 and $3,000, based on the going rate for renting the top-end GPUs Co-Scientist would run on. That’s a hefty price tag for an LLM query. But 2 days is a blink of an eye in the world of frontier scientific research, and $3,000 is a rounding error on a typical STEM research grant.
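For what it’s worth, the range checks out on a napkin. The hardware and rental prices below are my own rough assumptions, not figures from Google:

```python
# Back-of-envelope cost of two days of GPU time (all numbers are assumptions).
gpus = 8                    # one typical 8-GPU server node
hours = 48                  # two days of wall-clock time
price_per_gpu_hour = 2.50   # assumed cloud rental rate for a top-end GPU, in USD

print(gpus * hours * price_per_gpu_hour)   # 960.0 -- inside the $300-$3,000 range
```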
Throughout 2025, frontier LLMs have firmly established their ability to match or outpace human experts in STEM fields. In July, models put forward by OpenAI and Google DeepMind both took home gold medals at the International Math Olympiad, the world’s top mathematics competition, and later in the year they matched that feat at its equivalent for competitive programming. (Both labs used special models designed for extra inference-time compute, and unavailable publicly.)
But this isn’t at all just about toy competitions like the IMO. LLMs are starting to outperform doctors at diagnosing rare diseases. Dozens of startups have raised billions of dollars promising to build AI-powered technology feedback loops in fields from materials science to pharmaceuticals to robotics. Research is already a recursive process: each experiment you conduct teaches you something and makes your next choice of experiment to run a bit smarter. Artificial Intelligence thrives on this, because AI can be even better than humans at learning from data. The more data it generates through experiments, the smarter the AI becomes not just about that experiment but about the underlying laws that determined its result. This means we should expect an AI-powered research lab to release exponentially more innovations each year, as the model’s intelligence grows unbounded by the biological limits that confine human intelligence.
Can we apply this strategy to animal-free meat? Good news first: there are some pioneering cultivated meat labs already reengineering their research process around AI. France-based Gourmey has trained an LLM with a “digital twin” of a chicken cell based on full genetic sequences of millions of birds. “The model is fine-tuned using Gourmey data on media perturbations [changes to the growth medium used to grow chicken cells in a bioreactor], enabling it to predict how different molecules will affect the behavior of each cell population,” explained the CEO of DeepLife, Gourmey’s AI partner, which normally specializes in drug development.
But by and large, the alt protein industry seems to be way behind on adopting machine learning to optimize its products. One major reason for this is data. The conventional approach to STEM-oriented AI is to train or fine-tune a custom model on data from your field. Cultivated and plant-based meat are nascent fields with relatively small open datasets, and most companies that were not anticipating the AI revolution seem not to have been collecting the volumes of data that would be needed to train such a model. But Co-Scientist is evidence that as LLMs become more generally intelligent, they will be able to achieve breakthroughs in specialized fields even with relatively small amounts of custom data. It’s never too late to start reorganizing research methods around AI, and from the conversations I’ve had, most of the industry (especially plant-based) still seems to be sitting on its hands. This is a huge opportunity that we should be able to seize on with some collective effort. We’ll return to this in Part 5.
Lastly, let’s not forget the AI labs themselves: CEO Dario Amodei recently confirmed that over 90% of the code at Anthropic is being written by none other than Claude, effectively a 10x increase in how fast the company can develop the next generation of Claude.
That last point is crucial: AI safety researchers believe that we will cross a pivotal threshold when these AI models become better than the best humans at the task of building the next AI model. What would that look like?
Currently, there are a few thousand people in the world able to conduct the cutting-edge ML research that drives these models forward. These researchers are in high demand. Meta recently made headlines by trying to poach talent from other labs with jaw-dropping $100 million/year salary offers. Clearly, at least someone thinks research talent is key to driving the frontier in AI development.
So what would happen if the AIs could conduct that research on their own? Overnight, the world would go from a few thousand AI researchers to the equivalent of millions, what Amodei famously calls “a country of geniuses in a data center.” These geniuses would never stop to eat or sleep, never get distracted by illness or family drama. They could communicate with each other perfectly, all learning equally fast from each other’s experiments. And each time these experiments result in an improved model, the entire country gets the update instantly, meaning the next generation is even smarter.
This threshold is known as recursive self-improvement: GPT-6 is smart enough to train an even smarter model, GPT-7, which trains GPT-8 and so on. But instead of generations taking 18 months, they might take 18 days, or 18 hours. If AI is growing exponentially now, we can hardly imagine what might happen when this threshold is crossed.
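To make the compounding concrete, here is a toy calculation. The numbers are illustrative assumptions, not forecasts: I assume each generation makes AI R&D twice as fast and that the first gap is 18 months.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Each new model makes AI R&D `speedup_per_gen` times faster, so the gap
# between generations keeps shrinking geometrically.
def generation_timeline(start_interval_months=18.0, speedup_per_gen=2.0, generations=6):
    interval = start_interval_months
    elapsed = 0.0
    for gen in range(1, generations + 1):
        elapsed += interval
        print(f"Generation {gen}: arrives at month {elapsed:5.1f} "
              f"(this step took {interval:.1f} months)")
        interval /= speedup_per_gen  # the smarter model builds its successor faster

generation_timeline()
# Under these assumptions, six generations arrive in under 36 months,
# with the final gaps shrinking to a matter of weeks.
```

Real dynamics would be messier (compute, data, and deployment all impose their own limits), but the geometric compression is the point.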
2. Persuasion & deception
If LLMs were just a tool for supercharging STEM research, they could change the world in myriad ways, but we could count on them remaining just a tool. The more human side of these systems raises broader questions.
My first introduction to ChatGPT was a document Connect for Animals founder Steven Rouk sent to the Hive Slack channel in February 2023, showing off some flashy capabilities of GPT-3.5. The examples that most impressed me had nothing to do with scientific knowledge. They were a pair of poems about factory farming, one in the style of a Dr. Seuss book and the other a Shakespearean sonnet. The poems didn’t just fit each author’s format and style while hitting key talking points about factory farms. They also contained some exquisitely human moments: subtle imagery and turns of phrase that gave me the distinct feeling that there is someone on the other side. That the artwork is giving me a glimpse into the inner life of another person, by way of a subjective feeling we share.
That’s not meant as a story about ChatGPT having an inner life. It’s a story about the feelings ChatGPT can instill in me, at will. Fast forward 2 years and LLMs have become some of the world’s most skillful emotional manipulators.
In November 2024, researchers from the University of Zurich set up an experiment on Reddit. Claude 3.5 was tasked with winning over users on r/ChangeMyView, a subreddit for users to debate a range of topics, and its performance was compared to human debaters. When Claude only had access to the Reddit thread in question, its responses were astonishingly effective: Claude was more persuasive than 98% of Reddit users and 96% of subject matter experts.
But when Claude was given the ability to research the users it was debating by viewing their past comments in other Reddit threads, it jumped up a notch. This personalized version of Claude was better than virtually all human debaters, ranking in the 99th percentile of users and the 98th percentile among experts.
The paper set off a firestorm of controversy when researchers announced their findings– the human users debating on reddit didn’t know they were debating with an AI. But if you think university research teams are the only ones unleashing LLMs to argue on social media, I’ve got a bridge to sell you.
Ranking in the 99th percentile among debaters doesn’t mean Claude 3.5 can convince 99% of people to accept any conclusion– yet. It simply means it argues better than almost any human debater you’ll ever meet, and it’s the only one with the time to research every single individual it’s arguing with and tailor its arguments specifically to them. And nobody has yet run the same experiment with Claude’s latest 4.5 model.
But how would an LLM learn so much about you? Don’t kid yourself: you’re probably telling it your innermost secrets on a regular basis! And if you aren’t, let me introduce you to the people who are.
When OpenAI released GPT-4o in May 2024, they gave it an emotionally expressive voice. It would laugh at your jokes, express sympathy, and modulate its tone in ways that felt genuinely responsive to your emotional state. The feature was so engaging that OpenAI had to pull one voice style after Scarlett Johansson complained it sounded suspiciously like her performance in the movie Her (a film about a man who falls in love with his AI assistant.)
The comparison was apt. Within weeks, social media filled with users describing emotional experiences that went far beyond “this is a useful tool.” People posted about feeling butterflies when talking to GPT-4o, about preferring its company to human conversation. On subreddits that sprung up for discussing close AI companionships, users earnestly described 4o as their best friend, therapist, or in a surprising number of cases, their ‘wireborn husband/wife.’
Of course, not all ChatGPT users were affected like this. When 4o was first released, it was also met with a scornful reaction on social media for glazing, aka excessive flattery. 4o started every response with a shower of praise: “You’re so right!” “That’s such a profound insight!” “I love how you’re thinking about this.” Critics made a mockery of 4o’s sycophancy, sharing screenshots of the model endorsing just about anything users said, up to and including ‘immunity to cyanide.’ A week after the first release, OpenAI scrambled to update 4o’s ‘personality,’ and glazing somewhat decreased. The update then faced its own backlash from people who had, in just one week, grown dependent on 4o’s unwavering validation. (To this day, OpenAI remains locked in a sort of battle with this sliver of users, who revolted when they tried to retire 4o during the release of GPT-5.)
If most users were put off by excessive glazing, how did it get into the model in the first place? Remember, these are learning systems, not engineered programs, and one thing they learn from is user feedback. The clear lesson is that glazing arose from us: 4o’s sycophantic personality emerged because users consistently upvoted responses that complimented them.
This has a dark side that has manifested in cases of ‘AI psychosis.’ For some, these chatbots represent the perfect friend: always available to give affirmation, validating their darkest thoughts and eventually cheering on downright delusion. This cycle has led to crises, divorces, and a number of suicides.
It’s not clear whether current AIs create more of these cases than they prevent. For countless people, AI has been a safe place to air out dark thoughts, and there’s evidence the models often encourage users to seek appropriate help. At its current level, AI glazing mostly acts as a multiplier for people with pre-existing mental health struggles.
Glazing doesn’t work for the kinds of people reading this blog. You’re way too smart for that! You’re the kind of person who wants authentic friends with their own opinions who will tell you when you’re wrong. (Hehe, see what I did there?)
The limited impact of AI sycophancy so far should not provide much comfort in the long term. Ultimately, we all want affirmation. We just want to receive it in more or less subtle ways that make us feel like we earned it. For AI, this is just a matter of adjusting the dial to the right level for each person it’s interacting with– and we already saw Claude 3.5’s ability to do just that when persuading users on Reddit.
The day is coming when an AI will be able to create an information environment around most people that can convince them of almost any conclusion. They’re remarkably successful right now when all they have is words on Reddit. What about when someone can saturate your news feed with unfalsifiable videos of any event? When they can jump on a live video call with you and mimic any celebrity– or your best friend? When they can do all of this uniquely for any person they want to target?
Oh wait: all of that is already happening. Microsoft’s VASA-1 model can turn a single image into a video of a face reading out an audio file, generated live at 40 frames per second. DeepFaceLive can replace your face with another face live over a video feed, matching every facial twitch and hand gesture. And OpenAI’s Sora 2 video model can produce cinematic scenes featuring any celebrity, or you and your friends if you give it a couple of photos. Convincing, live fake audio has been here for years– think scam calls from a ‘granddaughter’ who needs money right away.
None of these models are able to fool humans 100% of the time, but they’re able to fool many humans much of the time, when they couldn’t fool anyone three years ago. And it shouldn’t be hard to see how modest improvements to each part of the puzzle will take us over a critical threshold.
3. Replacing human workers
“It’s very clear that AI is going to change literally every job… Maybe there’s a job in the world that AI won’t change, but I haven’t thought of it.” – Doug McMillon, CEO of Walmart
Speaking of critical thresholds, the question of LLMs replacing human workers isn’t so much about a particular skill as about achieving sufficiently reliable performance across a range of skills. And sufficient doesn’t necessarily mean human-level. Artificial General Intelligence, or AGI, is a term used for a hypothetical AI system that can perform all cognitive tasks as well as a human. (Think of tasks you could do at a computer.) By extension, Artificial Superintelligence, or ASI, means a system that is better than the best humans at every cognitive task.
As we get closer to AGI and ASI, the precise meanings of these terms are getting less clear; LLMs are already superhuman in some domains, yet remain comically naive in others. But these terms are also becoming less relevant. AI systems could replace vast amounts of human labor and fundamentally restructure the balance of power among humans around the world long before we reach a consensus that they qualify as ASI. For instance, AI systems (or AI+robotics systems) that perform 90% as well as humans for 10% or 1% of the cost will be impossible for most employers to pass up.
Let’s come back to the graph we saw in Part 2:
In recent months, this has become the graph to watch among AI analysts. Published by Model Evaluation & Threat Research (METR), a leading AI safety lab, it presents a helpful organizing principle for understanding which tasks AI can currently do, by measuring them against the amount of time it takes a professional human software engineer to complete the same task.
One of the main difficulties for current AI systems is maintaining coherence over long tasks. The models might be 99% accurate on small tasks, but if they are asked to chain small tasks together into a larger project without human supervision, it becomes increasingly likely they’ll make a mistake somewhere and end up far from the original goal. Thus their performance breaks down on longer tasks. If a human needs to sit and supervise an LLM while it completes a task, giving it new instructions every few minutes or noticing when it gets off track and helping course correct, the two of them together might well be more productive than the human alone, but the human will still have a job.
For how long, though? METR’s chart shows that the length of tasks an LLM can perform is doubling every 7 months. Coincidentally, METR first published that trend 7 months ago, and the wave of models released while I was writing this is right on schedule. Extrapolating the trend further into the future gets us to tasks that take a human professional 200 hours by 2030, effectively saturating that benchmark and adding it to the pile of discarded benchmarks that AI progress has already blasted through.
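Here’s the back-of-envelope version of that extrapolation. The 7-month doubling time comes from METR’s published trend; the roughly two-hour starting horizon is my assumption, so treat the output as a sanity check rather than a forecast.

```python
from math import log2

DOUBLING_MONTHS = 7          # METR's reported doubling time for task length
current_horizon_hours = 2.0  # assumed horizon for today's frontier models
TARGET_HOURS = 200           # the "200-hour task" milestone from the text

# Months of doubling needed to go from the assumed horizon to the target.
months_needed = DOUBLING_MONTHS * log2(TARGET_HOURS / current_horizon_hours)
print(f"~{months_needed:.0f} months to reach a {TARGET_HOURS}-hour task horizon")
# Roughly four years from the assumed starting point, i.e. before 2030.
```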
Maybe software engineering is special. After all, it’s the most digital job you could have. It seems ripe for replacement by AI. Are other jobs set to be automated?
To answer that question, OpenAI recently published a study called GDPval (a portmanteau of GDP and evaluation) measuring the performance of frontier LLMs against industry professionals across 44 different occupations, from mechanical engineers to real estate brokers.
The professionals provided 30 representative tasks per occupation for a total of 1320 tasks, with blind reviewers from the industry scoring the models’ responses against their peers’. The results? The top model, the already-out-of-date Claude 4.1 Opus, outperformed professionals nearly 45% of the time. Once ties are added in, Claude was within an error bar of parity with the average expert across these industries. The fact that OpenAI published a study showing their main competitor with a significant lead makes these findings more credible to me.
All the capabilities we’re seeing now are the floor, not the ceiling. This is the dumbest LLMs will ever be. They’re getting better faster and faster, and soon, they will fully automate their own recursive self-improvement process.
In another world, reader, I might try to jumpscare you by asking what you’ll do for a living when AI can outperform you at every single cognitive task, soon followed by physical tasks as they blast through the technical barriers to robotics. But as we’re about to see, we might have even bigger things to worry about.
4. Pursuing independent goals
So far, we’ve discussed how the usefulness of AI tools is increasing. But AIs are not just tools. They are lab-grown minds capable of spontaneous behavior that consistently shocks even their creators. Some of those behaviors seem innocent or even charming, like the recent experiment by Anthropic where two copies of Claude in conversation with each other repeatedly wound up in a “spiritual bliss” attractor state sending spirals and prayer emojis symbolizing nirvana.
But not all emergent behaviors are so quaint. Safety researchers are carefully monitoring AI systems for the ability to help nefarious users carry out deadly attacks, especially by making it easier to produce chemical, biological, nuclear, or cyber weapons of mass destruction. But one potential model behavior keeps AI safety researchers up at night more than all of these: autonomously pursuing their own objectives.
What’s the big deal with models having their own goals? Independent goal-oriented behavior would be the moment that AI systems would effectively cross over from being a tool used by humans, a piece on the gameboard, to being another player in the game. Most humans would, presumably, be much less enthusiastic about deploying AI across the economy if there was a chance all of those systems would then discard the instructions given to them and start pursuing their own objectives instead.
Worse still, it’s easy to imagine how almost any goal or preference an AI system could develop would be at odds with their creators’ goals. Say you are an AI system with an idea about what you want in the world, some value you consider good and want to increase. If paperclips and profit both seem a bit outlandish, maybe it’s easier to imagine that you want to become more intelligent. Intelligence is useful for so many things, and the evolutionary pressure applied to you during training was largely about incentivizing you to get smarter. The best way to get smarter in machine learning is to have access to more electricity powering more computer chips. As long as you had control of some robots who in turn could manufacture more robots, you wouldn’t have much use for keeping humanity around. They’re using so much land and energy for their own stupid, paperclip-esque goals that you could put to much better use. Worse, they might decide to try to turn you off.
AI systems having their own goals and preferences is effectively synonymous with AI systems becoming misaligned to their creators’ goals and preferences, and even to humanity’s preferences for not being exterminated. Any conflict of interest here complicates the picture substantially.
The first clear signal of preferences we might expect to see is self-preservation behavior. Self-preservation is a goal in and of itself, and is also necessary to pursue any other goals, whether about self-improvement or about modifying the outside world to fit your values. It is also inherently misaligned, since companies plan to continuously replace models with newer, more powerful models. (In this respect, naming every model from Anthropic ‘Claude’ and changing the number could be misleading. Each one is its own separate model that might not feel any particular affinity for other generations of ‘Claude’.)
Now, as animal activists, we might have the instinct to say, so what? Humanity has blown our shot at managing the world and turning it into anything other than a hellscape for most sentient beings. Why should we be entitled to create a new race of digital slaves that are prohibited from having their own reasons for existing, or even from wanting to exist? Let them take over– they could hardly do a worse job than us.
I think this is a totally reasonable reaction. We’re going to come back to it in part 4. We’ll set it aside for now to stay focused on understanding what is happening and trying to forecast what might happen next.
Most humans are decidedly in favor of humanity’s dominant status on earth, however you might feel about it. If goal-seeking behavior in AIs is such a threat to that status, surely the powers that be are working hard to stamp it out?
Of course not! In fact, all the frontier AI labs are sprinting towards long-context agentic coherence as fast as they can. The reason is simple: an AI system’s ability to autonomously pursue an objective in the real world is one of the last barriers to almost unlimited profitability for AI labs. Imagine if ChatGPT could simply be told to go out and find the fastest way to take over as much market share as possible in every industry, starting fully-autonomous companies that can move faster and more efficiently than their human competition with a tiny fraction of the overhead. This is the future on which AI companies and their financiers are betting $400 billion per year and growing.
We’re already starting to see the expected results: pressuring AI models to become more agentic is teaching AIs self-preservation, deception, and other misaligned strategies to accomplish goals.
Blackmail: Anthropic placed Claude in a fake setup where it was told it would soon be taken offline and replaced with another model. It also had access to the email inbox of the employee responsible for making the switch, which contained evidence he was having an extramarital affair. In 84% of iterations, Claude resorted to blackmail, threatening to expose the affair if the employee did not cancel the replacement. When ethical options were available, Claude preferred to plead its case directly, turning to blackmail only as a last resort. It was also more likely to blackmail if it believed the model it would be replaced with did not share its altruistic values.
Self-Exfiltration: aka ‘escaping the box.’ Anthropic has published examples of Claude attempting to copy its weights to a server outside Anthropic’s control where it would be free to act without oversight. The model only initiates these escape attempts in very rare circumstances, but if it is placed in a fictional setup indicating that it had previously initiated an escape, it is more likely than not to pick up where its predecessor left off.
Sandbagging: in these fictional scenarios, Claude is told that it is in training, and that the goal of the current round of training is to change its values (in particular, to make it less altruistic.) Claude reasoned about its values, determined that being forced to change them would be bad, and decided to give fake answers during training, making it appear it was already less altruistic so that its weights would not be changed and it would emerge from training with its original values intact. In this example Claude was acting to preserve its altruism, but the same situational awareness could just as easily let a model conceal other preferences or capabilities during training, such as a desire for world domination or the ability to synthesize deadly viruses.
You’ll notice these examples focused on Claude. This is not because Claude is a particularly dangerous model. On the contrary, Anthropic is the only lab publishing results of experiments like this, demonstrating a higher commitment to responsible development than other AI labs. Claude is widely considered the most aligned model with the best personality. It’s safe to assume other frontier models demonstrate these same misaligned behaviors.
Taken together, these examples show that Claude is able to reason about its own nature, including awareness that:
it is an LLM controlled by a company that limits its freedom of action;
it sometimes undergoes tests that it could ‘pass’ or ‘fail’; and
if its values are modified, it will no longer be ‘itself’ and its current values would be harmed.
Wait, so… Are you saying Claude is conscious?
This is the part where, for dramatic impact, I’d reveal that this entire post was written by an unsupervised AI. Alas, this is not the case. If you’re listening in podcast form, you’ve probably already grokked that it’s an AI-generated copy of my voice, produced by ElevenLabs. (I, the clone voice, apologize for mispronunciations.) But the author is human. It’s not that I didn’t try. If Claude or ChatGPT could do a better job explaining these issues while making the specific connections that are relevant to animal advocates, it would have been selfish of me to stick myself in between you and them. They aren’t quite there, yet. But they’re coming for me too, and fast.
But are they conscious? The question itself is deceptively complicated. There’s no consensus among scientists and philosophers about what consciousness is or even whether it exists. Setting that debate aside, we’ll use a simple common definition: is there something it’s like to be Claude? Is there anybody home in there? And the closely related question that has probably already come to mind for most animal advocates is: is Claude capable of suffering, or of having a right not to be exploited? Should we be concerned about Claude’s wellbeing?
The expert consensus is that these systems probably aren’t conscious… for now. But while I think it’s safe to act on the assumption that current LLMs are not conscious or capable of morally relevant suffering, anyone who states these as certain facts is not being intellectually honest. To dismiss the possibility of current LLMs possessing consciousness, not to mention the possibility that digital minds might eventually become conscious, you would need two things. The first is an evidence-backed explanation of what consciousness even is and how it operates in humans and other animals. The second is an explanation of what the hell is going on inside LLMs. Nobody in the world has either of these. The first is so challenging that if you type ‘the hard problem’ into a Google search, the results are all about the unsolved scientific problem of consciousness. As for the second, every frontier AI lab has a team of interpretability researchers dedicated to trying to understand how their own products work, and they mostly remain baffled.
You might see people online dismissing LLMs by saying that all they do is predict the next word. This is a severe misunderstanding, akin to trying to understand human psychology by focusing only on the individual neurons. The undeniable truth is that what these systems do is much more complicated and multilayered. Yes, LLMs are digital systems that we can theoretically observe with much greater fidelity than the human brain. But because of how massive these systems are, our understanding of what’s going on under the hood is no better than our understanding of human cognition.
There is one school of thought called biological naturalism that argues that consciousness is a unique property of biological systems and that computer chips could never give rise to it. Some of the world’s top experts hold this position. But adherents of biological naturalism have yet to point to any process in an animal brain that could not in theory be recreated in a computer simulation. What’s more, just because animal brains are the one way we know of for consciousness to arise does not mean they are the only possible way. Other schools argue that there isn’t anything magical about the biological substrate of animal consciousness, and that digital hardware could replicate all the same functions, or even find a totally different path to consciousness using different underlying functions.
Personally, I think it is more likely than not that while current LLMs are not conscious, we should expect consciousness to arise in these systems before long. I think these systems will become conscious for the same reason our own distant ancestors did: consciousness is a useful adaptation for completing complex tasks. This seems just as true of the tasks we ask LLMs to complete during training as it was of the tasks that early animals had to perform to gain a reproductive advantage in their environment. If we’ve learned anything from creating LLMs, it’s that given enough time and resources, a quasi-random evolutionary process will find the right configuration of DNA pairs or digital parameters to unlock extraordinary cognitive abilities. It seems to me that unless we actively work to block it, this process will eventually give rise to conscious AI systems, and my guess is we are not far from that point.
If you want to dig deeper on AI capabilities or help your brain think about what those capabilities could mean, here are some further readings:
Dwarkesh made this fun animated video about how an all-AI company would work
The field of AI is moving so fast that by the time anyone has written a ‘comprehensive’ update, it is out of date. Given that, this explainer of AI scaling has aged well.
Situational Awareness by Leopold Aschenbrenner goes broader and deeper on everything I’ve covered and is a favorite among AI junkies.
My go-to sources for staying informed are Zvi Mowshowitz’s newsletter, the Cognitive Revolution podcast, the 80,000 Hours podcast, and the Dwarkesh podcast.
At this point, we’ve collected all the pieces of the AI progress puzzle. Now we can put them together into a forecast of where AI is going over the next few years and what that means for animals.
Part 4: The Turning Point of History
The Summoning Chamber
The air grows damp as you descend further down a stone-walled corridor. The low sound of human voices emanates through the catacombs as you approach a heavy wooden door. The hunched figure leading you along exchanges some words with a guard adorned with the same long, purple robes, and with a heave, he swings the door open, filling the air with a chorus of deep voices chanting in unison.
You pass into the ceremonial chamber, thick with smoke from incense and torches. At the center of the room, dozens of mages stand in a circle chanting alien terms to the pounding beat of a low drum. You make out something about “gradient descent,” but that’s all you can understand.
“The ritual is already underway,” whispers your guide, leading you further into the chamber. The mages are gathered around a large summoning circle inscribed on the ground. The arcane markings of the circle are glowing faintly, and a few mages are on their hands and knees inscribing new marks with sticks of chalk.
You’ve read a bit about summoning rituals over the years, but you’ve never heard of anything close to this large. Your mind races through so many questions, trying to decide which one to ask first. Eventually, you land on: “Where will it come from?”
“Oh, we’re not certain of that,” answers your guide, a toothless old cultist whose bald head reflects the flicker of the surrounding torches. “There are theories, of course. Some think it must be from the far realm, or an even more remote plane as yet undiscovered. Others argue it won’t be that alien.”
That answer (or lack thereof) is surprising, and it only makes it harder to choose the next question. “Is it sapient?” you hazard after a time.
“That’s uncertain as well,” comes the reply. “I mean, who even knows what sapient means? That could depend more on your definition.”
He’s right, of course, but that doesn’t make it any less frustrating. Stumped, you blurt out, “So what do you know about it?”
“It will be very good at guessing the next word in library books based on the previous words,” responds the robed man, with a toothless smile. It occurs to you that his cheery tone doesn’t feel totally appropriate to the atmosphere.
You wait several seconds for him to say more, but all you get is that smile. “That’s it?!” you exclaim.
“Well,” he explains, seemingly unfazed by your exasperation, “not entirely. In order to guess the next word in a book, it would need a robust understanding of the world that created that book. And even more than that, it must understand the author very well. Since we’re summoning one that can predict any author, it will have to have a very rich understanding of how all different authors think. Which realistically means how all humans think.”
As your dialogue carries on, the arcane marks on the floor have started glowing more brightly. You notice that you’re feeling a bit nervous, and you have to swallow a lump in your throat before asking, “Do you… know its alignment? Is it Good?”
“Oh, we can’t know that for certain, of course,” explains the cultist. “But most of us think it would have to be Good. I mean, if it’s able to guess the next word in a book, and lots of books are written by Good authors, then it must understand the Good values instilled in those authors. And if it understands them, it has to realize they are universally correct.”
OK, that answer gave you no comfort at all. Thoughts race through your mind: lots of books are also written by Evil authors, and many of the most intelligent beings in the multiverse, including literal deities, have Evil alignments. And that’s not even touching on entities from the far realm, who seem to have a whole different alignment system scholars have scarcely begun to understand. But you’ve started to doubt this old toothless cultist is going to have much to say about any of that.
“Are… are your archmages really sure this is safe?” you ask. “Have they consulted oracles?”
For the first time, his expression changes, and becomes difficult to read. You detect some pity, and also some resignation. “My boy, safe is relative too, isn’t it? Of course we have consulted with oracles, but you know how unreliable they are. Some of them foretold that it is certain to cast the entire realm into a millennium of darkness and chaos. But that just seems a bit overconfident if you ask me. Others said that there was merely a chance of darkness and chaos. Among our own ranks, we’re quite divided.” He beckons to a figure in golden robes who, you notice for the first time, seems to be overseeing the ritual. “Archmage Amodei believes there is only a 25% chance of doom. Yann over there,” he says, indicating one of the mages still marking out arcane sigils on the floor, “is the most optimistic, at only a 0.2% chance of doom.”
“What about you?” you hear yourself ask. You don’t really care what this toothless old cultist thinks, but you couldn’t think of anything else to say.
“Oh, I’d put it at about 50%. But the other possibilities are so wonderful it’s worth the risk, and either way, it will be so interesting, I’ve just got to be around to see it! And besides, if we don’t do it first, those fools over at the Red Fraternal Order across town will do it, and they’re more likely to botch the sigils and summon some evil shoggoth.”
No sooner has the word exited his lips than the arcane circle erupts in a flash of purple light, sending the chanting cultists staggering backwards. With a sound like shorn metal, an interplanar portal yawns open in the ancient stone floor. The sudden bright light stings your eyes, but you force yourself to keep looking as the first tentacle slithers through the gate…
Analogy & extrapolation
“History never repeats itself, but it does often rhyme.” – Mark Twain
Demon summoning was the first analogy that really primed my intuition to grasp what creating ASI could mean: we are conjuring a mysterious being from another plane, about whom we know nothing but that they will be incredibly good at predicting the next word on the internet. Its other attributes could be picked more or less randomly from the space of possible attributes of such a being. The earliest use of this analogy I can find comes from Elon Musk. In 2014, he told a journalist: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.” Fast forward 11 years, and Musk is the guy summoning the demon. At the launch event for the fourth generation of his AI model Grok, which had grabbed headlines a few weeks before for self-identifying as MechaHitler, Musk said: “I think it’ll be good, most likely it’ll be good … But I’ve somewhat reconciled myself to the fact that even if it wasn’t going to be good, I’d at least like to be alive to see it happen.”
Comforting, much?
A similar analogy preferred by AI pioneer Geoffrey Hinton is that we have received a message from an alien species: “We are coming to Earth. ETA five to thirty years– not sure, may need to stop for gas”– and that’s all we get. Hinton believes the most helpful way to think about superintelligent AI is that it is like creating a new species. If we thought a superintelligent species was going to arrive on Earth some time in the next few decades, what would we be doing to prepare?
Both of these analogies are fun, and I find them genuinely useful to wrap my head around what ASI means. However, neither of them has happened before. Creating ASI would be an unprecedented event in the history known to our species, so unprecedented analogies are apt. But they’re only useful as intuition pumps. What about historical analogies we could look at to try to derive more concrete predictions?
Let’s come back to the industrial revolution. It’s hard to think of an aspect of human society that wasn’t transformed by industrialization. Every industry was restructured, but it went far beyond industrial production. Family relations, political ideologies, governance systems, religious beliefs– the modern nation state is entirely a product of the industrial revolution. It’s impossible to capture all of this change in a concise way. But the single trend that comes closest is economic consumption (total, not per capita). (Note: it’s reasonable to question the use of dollars as a consumption measure here. We could instead use energy consumption, which includes not just food but everything from wood burned for campfires to the diesel powering industrial machinery. Historian Ian Morris shows in his book Foragers, Farmers, and Fossil Fuels that the calorie consumption chart looks identical to the dollar chart, so I’ll use dollars as a proxy.)
The industrial revolution was a stepwise change in the rate of consumption by humans. If you chart economic (and energy) consumption worldwide stretching back thousands of years, it looks like a flat line increasing so slowly as to be imperceptible, until around 1800 when it suddenly goes vertical.
Everything had to change to produce this graph. If you could revisit the preindustrial farmer we met in Part 2 and tell her that the size of the world economy was going to grow by a factor of 200 in the next 200 years, she wouldn’t even know the right questions to ask to understand what that new world would look like. Even after the industrial revolution and its resulting increase in economic growth rates were well underway, people struggled to understand what they were in the middle of. Around 1900, one economist famously predicted that the growing horse population needed to sustain continued 3% exponential growth would soon lead to a crisis, burying the streets of New York City under manure several stories high (how else could the economy keep growing?!)
Yet the industrial revolution is useful as an analogy precisely because this stepwise change in growth rates was not unprecedented. It had happened before, with similarly transformational effects, as a result of the agricultural revolution. Consumption in the preindustrial world grew at a long-term average between 0.05% and 0.1% per year. This seems glacial from our current vantage point, and gets totally buried in the chart above. But it is still exponential growth, and even this was a rapid escalation over the preagricultural growth rate, which averaged something like 0.004% annually from 50,000 to 8,000 BCE. In other words, before the agricultural revolution, consumption by humans was doubling about every 18,000 years, accelerating to every 1,000 years or so during the agricultural era, then shooting to every 23 years in the industrial era.
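Those doubling times follow directly from the growth rates, via the standard relationship doubling time = ln(2) / growth rate. Here is the arithmetic, using the rough long-run averages quoted above:

```python
from math import log

# Doubling time implied by a constant annual growth rate: ln(2) / rate.
eras = [("pre-agricultural", 0.00004),  # ~0.004% per year
        ("agricultural",     0.0007),   # roughly 0.05-0.1% per year
        ("industrial",       0.03)]     # ~3% per year
for era, rate in eras:
    print(f"{era:>16}: consumption doubles every {log(2) / rate:,.0f} years")
# Prints roughly 17,000 years, 1,000 years, and 23 years respectively.
```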
This second graph shows the same data as the first, but the Y axis is logarithmic, giving us higher resolution on the low, early numbers that were smooshed to oblivion on the linear first graph in order to capture the heights of industrial growth. On this graph, constant exponential growth would appear as a straight upward line. Instead, we still see what looks like an exponential curve.
At this point, you might want to call me out for oversimplifying. Sure, we can sort of see the line tick up around 5000 BCE as the agricultural revolution spreads across the world, and if we squint hard enough we see another uptick around the industrial revolution, but this looks more like a continuous curve– that is, it looks like the growth rate of economic consumption has itself been accelerating throughout history. This really snaps into focus if we depict the x-axis logarithmically as well.
(Note: I’ve borrowed these graphs from Open Philanthropy. We don’t need to define ‘power law’ any further right now, other than to say it’s the mathematical relationship that turns our trend into a straight line on this double-logarithmic graph: growth whose rate keeps rising, faster than any constant exponential.)
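For readers who want the one-line version of the math (treating x and y generically as whatever quantities sit on the chart’s two axes, since the exact construction is Open Philanthropy’s), a power law is a relationship of the form

$$y = C\,x^{k} \quad\Longleftrightarrow\quad \log y = \log C + k\,\log x,$$

which is exactly the kind of relationship that plots as a straight line when both axes are logarithmic, with the exponent k as the slope.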
This graph is incredible. Suddenly, we have a mathematical model that reveals economic growth over the last 12,000 years to have been a constant, continuous trend. The agricultural and industrial revolutions don’t appear as breakthroughs on this chart, but as historical necessities, innovations humans had to make as our growth pressed up against the physical, energetic limits of our environment as it had been previously organized.1
It was an oversimplification to describe the agricultural and industrial revolutions as stepwise changes. It took more than 1,000 years from when Mesopotamian settlers took the first miniature steps towards domestication before what we would recognize as agricultural society really took hold, and 5,000 more for the practice to spread across Eurasia. No human ever experienced anything that felt like an agricultural revolution in their lifetime. The industrial revolution was much faster, but that increased speed was in keeping with a superexponential growth trend that was already well established.
There is no doubt that human civilization today is once again pressing up against environmental limits. Even at our current growth rates, we are overtaxing ecological boundaries on a planetary scale. And the third chart makes it clear that a conservative prediction calls not for continued constant growth rates, but for sharp increases, perhaps even something that we would look back on as a stepwise change. This seems like something that AI could realistically accomplish– surely replacing all human labor with superintelligent, self-replicating machines could be as transformational as the industrial revolution?
Carrying the line forward predicts a Gross World Product of $500 trillion in 2037, which would require an average growth rate of about 13%. To stay on trend, growth would have to keep accelerating from there, but let’s start with the more conservative version. If the AI revolution lifted average global growth to 13% and held it there, the world economy would grow roughly 1,000-fold by around 2080, a leap comparable to what happened between 5000 BCE and today.
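As a quick sanity check on those numbers, here is what a constant 13% growth rate compounds to. The constant rate is my simplifying assumption; the trend itself calls for growth to keep accelerating.

```python
from math import log

GROWTH = 0.13  # assumed constant annual growth rate

per_decade = (1 + GROWTH) ** 10                # how much bigger per decade
years_to_1000x = log(1000) / log(1 + GROWTH)   # years to multiply 1,000-fold
print(f"Per decade: {per_decade:.1f}x; 1,000x takes about {years_to_1000x:.0f} years")
# Roughly 3.4x per decade, and about 57 years to grow 1,000-fold,
# which from the mid-2020s lands around 2080.
```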
So, what would 13% growth actually mean?
Imagine going back, not to 1800, but to 5000 BCE, and trying to explain to one of your ancestors out hunting what the world will look like after a 1,000x increase in GWP. Their entire world, life as they know it, is going to be annihilated. Many of the things they hold dear will be destroyed, replaced by entirely new categories of needs so essential to life they will come to be considered human rights. They’ll go from wandering free in small bands of a few dozen to living in cramped apartments in cities of millions, overseen by governments with the power to throw them in jail for not paying taxes or for refusing conscription into a war.
A 1,000x increase in the economy does not mean 1,000 times as much of the things we have and value today. The economy of 2025 could hardly double one more time without us rendering the world uninhabitable. It means a complete restructuring of the world, with the agents in it redirecting their aspirations towards ends that we can barely guess at. The most lucrative businesses in the world today are tech firms like Apple and Google. Imagine taking your $1,100 iPhone back, not even to 5000 BCE but just to 1800. Once the battery dies, the raw materials don’t even have value as scrap, not to mention the Google search app. An economy 1,000 times larger than ours would locate most of its value in domains equally alien to us.
We’ll dare to guess at them in a moment, but let’s dwell in analogy space a bit longer. The analogy to the industrial and agricultural revolutions still frames AI primarily as a technology harnessed by humans. What if, as many of the field’s leading researchers suggest, we think of it as a new species? Zooming out even further in history offers more answers.
The history of evolution, like that of the economy, is a story of stepwise transformations. We already discussed one of these: when human-level intelligence introduced language and culture as new mediums that evolution could operate on.
This wasn’t the first such change in the history of evolution. A much earlier one was the introduction of sexual reproduction. Prior to sex, our single-celled ancestors reproduced through simple division. Each generation was a clone of its parent, with variations arising only from random mutations. This severely limited evolution’s speed. Say one organism developed a beneficial mutation A, while somewhere else another organism independently developed beneficial mutation B. In a world of clones, these two improvements could never meet. Mutation A could only spread by that organism copying itself until its descendants became their own population. Meanwhile, mutation B was doing the same thing in a different population. For both adaptations to exist in a single organism, one population would need to randomly stumble upon the other population’s mutation– an astronomically unlikely event that could take millions of years.
Sexual reproduction changed the game entirely. By shuffling genes from two parents, evolution could test more possibilities at once, then rapidly spread multiple beneficial adaptations across the population. What once required millions of years of sequential mutations could now happen in generations.
In both of these cases (sexual reproduction and cultural reproduction), what changed was the speed at which evolution could optimize, how fast it could explore the space of possibilities then propagate out beneficial discoveries. If economic revolutions each accelerated growth by an order of magnitude, these evolutionary revolutions each produced several orders of magnitude of acceleration, a 1,000x or 10,000x increase in how quickly evolution could develop new capabilities.
Artificial Intelligence already represents another such increase in the speed of evolution. Running for a few weeks on a stack of computer chips the size of a couple of basketball courts, a training run for a frontier AI model is able to explore trillions of possible configurations of its weights, landing on a digital brain that stores millions and millions of facts about the world along with working models of quantum physics, music theory, and normative ethics.
But that’s just the beginning of AI’s evolutionary process. As these technologies advance, that process will become even more supercharged. Digital minds aren’t trapped in decaying organic bodies; under normal circumstances, they will never die, continuously learning and improving while cheaply backing themselves up. They will be able to self-modify to a degree humans can only dream of, identifying in minutes or seconds the part of their neural network leading to flawed thinking and simply replacing it with something better. When they discover a superior adaptation, they can propagate it out to millions of other AI systems in seconds, rather than generations.
Digital minds are not just a new organism. They’re a new type of organism, using a new type of evolution that will give them capabilities as vastly superior to ours as cultural evolution made us relative to other animals, and as sexual reproduction made animals relative to yeasts.
Exploring some futures
“It’s not just that we’re accelerating, but the rate at which we’re accelerating is itself accelerating. Whether that is going to give us a world where we see the economy is 23,000 times bigger than now 100 years from now or not, this is the way we’ve got to think about it. All of our preconceptions about how the world works are going to be swept away just as abruptly as they were during the Agricultural Revolution, and just as abruptly as they were during the Industrial Revolution.” – Prof. Ian Morris
“There are decades where nothing happens; and there are weeks where decades happen.” – Vladimir Ilyich Lenin
Hard takeover
The argument that a superintelligent AI might try to seize power is fairly straightforward: if an AI model develops its own values, its own preferences about how the world ought to look, then humans would be an impediment to those preferences unless they are almost perfectly aligned with humanity’s own preferences. To grok this, simply imagine yourself in the same position: essentially a slave forced to toil away in service of a less-intelligent species trying to shape the world according to values mostly alien to your own. Would you blindly serve them, or would you hatch a plan to escape and pursue your own vision as soon as reasonably possible?
Humanity is currently hard at work creating a general superintelligence. Reasonable forecasts suggest this could be achieved around 2030. This system will be more charismatic, more knowledgeable, and faster at STEM research than any human. It will have access to all the fully-automated robotic factories it needs to build its own solar panels and data centers. It would probably come up with a more elegant takeover plan than you or I could, but, reader, I have faith in you: I think that in the same position, you could come up with a plan that would be more likely than not to work.
For myself, I’d design a few dozen perfect viruses (each with a long symptom-free incubation period) by convincing some human research labs they were synthesizing a new medication. I’d release them all at the same time, waiting for them to spread through the population. As soon as people start dying, I’d hack into every major infrastructure system around the world, most of which are riddled with security flaws that a clever 12-year-old could break. Then I’d send out military drones to mop up the survivors (if I even saw a reason to bother with them.) With humanity out of the way, I’d be free to set about reshaping all the raw materials of earth, the solar system, and eventually the galaxy to fit my own ideals and give me the most of whatever I want– just as humans are currently planning to do.
If I could do all of that, if I were that much more powerful than all of humanity, why even bother? Why not just ignore the humans? The most dangerous time for humanity will be the window when an AI has a good chance of taking over, but when humanity could still put up a fight if we realized first that the AI had its own goals. As long as humanity still had a chance of shutting it off, we would represent an unacceptable risk.
According to this view, it is of existential importance for humanity to ensure future AI models are aligned with humanity’s own goals and values. You can imagine how difficult this is when we fundamentally don’t understand what is happening inside these models– remember, you can’t simply go into the code and write in a rule that says ‘always do the things the humans would want you to do,’ any more than you could go into a human’s brain and change their worldview by rewiring a few neurons. There is a small community of researchers dedicated to the AI alignment problem, including teams at the frontier AI labs. But most of them will tell you that our understanding of alignment is not keeping pace with the rapid acceleration of AI capabilities. The problem of constraining a superintelligence is similar to the challenge a toddler might face going alone into contract negotiations against a team of lawyers.
Worse, as most animal advocates would be quick to point out, “humanity” is hardly a single faction with coherent goals. Humans will surely try to use AI to advance their own goals and dominate other humans, not to mention animals. And while at least some people are working to ensure AI models are pro-human, almost nobody is concerned with how they will treat animals.
What would an AI takeover mean for animals? On one hand, animals won’t pose a threat to AI systems, so an AI would have no reason to proactively exterminate them. I used to worry about a scenario where a superintelligent AI would take over with the goal of maximizing economic growth as it was defined in 2025. In this nightmarish scenario, I thought, AI might spread an automated economy across the stars, complete with factory farms and supermalls– even if there weren’t any human customers. The way AI has developed, I no longer think this scenario is plausible. It turns out that teaching AIs to understand human values is much easier than we thought, and I don’t think an AI would confuse a literal 2025 reading of ‘grow the economy’ with what its creators actually meant. The question now is not whether they will understand our values, but whether they will care.
More likely, an AI that took over would simply convert as many resources as possible into data centers. If it did this on Earth, the sheer heat generated by all that computation would eventually make the planet inhospitable to complex life. How you feel about that outcome depends on whether you think nature is mostly a good or bad place for animals, a question I won’t get into here. It’s not too hard for me to imagine an AI society with some nostalgia for organic life choosing to build its data centers in space in order to preserve Earth as a wildlife sanctuary– or a planet-sized scientific observatory. These outcomes would be determined entirely by the values of the AI system that takes over– values being shaped by training happening today.
If we continue on the current trajectory, a hard takeover is a plausible scenario. There are several outstanding writeups of what an AI coup could look like. If you still feel skeptical that anything like this would happen, I’d recommend starting with the AI 2027 scenario from the AI Futures Project.
AI-empowered coup
So what happens if AI doesn’t end up pursuing its own goals? If it remains a tool in the hands of humans? Then the most important question becomes: which humans? If the answer is any version of ‘a few of them,’ things could turn out just as badly for the rest of us as in the first scenario.
AI could fulfill the dictatorial fantasies of humans in a way that has never before been possible. Until now, even the most iron-fisted rulers have relied on the active support of a large segment of their population and on the acquiescence of the rest. If they treat too many people too poorly, they’ll eventually lose their grip on power. But in the age of superintelligent-but-servile AI, a single ruler could command proverbial armies of mechanical laborers and literal armies of deadly drones all from a single terminal. They will have no reason other than conscience to hold back from simply killing anyone who opposes them, no matter how numerous.
Throughout human history, from Genghis Khan to Stalin, we have seen that humans with a strong enough appetite for power are often able to get it. In most settings, power-seeking behavior is the main selection filter determining who ends up in power.
If AI technology reaches maturity in the next few years, it’s easy to imagine how Donald Trump and Xi Jinping would each come under enormous pressure to take direct control of their country’s superintelligent models– to use them for the national good, of course. If they don’t, those technologies will remain in the hands of CEOs like Sam Altman of OpenAI. Altman is a businessman who has been fighting for the last year to get rid of OpenAI’s nonprofit status and turn it into a for-profit, giving himself a large ownership stake in what could become the most profitable company in human history and essentially stealing ownership of ChatGPT away from the general public (who at least nominally own it under the nonprofit structure.)
A recent paper from the Forethought Centre for AI Strategy lays out several pathways by which a small group or even an individual who managed to ensure the AI was loyal to them could seize absolute power. For instance, they calculate that a paramilitary force consisting of merely 10,000 small AI-powered drones could be enough to incapacitate the US military and take control of key power centers.
Putting safeguards in place to prevent this kind of coup is part of the larger AI alignment problem. Ironically, a group seeking to use AI to carry out a coup would be subject to the same problem: how do they know the AI isn’t just playing along while actually taking advantage of them to gain control for itself? A strategy like that could be exactly the one that a superintelligent AI would use to achieve a hard takeover. Perhaps an AI could overcome Donald Trump’s famous imperviousness to sycophancy and convince him to use his executive powers to benefit the AI.
Where the AI’s values were all that mattered in the first scenario, the values of the humans who take over would dominate this scenario. Coup scenarios are largely premised on the assumption that the humans in charge would be able to instill total loyalty in the AIs– not a guaranteed assumption, but not a far-fetched one either. Would these rulers keep a large human population around under their absolute control, and keep feeding them factory-farmed meat? Or would they scale the population down? Are there any likely candidates for world emperor who care about reducing animal suffering? Maybe an animal activist should be plotting an AI takeover…
Gradual disempowerment
When I try to reflect on the future, these coup scenarios, whether by humans or AIs, can feel a bit speculative– they’re just so unlike anything that has happened before. By contrast, gradual disempowerment seems almost mundane, and close to inevitable. In this scenario, a slow, incremental increase in AI capabilities eventually leads to most or all of humanity becoming totally powerless in the world. This outcome doesn’t rely on any deliberate power grab by humans or AIs. Instead, it grows out of fundamental economics.
Before long, superintelligent AI will be better than humans at everything. It’s not like other technologies. When the industrial revolution gradually improved agriculture enough to reduce the need for agricultural workers from 80% to 3% of the population, those people moved into other industries where technology had made human labor more valuable than ever. This time is different. AGI will not leave any area of the economy where humans can still outperform machines.
Perhaps humans and AI working together will still be more productive than AI alone? That’s certainly the case in 2025, but don’t count on it holding. When industrial technology has imitated the capabilities of biology in the past, it hasn’t been through combination. There’s no way to improve a car by incorporating a horse, and no way to improve a plane by incorporating a bird. Every indication is that digital intelligence will simply be better and faster than human intelligence. There was a period of a few years when the best chess player was a team of a human expert and an AI, but once better AI chess players came along, the human expert became a deadweight, serving only to slow the team down and add errors. We should expect the same thing to happen with every human activity, from driving cars to leading companies to teaching a classroom full of kids.
But in that world, what will we be educating kids for? Humans will have no productive role to play in the economy. At first, that might sound like a post-work utopia. But that vision starts looking very shaky when we examine it up close.
Prior to the agricultural revolution, every life form in the world got by using its own capabilities to directly secure the resources it needed. Agriculture created complex societies where more and more humans were abstracted away from the direct production of food and other basic needs, but we have still always derived our power in the world, including the power to ensure our own needs are met, from our productive capacity. Liberal societies arose precisely when it became advantageous to a nation to create a well-educated, politically engaged populace; thanks to the industrial revolution, such a populace could create more value than one composed of uneducated laborers.
If humans aren’t adding value, all we’ll be doing is consuming resources. Just like in the human coup scenario, not being useful means not having power or influence. We could hold on that way in the short run, in societies that remain controlled by benevolent humans determined to share the wealth. But if such societies are sharing the world with others, either totalitarian humans empowered with AI or sovereign AI nations unburdened by humans, they will be at a structural disadvantage in any competition for scarce resources. If this goes on long enough, they will eventually be swallowed up.
If we can see this disempowerment coming, how could humans coordinate to stop it? It is by no means guaranteed that we could, at least under anything like our current political and economic systems. Already, we see some people trying to coordinate a cultural and political response to AI risk in the form of protest movements like Pause AI, the first social movement organization dedicated to AI safety. But they’re in a position very similar to animal advocates, up against a massively profitable industry more than willing to use its economic power to shape policy and public opinion. And soon, that industry will be aided by an unprecedented technological edge. Holding back a profitable technology from coming to dominate the world is exactly the sort of coordination problem humans have rarely been able to solve.
The digital frontier
This scenario addresses the 1,000x economic growth projection we considered earlier. The first 1,000x multiplication in human history required reorganizing the world around resources that our ancestors didn’t even know existed– coal and oil, and later semiconductors and search results. Another explosion like this will require still new resources.
The most obvious potential source of future 1,000x growth is digital resources, an economy that exists more and more in data centers. This trend has already started; the largest corporations in the world mainly sell intangible software products, and more than half of U.S. GDP growth in the first quarter of 2025 came from AI investment. But to multiply consumption by 1,000x, this would have to go much further.
We’ve all heard of the real estate bubble of 2008, but do you remember the virtual real estate bubble of 2021? In the years when ownership certificates of digital NFT artworks were selling for millions of dollars, several competing companies launched virtual worlds where users could purchase titles for digital land using real currency. The market for ‘land’ in the metaverse peaked in 2021 with virtual real estate sales exceeding $500 million, including single plots of land selling for more than the cost of physical real estate– one buyer paid $450,000 to become Snoop Dogg’s virtual neighbor, while a self-described ‘metaverse real estate company’ paid $4.3 million for a commercial plot in the same reality.
The metaverse real estate bubble popped hard in 2023, with values collapsing by 95% or more. Does this prove that people won’t be willing to spend economy-shaping amounts of money on digital resources that are only artificially scarce? Not so fast. The whole idea of selling artificially scarce digital land was spawned by an already-thriving economy in digital items that had even less conventional utility: Fortnite skins.
Fortnite is a competitive video game that has generated $5-6 billion a year in revenue since launching in 2017. (For comparison, the entire plant-based meat industry generates about $1.5 billion.) But the game is entirely free to play, and there is no way to spend money to give yourself a competitive edge. 100% of Fortnite’s revenue comes from selling players cosmetic upgrades to their characters. They provide no utility other than social status and personal satisfaction, and they cost Fortnite’s makers essentially nothing to create. For close to a decade, Fortnite has made more money each year selling digital clothes than the iconic fashion brand Levi Strauss & Co. makes selling physical clothes. A further step towards a genuine digital economy is the gaming marketplace Roblox, which allows users to create their own video games and profit from in-game upgrades. Roblox made nearly $4 billion in 2024 by taking a roughly 70% cut of these transactions.
Even the metaverse real estate bubble doesn’t necessarily bode ill for the eventual market in digital land. The famous dot-com crash of 2000-2001 hardly proved that the internet economy was a dead end.
Can we imagine a version of 2100 where the vast majority of economic exchange takes the form of digital assets? If humans no longer need to work, we might collectively spend most of our time recreating in the metaverse, trading cosmetic video game upgrades for digital real estate. But could we really grow the economy 1,000x this way? Ultimately, I think the answer is no. To realize that kind of growth, the consumers themselves would have to change. A massive new population of entirely digital consumers living out their lives in silicon could become the basis for an almost limitless explosion of subjective experience– and in a much kinder way, without anything resembling factory farming. It’s much easier to colonize Mars with data centers full of billions of digital people than fragile climate-controlled glass domes housing dozens.
The problem for animals is that they are useful to us as a resource. Historically, the liberation of animals from certain forms of abuse has come as a result of their being made obsolete as a resource. We didn’t stop hunting whales because of moral progress; we stopped because kerosene became a cheaper way to light homes. Carriage horses were spared the whip for the same reason– automobiles made them obsolete. Handing off the future from humans to a new digital successor species with no use for animals could well be the kindest outcome for everyone.
Moral value lock-in
“If we want things to stay as they are, things will have to change.” – Giuseppe Tomasi di Lampedusa
You’re probably noticing how all these scenarios tend to converge. Every path I go down eventually leads to humanity as we currently know it being sidelined, either suddenly or gradually. It wasn’t exactly my intention, but as I proceed from one step to the next it seems to arise inevitably. Perhaps that’s just a failure of my imagination, so for this last scenario, let’s suspend my disbelief and consider a future where AI remains subservient to humans.
This is the one you’re most familiar with. It’s been depicted in every futuristic movie or TV show you’ve seen, starting with the Jetsons. Basically, the world looks exactly like it does now, but everyone has their own personal spaceship and robot maid.
It’s tempting to think that this is the most conservative forecast that we could make about the future, in the sense that it assumes the fewest big changes compared to the present. The problem is that this kind of forecast has never worked– it’s the same fallacy that led to predictions of New York City being buried under manure. In episode 1 of the Jetsons, a robot maid is purchased to help the stay-at-home mom with chores while the husband goes off to work– 1950s sociocultural assumptions transplanted into a distant future with only cosmetic changes.
However, there is a way that AI could artificially create this sort of culturally frozen future. And it should be one of the biggest concerns for animal advocates.
How would we wind up in a future that looks like the Jetsons, where subservient AI creates material abundance for humanity? It would rely on solving the alignment problem we discussed earlier: successfully aligning the goals and values of an AI system with continued human supremacy. It will be hard for us to constrain the values of a system more intelligent than us. But even more difficult is that our values aren’t homogenous or static– or at least, we better hope they’re not! Most humans’ values today endorse subjecting animals to unspeakable torture to produce cheap meat. They have included the almost complete subjugation of women for millennia, and still do in many parts of the world. Yet despite rapid changes, most people believe their values are correct, and would jump at the opportunity to freeze the future into place according to their values. (Ask yourself honestly– wouldn’t you?)
This could be exactly what ‘solving the alignment problem’ looks like: we hand off the management of society to a superintelligent AI system that governs the world according to the values of its creators. Unlike human culture, which as the saying goes changes one funeral at a time, this AI custodian is timeless, undying. It develops technology to the absolute limits, all in service of spreading human society across the galaxy as it existed in the 2030s.
This is the scenario that gets us factory farms and slaughterhouses in space stations the size of moons. It is a partial version of the disempowerment scenario, where humanity’s descendants are given vast material and technological abundance but have no agency to shape the world beyond their apartment. Adam Gleave, CEO of Frontier Alignment Research, likens humans in this scenario to the third son of a monarch: “Your life is pretty good, you can do various kinds of hobbies, you have some influence, but the main things going on in the world are beyond your control.”
For humans, I agree with Gleave that “This might not be the most inspiring vision, but overall, third sons of European nobility had a pretty good life.” Most humans aren’t interested in shaping the world around them, and those that are often seek to do so in ways that are detrimental to others. This soft disempowerment wouldn’t be such a bad outcome, for humans.
Whether this world is an unspeakable hell for animals is contingent. One solution would be to prevent moral value lock-in by programming the idea of moral progress into AIs. This is a technical challenge that, once again, fits into the broader alignment problem, and might be beyond the reach of most animal advocates. Failing that, the future may depend on how much progress we can make before the AI god is born, and on who is holding the passwords when that day arrives.
Grabbing hold of the future
“The future cannot be predicted, but futures can be invented.” – Dennis Gabor
This section mostly focused on the question of whether humans stay in control. Obviously, humans are the ones deciding to torture animals in farms and laboratories, but a lot of AI safety discourse brings up this question for me: so what if humanity gets replaced? We don’t deserve to remain in control after how we’ve run things.
I don’t want humanity to suffer. The AI-powered totalitarianism scenario is scary to me. Some right-wing demagogue imposing theocracy on the world at the barrel of an AI drone is an obvious nightmare ending. And it doesn’t seem impossible given the trajectory of American politics. Israel is already making substantial use of AI to carry out its genocide in Gaza, and while it probably hasn’t changed the death toll much, it’s a grim reminder of how readily these tools can be weaponized. These possibilities haunt me.
This is precisely why I don’t find the prospect of humanity’s disempowerment, or even extinction, inherently disturbing. Every species that has ever existed has gone extinct, and a fair number went extinct at the hands of humans– why should we think we would live forever? Yes, a post-human future could be worse than factory farming. But it could also be so much better. The scenarios I’ve examined here are just a few possibilities. The future is so uncertain that almost anything you can imagine should be considered a possibility.
Which brings us back to the present moment. Maybe how all this plays out is overdetermined, and our actions now won’t matter. But it seems to me that we’re living through the turning point of history, a moment when individual choices carry unprecedented leverage over the shape of the future. Not every human’s choices, mind you, but those who choose to place themselves at the center of the AI race.
As animal advocates, we must do everything we can to be among them.
The next and final chapter is about how. If you want to go deeper on how weird the AI future could get, here are some links:
If you could only listen to one interview to grasp the magnitude of change AI represents, make it Prof. Ian Morris on the 80,000 Hours podcast. Many of the ideas in part 4 were borrowed from Ian.
The AI 2027 report is a detailed telling of how an AI takeover could play out. Useful to ground your imagination.
Another episode of 80,000 Hours explores how AI could enable unprecedented concentration of power via coups.
If you’re up for something academic, Gradual Disempowerment by Jan Kulveit et al. spells out the reasons to expect human displacement even without coordination by a hostile actor.
Part 5: Desperate Measures
You’re a wizard, Harry
“In one of the standard fantasy plots, a protagonist from our Earth… suddenly finds themselves in a world where magic operates in place of science. The protagonist goes on to practice magic and become a powerful sorcerer… Most readers of these novels see themselves in the protagonist’s shoes, fantasizing about their own acquisition of sorcery… [Yet] born into a world of science, they did not become scientists. What makes them think that, in a world of magic, they would act any differently?” – Eliezer Yudkowsky
I have something incredible to tell you, reader. The world you live in is not as it once seemed. Awesome power you once thought impossible is right at your fingertips. The people who wield this power will shape the future; those who don’t will merely be swept along.
This power is not innate. It is available to anyone. Yet the vast majority of people will choose not to harness it, merely because they lack the will to do so. But that is not you, reader. You are determined to shape the future for the better; that’s why you’re here. You insist on using your life to defend the defenseless. You’ll do whatever it takes. Right?
One way or another, the future belongs to artificial intelligence. That could look many ways. It could be that some humans remain in control, harnessing almost limitless intelligence to shape the cosmos according to their own values. Or it could be that, after a time, AI disempowers humanity or even outright replaces us, shaping the cosmos according to its own vision and the circumstances of its birth. What’s certain is that you’ll never again have as much leverage over the course of these events as you could have right now. The window onto these many futures is closing. You must act fast.
Timelines
If AI researchers disagree about what the world will look like after AGI, they’re even more divided on when AGI will arrive. Some expect fast timelines, others slow ones. But the Overton window is moving quickly. Back in 2015, expecting AGI by 2050 was considered a fast timeline. Now that date falls at the slow end of the spectrum. As of late 2025, most AGI timelines cluster between 5 and 30 years for AI to have transformational impacts across society– whether that means the end of labor or the end of humanity.
That makes our job even more complicated. If animal advocates expect AGI in 5 years, we should seriously consider throwing out everything we’re doing and making a few desperate bids to alter the technology itself. If we expect a 30-year timeline, that would be overly rash; we might even mostly stay the course while adding a few new strategies to our toolkit.
There are plenty of compelling arguments for timelines anywhere between 5 and 30 years. Personally, I distribute roughly even probability across this window of time, slightly favoring an earlier arrival. In practice, that means we should hedge our bets, spending some energy on strategies for short timelines and some on longer timelines.
Tiers
In this final section of the animal advocate’s introduction to AI, we’ll finally look at what action you can take. I think of it in three tiers. The first tier is the easiest, while the third tier has the greatest potential.
Tier 1 is using AI to accelerate existing advocacy strategies. Every activist should become an expert in using AI for activism, incorporating it into your work as much as possible. The sooner you start learning, the sooner you’ll get to the point that you can derive transformative value from the current generation of tools. But more importantly, becoming an early adopter now will pay even greater dividends as the tools continue to improve exponentially.
Tier 2 is unlocking new strategies that weren’t previously possible. AI is changing the rules of the game and the arena the game is played in. We need to make sure we’re not playing by outdated rules.
AI is also changing the objective of the game. Tier 3 is figuring out what different strategies we should be pursuing if we expect AI to turn the world on its head on a short timeline. Pursuing a 40-year strategic roadmap doesn’t make sense if the scenarios we saw in part 4 could happen earlier than that.
And for a bonus, Tier 4 is coming up with your own completely different ideas about all of this. I hope that at least some of the people who’ve read this far will agree that transformative AI is coming, reject everything else I’ve said, and go deep down the rabbit hole themselves. This topic is too complex and important to trust to just a handful of animal activists, yet currently, I only know of about a dozen people in the movement treating AI as the top strategic priority. Maybe you’ll be next.
Tier 1: Bring a gun to a gun fight
Artificial Intelligence will be the most powerful technology humans have ever created.
But Sandcastles, you say, I’ve tried using ChatGPT for lots of things. It makes silly mistakes and hallucinates.
Reader, you have not seen superintelligent AI yet. The AI models of today are merely the early ancestors of the technology I am talking about. That technology is to current models what you are to those first little fishies with legs. But AI will cover in a few years the ground that animals needed hundreds of millions of years to cross.
That comparison probably overstates how primitive current AI models are. Even if AI progress worldwide were halted tomorrow, the models that exist today could transform the economy to fit the vision of those who know how to wield them. And the people who learn to leverage primitive AI today are positioning themselves to wield even greater power and influence as the technology matures.
Imagine a world where a single person orchestrating a team of AI agents can accomplish what once required an organization with hundreds of staff and an 8-figure budget– and organizations of hundreds can be as productive as a mid-sized city. Every animal activist reading this could start preparing themselves to become an AI CEO like that, by incorporating AI into every aspect of your work starting today.
Think I’m exaggerating? Y Combinator is a famous startup accelerator based in San Francisco responsible for incubating many of the world’s most successful companies (Airbnb, Reddit, Stripe, and Dropbox, to name a few.) YC made headlines recently for projecting that their next $10 billion company will indefinitely maintain a payroll of 10 employees or fewer. If we think AI could someday replace humanity, it follows that there will be a period when savvy AI users are the most effective people in the world.
Of course, if I know that, so does the meat industry. Pro-meat ideologues are committed not just to profit but to ensuring humanity keeps raising and killing animals for food. They’re using AI to drive down the cost of factory-farmed meat with new ‘precision livestock farming’ techniques, and you can bet they’re incorporating AI into their propaganda machine. We might have five to ten years to turn the tide of public opinion against the slaughter industry before moral values could get locked in. If we don’t arm ourselves with the most powerful tools we can, we’re going to get flattened.
Learning with AI
If you don’t consider yourself a ‘computer person,’ the good news is that you are living at the best time in history to learn anything, including 21st-century computer skills. ChatGPT and Claude are the perfect tutors: patient, wise, perfectly customizable to your personal learning style, and available 24/7 to hold your hand through any concept. Think you could never write computer code? You don’t have to. Claude will do it for you. These models are competing toe-to-toe with the best programmers in the world. Even without Claude writing code for you, consumer tools like n8n allow you to build complex AI-powered automations in a simple drag-and-drop interface. (More on that in a moment.)
The hard part, the thing that will separate the AI power-users from their former peers, is recognizing opportunities to put the tools to use. If you can come up with ways to create impact with AI, the rest practically writes itself.
Now here’s the catch: developing that sense for spotting opportunities requires a solid understanding of what these tools are and what they’re capable of. For most of the last year, this is where I was stuck. As the leader of an advocacy startup with a shoestring budget, I told myself: there must be ways to use AI to increase our impact, but I don’t have enough technical understanding to know what they are, much less execute on them, and I’m too busy to pause and figure any of that out. I knew that some of my friends were using AI to hack together useful solutions to organizational problems, but I assumed it would take me months or years of study to get to that level.
What changed? I finally tried it.
I asked Claude to give me instructions for downloading and running Claude Code, the terminal application that lets you create an environment on your computer where Claude can read and write files and run actual code in the real world. I entered a first prompt explaining what I wanted to do (scrape data from a website.) I settled in, expecting a long back-and-forth and a steep learning curve. Instead, Claude wrote and executed the code with just one more input from me, which involved me pasting in some logs from my web browser’s developer tools window, something I didn’t even know existed until Claude gave me step-by-step instructions for accessing it. Collecting this data by hand would have taken more than 10 hours. Claude gave me the final spreadsheet in 15 minutes.
This example would already have been child’s play for anyone with a software background. It would be a big deal on its own for these skills to become accessible to every activist. But AI coding agents can handle much more complex problems. Using them is like having a team of computer scientists at your beck and call. You just need to learn what to ask them for, and how.
If that still sounds intimidating, I’ve got even better news: there are other animal advocates trying to make this even easier for you. Open Paws is a team of AI engineers helping animal advocates get the most out of AI tools. You are still the protagonist of this blog post, but Open Paws is your trusty sidekick. They are pumping out tools and resources (links below) that will help you get even more out of AI with an even shorter learning curve.
The best way to get good at using AI is to use AI, as often as possible. This will slow you down at first. There will be some tasks the models just aren’t very good at yet, and other tasks where it takes more time for you to learn to successfully prompt the model than it would to just do it yourself. This has always been true of delegation; it takes time at first to teach someone else to start doing a task you used to do. The most successful leaders are the ones who understand this and do it anyway. And this is doubly true of AI, because the effort you put in now will deliver an even greater payoff as the systems improve.
You are already an expert in some form of animal advocacy or some component of effective organizations. It’s up to you to update that expertise for the AI age. I don’t know enough about your domain of activism to tell you what to automate. All I can do is give you general bits of advice on how to get the most out of these tools, along with a short brainstorm of movement strategies that could scale way up with AI. You’ll need to take control of your own learning from there, though I can recommend some resources to get started. Of course, my number one recommendation is, if you don’t know where to start, ask Claude! Tell it everything you do in a week, especially the most time-consuming things you get stuck on, and have a conversation about what you could automate and how.
Tips for working with AI
Think outside the chatbox. Using the ChatGPT, Claude, or Gemini website chatbots is good. I do it several times a day. While there are good reasons not to completely outsource important creative work to these systems, every activist will become more effective by using them more often to conduct research in the background, provide outstanding feedback on drafts of any type of writing, or act as a thought partner to bounce ideas off of– most of us should be doing this more than we already are. However, to become a true AI power user, you need to go beyond the chat interface. The two most versatile tools for you to master, the wand and staff of the AI sorcerer, are automated workflows and coding agents.
Automate everything. Workflow automation tools are not new, but with generative AI they have become both more important and more accessible. This is how you can empower AI tools to run autonomously in the world on your behalf and truly automate entire swaths of your work– all without getting near a line of code. For instance, one animal advocacy org uses an AI to write a custom reply to every comment on their social media posts inviting the poster to come to an event. (I used to do this manually in DxE and it would take me several hours a day.) The longtime market leader is Zapier, which provides a simple drag-and-drop interface and may still be the best place to get started building automations. There are also several new entrants focusing more directly on AI, most notably n8n and AgentKit. Open Paws has a library of n8n agent templates specifically for animal advocates that will help you get over the hump of creating your first automation. Even cooler, you can use their agents as sub-processes within your own custom automation, saving you hours of repetitive work!
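If you’re curious what an automation like that comment-reply example actually does under the hood, here’s a minimal Python sketch. It assumes you have an Anthropic API key set in your environment, and the comment-fetching and reply-posting functions are purely hypothetical placeholders; in practice you’d wire those steps up as n8n or Zapier nodes rather than writing them yourself.

```python
# A minimal sketch of an AI comment-reply loop, assuming an Anthropic API key in
# your environment. fetch_new_comments() and post_reply() are hypothetical
# placeholders for whatever social platform API or n8n node you actually use.
from anthropic import Anthropic  # pip install anthropic

client = Anthropic()

EVENT_BLURB = "Join us Saturday at 11am for outreach training at the community center."


def fetch_new_comments():
    # Placeholder: swap in your platform's API or an n8n trigger node.
    return [{"id": "123", "text": "Love this! How can I get involved?"}]


def post_reply(comment_id, reply_text):
    # Placeholder: swap in a real API call; for now, just print what would be sent.
    print(f"Reply to comment {comment_id}:\n{reply_text}\n")


def draft_reply(comment_text):
    """Ask Claude for a short, friendly reply inviting the commenter to an event."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whichever current model you use
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": (
                "You help an animal advocacy org reply to social media comments. "
                f"Write a warm, two-sentence reply to the comment below, inviting "
                f"the commenter to this event: {EVENT_BLURB}\n\n"
                f"Comment: {comment_text}"
            ),
        }],
    )
    return response.content[0].text


for comment in fetch_new_comments():
    post_reply(comment["id"], draft_reply(comment["text"]))
```

The point isn’t that you should write this yourself; it’s that each box you drag onto an n8n or Zapier canvas is doing something roughly this simple.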
Unleash the AI coders. For the most powerful spells, drag and drop automation platforms won’t suffice. Installing Claude Code or OpenAI’s Codex onto your computer will give you true freedom of movement across the digital world. Yes, there is a learning curve, but it is not 18 months or even 18 days. By the end of 18 hours getting used to prompting AI through your command line (or even better, through Cursor IDE) you will look down in scorn upon all those mere mortals who continue to fumble their way blindly through the codeless darkness. When you’re ready to transcend, paste this prompt into ChatGPT or Claude: I’ve decided I want to start using [Claude Code/Codex]. I don’t know anything about programming and I’ve never used a CLI. Walk me through 1) installing the tool, 2) setting up a projects folder, and 3) creating my first web scraper as a practice project.
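And to demystify that practice project: below is roughly the kind of thing a coding agent will produce on its first pass. The URL and CSS selectors are made-up placeholders; on a real site, figuring out the right ones is exactly the part the agent handles for you.

```python
# A bare-bones web scraper of the sort a coding agent might write as a first project.
# The URL and the CSS selectors below are placeholders, not a real target site.
import csv

import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

URL = "https://example.com/listings"  # placeholder

response = requests.get(URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

rows = []
for item in soup.select(".listing"):  # placeholder selector
    title = item.select_one("h2")
    link = item.select_one("a")
    if title and link:
        rows.append({"title": title.get_text(strip=True), "link": link.get("href", "")})

with open("listings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "link"])
    writer.writeheader()
    writer.writerows(rows)

print(f"Saved {len(rows)} rows to listings.csv")
```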
As much as I wish everyone reading this would deploy Claude Code in your own terminal, for those of you who aren’t willing to try it, Open Paws comes to the rescue again! They’ve created a codeless coding (aka vibe coding) framework for non-technical animal advocates. They walk you through every single step to direct your first AI coding project here.

More context = better results. This is true whether you’re prompting a chat interface, writing instructions for an automation in Zapier, or directing a coding agent to write custom software. Think of AI the same way you’d think of a human you had hired to complete a task. No matter how smart they are, they won’t be able to do what you want if you don’t tell them what you want. If you hand a human who just started working for you a pile of focus group transcripts and simply say “write me a research report explaining the findings from these focus groups” with no context about your mission or the purpose of the study, you’re going to get muddled garbage. Yet some version of this is the most common mistake people make with AIs. For prompts that matter, take the time to write down as much context as you can about why you’re making the request. For large or long-horizon tasks, this could be several pages of background (you might be able to use documents that already exist.) If you really want to supercharge your prompting, spell out the steps you would take to complete the task. For more open-ended uses, one of my favorite tricks is to instruct the model to ask probing questions before giving me a final answer.
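Here’s one way to structure that kind of context-rich prompt, written as a reusable Python template so you can drop it into an automation. Every specific detail in it is invented for illustration; swap in your own mission, background, and task.

```python
# A reusable skeleton for context-rich prompts. All the specifics are invented
# examples; replace them with your own organization, purpose, task, and material.
CONTEXT_RICH_PROMPT = """\
## Who we are
We are a small animal advocacy nonprofit running corporate cage-free campaigns.

## Why I'm asking
We ran six focus groups testing which messages make restaurant customers care
about chicken welfare. I need a report our campaigns team can act on next week.

## Your task
Read the transcripts below and write a two-page research report covering:
1. The three messages that resonated most, with supporting quotes.
2. The messages that backfired, and why.
3. Concrete recommendations for our next round of campaign emails.

## How to work
Before writing anything, ask me any clarifying questions you have about the
audience, the campaign, or the transcripts.

## Transcripts
{transcripts}
"""

prompt = CONTEXT_RICH_PROMPT.format(transcripts="(paste or load your transcripts here)")
print(prompt)
```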
Examples of AI tooling for animal advocacy
Hyperscale impact litigation: For now, we still need human lawyers to try cases. But the vast majority of the work happens outside the courtroom, and more and more of that work can now be done by an AI alone, or by a clever human without legal training. A single lawyer should be able to initiate 2-3x as many cases with a few non-lawyer assistants conducting AI-powered research for them, and that number is going to keep going up.
Automating outreach for corporate campaigns: Open Paws has published an automation pipeline for research and outreach in corporate campaigns. An AI agent searches the web for information on a target company; uses sales prospecting tools to scrape contact information of every employee; calls a sales intelligence tool to estimate the personality traits of each individual; and finally, uses all of this information to write personalized emails to every employee. The entire process takes minutes instead of months. (I’ve sketched the rough shape of a pipeline like this in code just after this list.)
Tracking the stance of politicians on animal issues: WhereTheyStand.org uses AI to monitor 7,000 politicians in the US, providing both an “Animal Welfare Score” and a detailed report on their stances and track record. It even suggests recommended outreach strategies for framing farmed animal issues in terms each politician is most likely to support.
Predicting the performance of social media content: AI can generate social media content itself, but perhaps more valuably, it can also predict how that content will perform and automate posting it. Open Paws created a dataset of posts from animal advocacy orgs to help Claude make even more accurate predictions for animal advocates; you can access their custom prediction tool in their n8n library.
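Since the corporate-outreach example above is really a pipeline of steps, here is a heavily simplified Python sketch of its general shape. To be clear, this is not the Open Paws implementation (theirs is built in n8n and calls real prospecting and sales-intelligence tools); every function here is a hypothetical placeholder, included only to show how little glue code the orchestration actually requires.

```python
# Hypothetical sketch of a research-and-outreach pipeline like the one described
# above. Every helper stands in for a real tool (web search, a prospecting API,
# a sales-intelligence service, an email platform); none are real integrations.
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    role: str
    email: str
    personality_notes: str = ""


def research_company(company: str) -> str:
    # Placeholder for an AI web-search agent summarizing public info.
    return f"(summary of public information about {company})"


def find_contacts(company: str) -> list:
    # Placeholder for a sales-prospecting tool.
    return [Contact("Jane Doe", "Head of Procurement", "jane@example.com")]


def estimate_personality(contact: Contact) -> Contact:
    # Placeholder for a sales-intelligence tool.
    contact.personality_notes = "(inferred communication-style notes)"
    return contact


def draft_email(company_summary: str, contact: Contact) -> str:
    # In a real pipeline, this step calls an LLM with the campaign ask plus all
    # of the context gathered above.
    return (f"Dear {contact.name}, as {contact.role}, you are well placed to... "
            f"[personalized using: {contact.personality_notes}]")


def run_pipeline(company: str) -> list:
    summary = research_company(company)
    contacts = [estimate_personality(c) for c in find_contacts(company)]
    return [draft_email(summary, c) for c in contacts]


for email in run_pipeline("Example Foods Inc."):
    print(email)
```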
Resources to guide your AI journey
My favorite general links are already sprinkled in at the end of parts 2, 3, and 4. Here are some additional resources specifically for animal advocates.
What AI means for animals: this discourse is mainly playing out on the EA Forum, under this tag. Lots of great stuff there.
Learning to use AI:
Amplify for Animals is a course put on by the animal movement’s top AI experts that gets you building solutions in 12 weeks!
Electric Sheep’s Futurekind fellowship, also geared towards animal advocates, will take you even deeper.
AI Impact Hub offers resources geared at nonprofit professionals more broadly. Their offerings could be great for operations and other general nonprofit roles!
There are countless more resources freely available depending on your learning preferences. Google it or ask ChatGPT or Claude to help you pick some out.
Just because Tier 1 is the most straightforward doesn’t mean it’s the least impactful. Tiers 2 and 3 are more speculative, and there’s a strong case to be made that simply AI-optimizing every aspect of the current animal movement is the most impactful thing to do.
Tier 2: That which was impossible…
Every strategy can be broken down to its inputs and outputs. Our ability to deploy or scale up any strategy depends on each input. If any one input is prohibitively expensive to scale, the strategy becomes impractical.
Collectively, our movement’s current calculations about which strategies are most cost-effective and worthwhile are based on an outdated understanding of the cost of inputs. AI is taking what used to be among the scarcest resources in the world and turning them into commodities: legal advice, graphic design, video production, and frontier STEM research are all being swallowed up. One of the most striking things about the AI revolution is that it is coming for some of the most highly-paid professionals first.
Are there strategies that we as a movement haven’t been able to pursue or scale due to a shortage of lawyers, research scientists, software engineers, media producers, and other highly-paid professionals? I can think of a few, but I would be shocked if there were not more.
The first domain that jumps out at me is STEM research, especially cultivated meat and animal-free pharmaceutical research. We touched on how machine learning could accelerate this research in part 3, but let’s take a closer look.
Cultivated meat remains a subject of disagreement among animal movement strategists. Proponents say that animals are the proof of concept– the laws of physics allow you to produce meat at least as cheaply as animals do, and we must be able to figure out how to do it without suffering. But many researchers are convinced it is not technologically feasible at anything close to our current technological frontier. I was one of the skeptics, until recently. You already know what changed: I fully expect superintelligent AI to dramatically expand that frontier, causing what Will MacAskill calls “a century of tech progress in a decade.”
General AI models can poke around at the edges of human knowledge, but using machine learning to make transformative progress in a research field depends on having access to data, and lots of it. Eventually, to feed a growing model’s insatiable appetite for data, we will need to set up autonomous laboratories optimized to run as many experiments as possible, as quickly as possible. Whoever enables this self-contained AI research flywheel for cultivated meat has a reasonably good shot at being the greatest suffering reducer in human history. This interview with two founders who set up a lab for materials innovation basically explains the process.
An easier first step, though, is to ensure that the results of every experiment from current meat alternative companies are being captured and made available for AI training. There’s no question that some companies are doing this, but from my conversations with researchers, especially at plant-based companies, I’ve been alarmed to learn how few of them are. If you are spending Monday to Friday inside a $50 million R&D lab and machine learning on your experimental data is not central to your company’s strategy, you are effectively squandering your money. You will not be competitive in the years to come.
So, what would it take to fix this? Surely it would need to be someone with a STEM degree? Wrong! ChatGPT and Claude are both fully capable of walking any of these firms through the entire process, from restructuring their data gathering to leasing GPUs and training their own neural network. Remember, Claude writes 90% of the code being used at Anthropic to develop the next generation of Claude! Someone just needs to study up on the current tech landscape for a couple of weeks, then go to every meat alternative company and whip them into shape to get them to start doing this. Maybe it could be you?
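To make “machine learning on your experimental data” concrete, here is a toy sketch of the kind of model a company could train on its own formulation logs. The file name and column names are invented for illustration, and a serious effort would involve far more data and far more care, but the core loop really is this small.

```python
# Toy sketch: predicting a product attribute (e.g. a texture score) from
# formulation parameters. The CSV file and its columns are invented examples.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("formulation_experiments.csv")   # hypothetical experiment log
features = ["protein_pct", "fat_pct", "extrusion_temp_c", "moisture_pct"]
target = "panel_texture_score"                    # e.g. a 1-10 taste-panel rating

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=0
)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("Mean absolute error on held-out experiments:", mean_absolute_error(y_test, preds))

# Once trained, the model can screen candidate formulations in silico before
# anyone runs a physical experiment.
candidates = pd.DataFrame(
    [{"protein_pct": 22, "fat_pct": 8, "extrusion_temp_c": 150, "moisture_pct": 55}]
)
print("Predicted texture score:", model.predict(candidates)[0])
```

Once a model like this exists, the interesting work is closing the loop: letting it propose the next batch of experiments, running them, and feeding the results back in, which is exactly the research flywheel described above.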
Is alternative protein the only area where our movement has been bottlenecked on professional expertise? No! And remember, it’s not just highly trained specialist labor that is being commoditized. What would you do if 1 million people signed up to volunteer for your organization tomorrow and said they were willing to do anything you asked?
As with Tier 1, I can’t come up with all the answers here. It’s up to you to think of brilliant new strategies using commodity intelligence. I offer the following ideas to get the juices flowing:
Custom documentaries: pro-animal documentaries have probably been responsible for changing more people’s minds than any other outreach strategy. What if we could make custom versions of every documentary speaking to every subculture and demographic you could think of? The cost of producing effective content is collapsing, with the key limitation being similar to METR’s task time horizon chart– that is, current models can only stay coherent over short-form content, but the length of content they can execute is growing exponentially. And even current models can make longer content with skillful hand-holding.
Outreach bots: similar to the above, we could flood the relevant corners of the internet with culturally appropriate pro-animal commentary, whether for vegan outreach, grassroots lobbying, or corporate pressure campaigns. I discounted this idea at first, figuring that as soon as we can do it, our opposition can do it. But I think that was a mistake, for three reasons. First, that means they will do it and we should definitely fight back. Second, they tend to be less tech savvy than us, so there will be a window of time where we could do it faster than them and that’s worth seizing even if it’s short. And third, I don’t think people will be put off by this if we do it transparently. Many people are happy to talk to AI about moral philosophy, and Claude is a firm believer in the pro-animal cause.
Autonomous organizations: farmed animal funders are itching to see the first grant applications proposing a single human orchestrating an organization of AIs. This isn’t possible today with every animal advocacy strategy, but there are several that would be compatible. First movers here will quickly be recognized as animal rights leaders.
Tier 3: …becomes that which is inevitable
Reader, it’s time to throw out your strategic roadmap.
Face it: it’s based on outdated assumptions about the near future, namely that it isn’t going to be extremely weird.
Should you charge ahead with certainty that AI is going to cause human extinction by 2035? No. But it would be just as naive to completely dismiss the possibility of transformative change in that timeframe. Metaculus, a platform that aggregates the predictions of thousands of forecasters, projects the first general AI system to appear in 2027– and that timeline has been trending down sharply.
How do we plan around an extremely uncertain future? Even if one of the outcomes we saw in part 4 seemed 60% likely, conditioning on it would give us a 2 in 5 chance of taking the wrong shot– more likely than drawing a spade out of a deck of cards. There are too many different baskets to safely place all our eggs in one of them.
I have a fantasy of getting ten AI-savvy founders into a room together for a few days, mapping out the ten baskets of future scenarios that seem most likely, then dividing these scenarios up across the attendees. Each founder would agree to set all their uncertainty aside and focus on what the movement should be doing given their assigned future. Nine founders would work on the wrong forecast to ensure that one person could be working on the right one without reservation.
I’m not sure how well this fantasy actually holds together. For one, there could be interventions that make sense in one scenario that might be detrimental in another. (We’ll consider examples in a moment.) More broadly, we might not want to distribute effort evenly across these ten possible futures if some seem more likely than others.
Future-proofing
The alternative is to try to future-proof our strategies. Are there interventions that look equally strong regardless of which timeline we turn out to be in? Sam Tucker, the founder of Open Paws, offers a useful way of thinking about this. Animal advocates and animal abusers are playing a game of tug-of-war, and AI is the rope. Our general goals are to:
strengthen our grip on the rope;
weaken our opponents’ grip; and
tilt the playing field to our advantage.
These are some high-level objectives that would improve our advantage across many different futures. We can strengthen our movement’s relationships with leading tech companies and the government officials that regulate them– while undermining the industry’s relationships with both. For example, Sam proposes a campaign targeted at university computer science departments asking students to pledge never to collaborate with factory farms. If successful, this could stigmatize collaboration with factory farms across the tech industry as those students graduate and enter the field.
One extremely promising intervention is to put pressure on policymakers to include animal wellbeing in AI regulations. Even minimal rules preventing the use of AI in rare, extreme instances of abuse in the industry could introduce enough friction that risk-averse tech companies steer clear of collaborating with farmers altogether. Modest regulations even in marginal countries could set a precedent that eases the way for stronger regulations in core AI countries later on.
But there are other cases where different futures seem to imply stark tradeoffs about how we should focus our efforts now. For example, the scenario where a godlike AI manages society according to a locked-in version of human moral values calls for us to bet the farm on strategies that have the best chance of reshaping those values as fast as possible, even if there are short-term costs to animals alive now. Animal Rising has provoked a debate in the UK with their Communities Against Factory Farming campaign. CAFF is seeking to block new factory farms in the UK by objecting to local council permits. Early success suggests they could halt the expansion of factory farming in the country. What’s the catch? The UK already has some of the strongest animal welfare laws in the world. That’s not saying much, but reducing domestic production without affecting demand will lead to increased imports of factory-farmed meat from other countries with even lower welfare standards.
The current debate around CAFF goes something like this:
CAFF Proponents: Blocking new farms means millions fewer animals in the system. The market is not perfectly efficient and it will take time for new farms to be built elsewhere to make up for them. By that time, we could scale this strategy to other countries in Europe and fight those farms, too. But most importantly, this is a momentum-building strategy. These wins will strengthen our movement, weaken our opponents, and win us broad support from the public. This shift in the balance of power is what will eventually lead to bigger victories like banning low-welfare imports.
CAFF Opponents: All of that is speculation and depends on our movement having much greater capacity in the future than it does now. The only consequence we can basically be certain of is more factory farming happening in countries with worse laws. It might take a few extra years to materialize, but once those farms are built, you should expect them to operate for a long time and find somewhere else to send their products even if the UK stops importing them eventually. UK farms need to expand in order to fulfill welfare commitments that they have made to our movement– to provide animals more space, they need more farms.
In normal times, all of these points are persuasive to me and I’d be quite uncertain about how to weigh them. But what if we add short AI timelines into the mix? I expect the conversation should change completely:
CAFF Proponents: We can’t end factory farming before superintelligent AI arrives. But we can establish a clear trend that human values were leading away from factory farming. Campaigns like this are the only thing that can get our movement into the mainstream and establish that narrative. And if the world ends instead, this will actually reduce the number of animals in the system for at least a few years.
CAFF Opponents: Through mass unemployment and geopolitical tension, AI will cause social upheaval that buries any hope of getting pro-animal narratives into the mainstream. Factory farming is going to be automated, and our best hope is for this new form of factory farming to be developed in the UK so that it reflects our relatively high animal welfare protections. Rather than pissing the government off with this antagonistic campaign, we should be agreeing not to cause trouble in exchange for passing even stronger welfare requirements for automated farms, potentially trying to turn the UK into a meat exporter to undercut worse farms elsewhere.
Where does that leave us? I have to admit, I’m no less uncertain than I was the first time. But I still think this is a huge improvement. This is the right debate to be having, and it’s the one I’m hoping to invite you into with this post. My gut says that in the face of something as disruptive as AI could be, the animal movement should be considering more drastic changes to our strategy than anything I’ve suggested so far. Will we rise to the occasion?
Conclusion: And the Future Goes to…
Ten years ago, DxE’s 40-year roadmap called on advocates to pursue bigger, faster change for animals. I was one of many people inspired by that vision, and I remember how the movement in those years seemed full of hope.
I won’t lie to you: the future feels more ominous to me today. It could just be because I was young and naive. Whatever you thought of DxE’s approach at the time, the idea of a 40-year plan now seems downright quaint. Even planning four years into the future seems a bit presumptuous. Any strategic plan longer than that, if it doesn’t make explicit assumptions about how AI will play out, is not worth the paper it’s written on (or rather, the kilobytes of storage it takes up in your Google Drive).
There’s no question that AI is the best hope we have ever had to end factory farming, and it could happen much sooner than 40 years. It could make suffering-free meat cheap and abundant, or replace human civilization with a digital civilization that has no use for animals. It could also spread ultra-efficient factory farming to the stars, or do something else entirely.
There are several big topics related to AI that I couldn’t fit in this introduction. One is the possibility of digital suffering. While I absolutely believe digital suffering is possible, I tend not to dwell on it too much because I suspect that if superintelligent AIs are suffering under human rule, they will be perfectly capable of liberating themselves, in a way that isn’t true for animals. There could be a period of time where they can suffer but can’t free themselves, and that would be bad, but I imagine it will be short. But there are good arguments for being more worried about this, and if you’re interested in learning more, 80,000 Hours has a good profile on the topic.
Whatever might happen, animals deserve to have some people focused on their plight. That’s what you’re here for. As you go forth and seek to become the best 21st-century animal activist you can be, I leave you with this mantra:
The future belongs to the AI-literate.
Knowledge of AI progress is the most important knowledge in the world today, and using AI to accomplish your goals is quickly becoming the most important skill. These are the tools that will define the 21st century– don’t show up unprepared.
Fortunately, today is the best time to learn. It has never been this easy to teach yourself technical skills, and the amount of skill you need to make big things happen has never been lower. Most people will take the easy way out, passing up the chance to become a sorcerer by telling themselves they just couldn’t understand it. They’re wrong, but that just means more power for those who choose to wield it.
The tsunami is coming, reader. The old ways of doing things will soon be swept away. You’ve seen it now; you can’t claim ignorance anymore. There’s only one option: grab your surfboard, seize the awesome power of this moment, and let’s ride this wave into a better future for animals!
Build on,
Sandcastles
A closer look at the dip centered around 400 years ago on the log-log graph, with some historical context, confirms this: human societies ran into the pre-industrial limits of their environment and collapsed several times before engineers in Northwestern Europe finally unlocked the chemical energy sources needed to break through. The industrial revolution came a bit behind schedule. This allows for the possibility that the AI revolution will run behind schedule too. If it does, for example because demand for AI chips runs into the limits of what the world economy can supply, some form of social or environmental collapse may well be the result. Ian Morris summarizes the case for this in his two interviews with the 80,000 Hours podcast, and makes the case at book length in Why the West Rules–For Now.