The Big Question Lurking Beneath the AI Debate
Is artificial intelligence normal?
1. The Fundamental Question
In April 2025, Arvind Narayanan and Sayash Kapoor published an essay entitled “AI as Normal Technology.” Narayanan and Kapoor, respectively a professor and a Ph.D. candidate in computer science at Princeton, did not claim that artificial intelligence was boring or unimportant. Rather, they argued that AI was a general-purpose technology in the lineage of electricity, the car, and the internet. To AI’s boosters and doomers—those who see AI as the end of work, the end of history, or the end of human life—they countered that AI will not be the end of anything. Its evolution and its effects are more likely to fit inside the grooves dug by previous generations of technology.
While many AI builders and commentators have argued that AI might displace millions of workers, or become a big economic bubble, or kickstart an arms race among governments to build a cyber Swiss Army Knife for hacking their enemies’ digital infrastructure, Narayanan and Kapoor calmly pointed out that, as a matter of technological history:
… it is normal for a new technology to create bubbles, or winner-take-all markets, or powerful monopolies that require government intervention, or all three (e.g., railroads, Standard Oil, and AT&T).
… it is normal for technology to displace some jobs and create others, producing short-run pain and long-run growth (e.g., the internal combustion engine and farm employment).
… it is normal for technology to go through an early period of safety chaos before regulations eventually catch up and impose order (e.g., US meatpacking and coal mining).
… and it is normal for technology to create an arms race between offense and defense—guns vs. armor, locks vs. lockpicks, viruses vs. antivirus software—that leads to a new equilibrium, without destroying the world.
The case for “AI as normal technology” seems to fit much of the evidence before us. Three and a half years after ChatGPT debuted, GDP growth is average, and unemployment is still under 5 percent. Even occupations that recently seemed vulnerable to automation, such as radiology, are still seeing rising employment and wages. As François Chollet, a French AI researcher, has said, AI still “cannot operate without supervision,” which is why “there is still zero job from 2022 that can be performed end-to-end by AI, not even translator or customer support associate.”
The “normal” essay is simply one of the best and wisest pieces of writing I’ve seen about AI, which is high praise because that is an awfully crowded category.
But what if it’s wrong?
Taking the other side of the debate is a phalanx of technologists, philosophers, and writers who believe that AI is on a glide path toward superintelligence, a powerful system that will have unprecedented and transformative effects on society. As AI develops the ability to write its own code, this group believes, it will spark a cycle of recursive self-improvement, or RSI: a model that builds a better model, which builds a better model, whose capabilities and intelligence create a historically unique period of technological disruption. Anthropic cofounder Jack Clark has written that the RSI threshold has a 60 percent chance of arriving by 2028. (For comparison’s sake, that’s exactly the same odds that prediction markets give Democrats to win the next presidential election.)
If the superintelligence argument is correct, AI-built AI could rapidly develop terrifying capabilities that strain our economy, our laws, and even our systems of governance. The geopolitical consequences would be enormous: Whatever country first crossed the self-improvement threshold might gain a durable advantage, not just in economic terms but in global power. It would race ahead of its adversaries, powered by a force capable of improving itself in a way that has no precedent in history. Cars changed the world, after all. But they did not transform themselves into fighter jets and coronaviruses.
2. The Debate Behind the Debate
The debate over whether AI is “normal” is so much more than a war over a word.
I think that the most urgent discussions about AI policy happening today are fundamentally disagreements about the normality of AI. Understanding the debate over the concept of “normal,” in fact, is essential to understanding why so many smart people bitterly disagree with each other about this topic.


