Derek Thompson

The Doomsday Scenario for AI and Jobs

What are the strongest cases for it and against it?

Derek Thompson
Feb 13, 2026
Photo by Igor Omilaev on Unsplash

I cover a lot of topics in this newsletter and in my podcast: Inflation, GLP-1s, politics, loneliness. But the biggest divide in my audience — and the biggest divide among the people I read and listen to and trust — is on the subject of artificial intelligence. The news and discourse space as I see it often seems divided between outrageous extremes: “This technology is billionaire-hyped vaporware” vs “This technology is 12 months away from automating all white-collar tasks or destroying the world.”

These aren’t just extremes I read in the news. I see them in my own working life. Among East Coast journalists, I’m mildly concerned that I’m developing the reputation of being a mindless AI booster, on account of my reporting that the technology is already proving economically and socially significant. But several months ago in San Francisco, I got (somewhat playfully) yelled at by several AI builders and investors at an event for my article suggesting that AI might be an industrial or financial bubble.1

So, I’ve spent a lot of time this week thinking about how the AI discourse has become so fractured. At the highest level, I think the AI discourse gap is downstream of a cultural gap between the Bay Area, where the frontier labs are based, and the rest of the country, which has developed a skeptical attitude toward the promises of Silicon Valley. While many technologists in San Francisco regard their zip code as the world’s greatest fount of technological progress, much of the country—perhaps especially journalists—regards Silicon Valley as a den of plutocratic parasites whose work deserves our most profound distrust and disgust.

Beneath this cultural difference, there is a deeper substantive divide over AI. It’s not one disagreement. It’s really more like four distinct divides.

THE FOUR GREAT DIVIDES OF AI

  1. The first question that I’ve seen divide people is: Is AI useful—economically, professionally, or socially? I have brilliant friends, especially in software programming, who use this technology every day and say it’s transformed their work—and I think they’re right. But also, in the last week, I’ve spoken to close friends in other industries, such as television news and marketing, who have tried to use AI tools repeatedly and insist that they consistently underwhelm. The weird thing is: I think these folks might be right, too. AI is a unique technology. It is not like a light bulb that provides the same wattage to all users. Depending on your job, and the AI model you’re using, and the quality of your prompts, and a thousand other factors, AI is like a light bulb that offers some people a million watts and some people utter darkness. Some tasks are especially amenable to this generation of AI tools, especially those involving data or software, a certain amount of schlep, and a well-structured prompt. But many people have jobs whose tasks might not yet be “AI-shaped.”

  2. The second question that I see creating fissures in the AI discourse is: Can AI think? That is, are these tools engaging in something like human thought, which combines memory, sense, prediction, and taste, or are they blunt instruments for synthesizing average work across several domains, producing average data analysis, average student essays, and average art? I want to pause here to point out something I think is important. I’ve seen many people suggest that AI can’t “think” and therefore it isn’t useful. But these are separate questions. AI can help a scientist draft a paper, or a bibliography, even if it doesn’t meet our philosophical or neurological definition of thinking. It can be useful without being technically thoughtful.

  3. The third great divide that I’ve observed is one that I’ve directly participated in: Separate from the question of whether AI is useful, and whether AI can think, is another question: Is AI a bubble? This is principally a question about the speed and timing of the technology’s adoption and its revenue growth. The hyperscalers and frontier labs are spending hundreds of billions of dollars training and running artificial intelligence. If they don’t see ferocious revenue growth from AI in the next few years, a lot of companies, especially those that take on debt, are going to find their current position untenable. They’ll either face a markdown on valuation, a layoff, or something more catastrophic. Once again, you can believe that AI tools like Claude Code do economically meaningful work and believe therefore that AI isn’t a bubble; or you can believe that AI does significant work but it’s still a bubble, because there’s no way these companies make back the money on time. In fact, this was basically my position for much of last year.

  4. The fourth and final great divide might be the widest. And it’s hard to capture succinctly so I hope you’ll excuse me going a bit broad here: Separate from the questions of is AI useful, or thinking, or over-leveraged, is a question that’s something like: Is AI good or bad? On one end of this spectrum, you’ve got the venture capitalist Marc Andreessen proclaiming that AI “will save the world.” On the other end of this spectrum you have the rationalist writer Eliezer Yudkowsky arguing that if anybody builds superintelligent AI, quote “everyone dies.” And there is a lot of real estate between those positions. Maybe you think AI won’t usher in the end of the human species, but it might make the most beautiful things in life — art, movies, human relationships — more slop-filled and shitty.

So, this is the landscape of the AI debate as I see it. What seems on the surface to be one debate between pro- and anti-camps is really several different debates that are becoming conflated and mushed together. Is AI useful? Can it think? Is it an economic bubble? Is it good for us or bad for us? These are separate questions. And our ability to bring wisdom to this topic depends on our ability to see that separateness.

In the spirit of trying to be specific about AI, today’s article is about a very specific question: What will AI do to jobs? In his Atlantic cover story this month, author Josh Tyrangiel wrote that the people who build AI have spent much of the last few years predicting dire effects it will have on the economy. In May 2025, Dario Amodei, the CEO of the AI company Anthropic, said that AI could “wipe out half of all entry-level white-collar jobs.” Last month, new tools like Claude Code and Codex were unveiled that sent a shiver through the tech world, as many top coders claimed that these tools could take over enormous chunks of their work forever. Software stocks plunged, possibly for related reasons. But when you look up from these dire predictions and stock gyrations to study the labor market of the present, it’s hard to see any effect of AI at all.

So how do we balance these pieces of evidence: the audacious predictions of tech CEOs, the enthusiasm for tools that seem to automate certain tasks, and the current calm of the labor market? Whatever it adds up to, Josh says, “America Isn’t Ready.” The following is a transcript of a conversation from my podcast Plain English. It has been edited for clarity, brevity, and the goal of making both speakers sound approximately 3 percent more articulate than one typically musters in a live interview.2


WHY THE AI DISCOURSE FEELS SO BROKEN

Derek Thompson: Two groups are coming into this episode from very different places. Group 1 says this technology is going to have a massive impact on the economy, on jobs, on the future of productivity, maybe even on our own sense of who we are. Group 2 insists that AI is still basically vaporware. It’s billionaire-pumped nonsense that hallucinates and doesn’t help anybody do anything. We’re going to spend a lot of time talking to Group 1. Before we do that, I would actually love you to address Group 2 directly. Why are they skeptical? And why are they wrong?

Josh Tyrangiel: I think a lot of that has to do with the context into which AI has arrived. We are not short on existential risk in our lives right now: political risk, climate risk, nationhood risk. Everybody is feeling risk from something. We came out of a pandemic. Right at the end of that, [ChatGPT] shows up. I think a lot of people looked at it, and at the hairball of motives behind the people who were creating it, and the massive investment and personal wealth that may come from it, after feeling victimized by 15 years of social media bullshit and basically said: “Not for me. I’m out.” I am more than sympathetic to that response. I think it’s actually kind of a logical response.

But as a person who is skeptical for a living, I think the technology is remarkable. I’m also a believer that it’s entering a fractured system that makes the likelihood of its misuse enormous.

Thompson: Unlike other general purpose technologies, the capabilities of AI are exquisitely local. You think about a train or electricity. If a train takes a bushel of wheat in 1870 from Chicago to New York, everybody can agree on what that train has done. But you compare that to generative artificial intelligence, where every interaction is unique. Every prompt is unique. So you have software programmers using the most recent editions of Claude Code or Codex from OpenAI claiming that their work has changed forever. But many good-faith people can have a really negative experience with this technology.

Tyrangiel: We’ll start with journalists. We are them and we know them. A lot of their initial experience was, “Oh, I can’t trust this.” The writing from some of these tools is still a little corny. Then I talk to my friends who work in financial services. They say, “This is the brain I’ve always wanted to have at my side. In terms of the capability to do arbitrage, a deck that used to take me six hours now takes six minutes, and I can actually do the thinking that I want to do.” In medicine, scribe technology listens to patient-doctor interactions, notes everything, fills out electronic health records, fills out prescriptions. Doctors have to make a subtle change: They have to vocalize the entire exam. Younger doctors have figured out [how to use the technology] to save an hour or two a day. Older doctors don’t want to change their workflow.

THREE SCENARIOS FOR AI, THE ECONOMY, AND JOBS

Thompson: I want us to consider three scenarios for AI and the economy. In the first scenario, AI won’t cause much job displacement at all. In the second scenario, changes will be significant but slow. In the third scenario, things move very fast.

Let’s start with scenario one. With technology like Excel, journalists decades ago might have said that [once spreadsheets are automated] there’s not going to be anybody working with spreadsheets in the future. Absolutely wrong. Everybody works with Excel. This technology didn’t destroy jobs at all. It amplified jobs. So walk me through this possibility that artificial intelligence is in some ways a normal technology that will essentially sit with knowledge workers in the future without actually removing them from the labor force.

Tyrangiel: There are a lot of economists in particular who say that’s exactly what’s going to happen. There’ll be a little period of adjustment, and then we’re going to create even more jobs, even better jobs with even higher wages. The case for that is that the tech takes a while to move into our various industries and systems. We can imagine a slow pivot. With the doctor example, doctors save an hour or two a day thanks to scribe technology. That reduces attrition, which is a huge factor in employment in the medical field. It gives us a massive corpus of data that we can now use to create more jobs. One of the fastest growing jobs in the 21st century is data analysis and data visualization. Now we have this massive corpus of public health data. Think of all the new jobs that can be created analyzing that. Those are the rosy scenarios.

Thompson: Practically every general purpose technology has been adopted slowly. The canonical example is the telephone. Patented in 1876. In the first few years, basically no telephones are manufactured because talking across long distances is not a part of anybody’s life. The adoption of the telephone did not pass 50% of American households until the 1940s, according to our best records. So it took 70 years for half of American households to essentially pick up the phone. That is the typical story of general purpose technology adoption. Tell the story of electricity.

Tyrangiel: Electricity is the technology of the last two centuries. It basically took four or five decades to be reasonably dispersed across America, which is a very powerful and very capital-friendly country. And part of the reason is that when you create something that new, there are a lot of vested interests. There are lots of factories that literally were built on steam. And when I say built on steam, I mean they constructed a steam engine in the basement and then they built the factory on top of the steam engine. So when you hear as a factory owner, “Hey, we’ve got this brand new tech, it’s so much easier, it’s going to be cheaper,” you’re like, “Well, anyway, I’ve got this building and I spent all of my money to create the building. Call me when I can plug in.” And so a lot of those factory owners waited, and they waited for their materials to become obsolete. They also waited for the government to pay for the rollout of electricity. Electricity was largely subsidized, particularly in rural areas, by the government saying, “We’re going to get everybody connected to electricity.”

Now, I’ll give you the counter. The counter is that AI is already probably the fastest growing consumer technology in the history of technology. The other point that people like [the economist] Anton Korinek and others will get to is that electricity didn’t roll itself out. It was a material and it required all of these human hands to build the infrastructure, to create electrification in factories and in homes and create a grid. [With AI, however,] these are smart machines that, given the proper instruction, can create a pathway for other smart AI technology to infiltrate its way into your home, into your factory, into your enterprise. That’s going to take us some time to get our heads around.

THE SINGLE BEST REASON WHY AI WON’T DESTROY JOBS

Thompson: The strongest reason why artificial intelligence is either not going to displace jobs at all or going to have a slow impact on the labor force is one word: History. History will repeat itself. Even today unemployment is under 5%. So it’s very difficult to say that this technology has already had some kind of significant displacing effect.

This brings us to the scenario that you focused on most in your piece, which is the case that this time is different and that AI could in fact move very, very quickly. According to Pew, 50% to 70% of Americans say they use this technology every week. People were not using light bulbs three years after Edison patented them in the 1870s. So maybe this is not electricity. What is the strongest argument that you’ve heard that this time will be different?

Tyrangiel: I think there are two, and I want to separate them because it’s really important to understand the difference. One is technology based and one is Wall Street based. And that’s literally like the fire in the cave and the shadow that the fire casts.

So let’s talk about the tech. There’s a term called recursive AI, which means AI that can teach itself. Now, there’s a lot of debate for all sorts of reasons about whether we have achieved recursive AI, but it’s sort of moot. AI is capable of rolling out AI. And what we’re seeing with Claude Code, which is another really important development that came to market last fall, is that you can actually tell an AI, “I need to install AI in this function. I want to connect it to this data set inside your company, inside your life. Make little life hacks.”

You can easily understand how generative AI can replace the average customer service function. You train a generative AI on lots and lots of your manuals and tools and protocols. In some cases, it’s actually way better than the human friction of customer service. You can see how those jobs might go away this year. But you can also begin to see how data-driven companies use AI to essentially run all sorts of different functions really, really quickly. Because the AI is actually helping you roll it out, troubleshoot it, and adapt very quickly with limited human interaction. So I do see that as a very real thing that’s happening this year.

The other argument is that a lot of traditional Fortune 500 companies that have invested in AI have spent billions of dollars to catch up and implement systems. Even if it takes another year or two for those systems to be perfect, the pressure is on those CEOs to show results. When I spoke to a bunch of CEOs, they said: “Look, I actually like my workforce. I actually think this could take time and we could perfect it. Wall Street has no patience for that. They’re expecting me to show financial results now.” And the way they show financial results fastest is by cutting jobs and replacing those jobs with automation, even if the automation isn’t perfect. And so that is the shadow on the cave wall that concerns me even more than the speed of diffusion of AI.

A VERY SPECIFIC DESCRIPTION OF WHY AI COULD QUICKLY DESTROY ENTERPRISE VALUE AND JOBS

Thompson: Let’s get specific here. Walk me through a plausible story. How could the application of this technology reduce employment in, say, consulting in the next few years?

Tyrangiel: Let’s not be shy about it.
