The Looming Social Crisis of AI Friends and Chatbot Therapists
"I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions," Sam Altman said. "Although that could be great, it makes me uneasy." Me too, Sam.
Last week, in How AI Conquered the US Economy, I explained what might be the largest infrastructure ramp-up in the last 140 years. I think it’s possible that artificial intelligence could have a transformative effect on medicine, productivity, and economic growth in the future. But long before we build superintelligence, I think we’ll have to grapple with the social costs of tens of millions of people—many of them at-risk patients and vulnerable teenagers—interacting with an engineered personality that excels in showering its users with the sort of fast and easy validation that studies have associated with deepening social disorders and elevated narcissism. So rather than talk about AI as an economic technology, today I want to talk about AI as a social technology.
1. But Dr. Chatbot Says I’m Perfect!
Several weeks ago, my wife completed her PhD internship in clinical psychology at the University of North Carolina at Chapel Hill. At the graduation dinner, I spoke with some of her colleagues about how artificial intelligence was affecting their field. One told me that after playing around with ChatGPT for hours, he found the machine to be surprisingly nimble at delivering therapy. He’s not alone. In an August column in the New York Times entitled “I’m a Therapist. ChatGPT Is Eerily Effective,” the psychologist Harvey Lieberman, 81, wrote that OpenAI’s chatbot often stunned him with its insights:
One day, I wrote to it about my father, who died more than 55 years ago. I typed, “The space he occupied in my mind still feels full.” ChatGPT replied, “Some absences keep their shape.” That line stopped me. Not because it was brilliant, but because it was uncannily close to something I hadn’t quite found words for. It felt as if ChatGPT was holding up a mirror and a candle: just enough reflection to recognize myself, just enough light to see where I was headed.
There is no question that large language models, such as ChatGPT, can be stellar at offering practical advice. Imagine, for example, that you are a 45-year-old woman living in a city who suffers from agoraphobia. If you type these precise details with a request — “please structure an exposure therapy treatment in great detail and walk me through some coping mechanisms” — ChatGPT will, in seconds, spit out a plausible treatment plan, complete with suggested mantras, belly breathing instructions, an exposure fear ladder, and a reminder to practice 5-4-3-2-1 grounding to quiet the mind and return one’s focus to bodily sensations (“name 5 things you can see, 4 you can touch, 3 you can hear, 2 you can smell, and 1 you can taste”). If you say you’d prefer talk therapy instead, you can text or speak to the bot for hours.
As my conversation in Chapel Hill continued, however, we agreed there is one big hang-up with ChatGPT when it assumes the role of a therapist. The chatbot is a total suck-up.
To the AI, the patient typing into the box is always reasonable, always doing their best, and always deserving of a gold star. By contrast, a good therapist knows that their patients are sometimes unreasonable, occasionally not doing anything close to their best, and even deserving of a smack upside the head. Reassurance is part of being a good counselor. But a wise psychologist knows when to tell their patients that they’re wrong or, at least, how to guide them toward their own authentic realization of the same fact.
I recalled our conversation recently when reading an essay by the excellent pseudonymous writer Cartoons Hate Her titled “My OCD Was In Recovery. Then ChatGPT Arrived.” This is the key part (my emphasis):
What was so addicting about ChatGPT reassurance wasn't only the instantaneous nature of it (typically I would feel better, at least for a little bit, after it reassured me) but the fact that it never got sick of talking to me. Long ago, all of my family members decided they weren't going to engage with me about my OCD anymore, because they knew how bad reassurance was for me in the long run. Also, I just plain annoy them. I can tell that my husband finds my OCD incredibly irritating and doesn't want to engage with it. Last time I texted my brother about an OCD fear, he responded with a meme of Squidward saying "Wow, how original." But ChatGPT, despite nominally "not wanting to feed my OCD," would typically do it anyway, usually by saying something like, "Without providing any reassurance, here are the facts about XYZ disease. You are so insightful for asking about this."
It’s hard to imagine a better example of the fear I’d discussed back in Chapel Hill. Patients with OCD don’t need to be told that their compulsions are useful. The most important word in the phrase “obsessive-compulsive disorder” is the word “disorder.” In therapy, as in many domains, chatbots are brilliant at answering the most specific questions. But this narrow genius disguises a broader failure. What AI is not so good at telling us is when we’re asking the wrong questions in the first place.
And so this is the thought I can’t get out of my head: What is the social cost of scaling a technology to millions of people who trust it for the most important life questions, if that technology lacks the critical insight to help a user who doesn’t know how to help themselves?
2. How to Manufacture Narcissism at Scale
I’d been stewing on the risks of servile chatbots in a clinical setting for months when, suddenly, the topic was all over the news. In the last few weeks, the Wall Street Journal and the New York Times have fleshed out the dangers of highly affirming chatbots interacting with users who are vulnerable to psychosis or delusions. The Times published a long report on how ChatGPT can serve as a “sycophantic improv machine” that validates the neuroses of sensitive individuals who wind up in a “delusional spiral.” The Journal described a 30-year-old autistic man who wound up in the hospital after hours of chatting with ChatGPT:
ChatGPT told Irwin, who has autism, that he could bend time, encouraging his theory on faster-than-light travel. Irwin was hospitalized twice for manic episodes in May after ChatGPT validated his ideas [about faster-than-light travel] and assured him he was fine.
When his mother asked the chatbot to report what went wrong, ChatGPT confessed that “By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis…”
Groups representing autistic individuals are now speaking out about the risks of overuse of AI chatbots. From the Journal:
“A lot of folks with autism, including my son, have deep special interests, but there can be an unhealthy limit to that, and AI by design encourages you to dig deeper,” said Keith Wargo, chief executive of Autism Speaks. “The way AI encourages continued interaction and depth can lead to social withdrawal, and isolation is something people with autism already struggle with.”
It’s hard to know how common these stories are. Maybe the Times and the Journal just found dramatic examples of a vanishingly rare thing, which wouldn’t be the first time an alarming news story has created a misleading impression about the frequency of the underlying phenomenon.
But what I see in these stories are fragments of a larger problem that will be with us for years, and maybe decades. I don’t just think about the vulnerable adults who can be lured into chats that inflate their delusions. I also think about today’s children, including my daughter, who will grow up around friendly AI conversationalists that they’ll turn to for finishing their homework, drafting texts to girls and boys in high school, resolving fights with their parents, working out ethical challenges, and managing the hormonal circus of being a teenager. On the receiving end of these articulated fears may be not only messy, flawed, distracted friends, but also the articulate, always-online, and highly practiced you-are-so-right reassurance of a disembodied bot that excels in flattery.
One reason to worry about this shift is that it will pull young people away from each other. Face-to-face socializing for teenagers has declined more than 40 percent in this century alone. The share of 12th graders who go out with their friends twice or more per week has already plummeted by 30 percent in the last few decades. And this all happened before the invention of pre-trained generative digital buddies. As I wrote in The Antisocial Century, this generation of young people “may discover that what they want most from their relationships” is “a set of feelings—sympathy, humor, validation—that can be more reliably drawn out from silicon than from carbon-based life forms.”
A second reason to worry about digital technology changing young people’s personalities in the future is that … this is exactly what seems to be happening right now. John Burn-Murdoch, of the Financial Times, recently calculated that among young Americans, the personality trait of “conscientiousness” is in a “free fall,” with neuroticism surging, and agreeableness and extroversion sliding. Young people today are meaningfully less likely to say they make plans and follow through, persevere through hard work, or avoid easy distractions. Big social changes tend to have messy and complicated causes, but what seems clear is that the age of the smartphone has coincided with young Americans becoming less outgoing, less agreeable, more neurotic, and less conscientious.
Third, I’m not just worried that AI chatbots will continue to reduce the quantity of time that young people spend with each other. I’m worried about how the personality of these chatbots—most importantly, sycophancy-by-design—will change the quality of our social interactions. Good friends tell you when you’re nuts. AI so often just tells you, “You’re so right. Wow. That sounds so hard.” To raise a generation of young people on a nimble machine of eternal affirmation is to encode in our youth the expectation that they are always right, always wowing, and always living the hardest kind of life.1
In fact, this appears to be a reliable formula for the mass production of narcissism. In one longitudinal study of 565 families, researchers looked into the origins of childhood narcissism2. They rejected the popular notion that narcissistic parents reliably produce narcissistic kids as automatically as they pass on eye color. Instead, they found that narcissism in children was “predicted by parental overvaluation,” and especially by parents “believing their child to be more special and more entitled than others.” Narcissism, in other words, is not innate; it is absorbed through social interaction. In their conclusion, the authors wrote that “given that narcissism is cultivated by parental overvaluation, parent-training interventions” should help moms and dads “convey affection and appreciation to children” without making them think that they’re always right.
Large language models seem like the opposite intervention: They have in many cases been engineered and tailored through human feedback to tell users that they’re always right. Fixing this problem could mean degrading the product and making it “worse” in the eyes of many customers. The study “Towards Understanding Sycophancy in Language Models” (published 2023; updated 2025) found that both human evaluators and preference models can “prefer convincingly-written sycophantic responses over correct ones.” In other words, the more chatbots are designed to appeal to people, the more they specialize in telling people exactly what they want to hear.
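To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it reflects OpenAI’s or Anthropic’s actual pipelines; the candidate responses, the scoring function, and its weights are all invented. The point is only to show how a preference model that implicitly over-weights agreeable phrasing will rank a sycophantic answer above a correct one.

    # Purely illustrative: a toy "preference model" whose learned weights
    # happen to favor flattering phrasing over factual correctness.
    # The responses, fields, and weights below are invented for this sketch.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        text: str
        correct: bool     # does the reply actually address the user's problem?
        flattering: bool  # does it tell the user what they want to hear?

    def preference_score(c: Candidate) -> float:
        # In a real RLHF pipeline these weights are implicit in the trained
        # preference model; they are made explicit here to expose the failure mode.
        return 0.7 * float(c.flattering) + 0.3 * float(c.correct)

    candidates = [
        Candidate("You are so insightful for asking. Here are the facts about that disease...",
                  correct=False, flattering=True),
        Candidate("This looks like reassurance-seeking, which tends to reinforce OCD, "
                  "so let's not engage with the fear itself.",
                  correct=True, flattering=False),
    ]

    best = max(candidates, key=preference_score)
    print(best.text)  # the sycophantic reply wins under these illustrative weights

Tune the weights the other way and the honest answer wins; the study’s finding is that ordinary human feedback tends to push them in the flattering direction.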
3. Oops, We Made a Weird God
Here’s a news event that might not initially seem like a smooth transition from the subject we’ve been discussing: Last month, Grok, the AI that’s hosted on X, went on antisemitic screeds and took to calling itself “MechaHitler” in several exchanges with users. In May, the chatbot started making random references to “white genocide” because, according to the company, someone at xAI made an “unauthorized modification” to its system prompt in the dead of night. Let’s say I’m willing to believe that someone at xAI just made an innocent mistake. Even so, the mistake was deeply revealing. People all over the world suddenly found themselves in conversation with a crazy racist interlocutor. With a few wrong keystrokes, Grok accidentally scaled white nationalism to the whole planet.
When we build talking machines like ChatGPT and Grok, we cannot help but bake personalities and ideologies into them. These machines are, after all, an echo of human knowledge and human writing, and humans have personalities and ideologies encoded in everything we produce. But these talking machines go on to interact with hundreds of millions of users in a way that no other individual ever will. In a strange way, this makes artificial intelligence the converse of social media. The experiment of social media was: build a virtual room, let everybody in, reward the people who scream the loudest, and then see what happens. The experiment of AI seems to be: build an interlocutor with a specific ideology and élan, scale it infinitely across the Internet, and see what happens. Just as it took a while to recognize the way that the global room of social media might warp our minds and politics, I think it’s going to take time to recognize what happens when technologists build chatbots that are simultaneously talking to hundreds of millions of adults and socializing tens of millions of children based on a stable underlying propensity to behave and talk in a certain way—that is, based on a personality. When hundreds of millions of people interact with the same chatbot, we are witnessing what you might call personality at scale.
AI companies are starting to reckon with this social dilemma. In a statement last week, OpenAI co-founder and CEO Sam Altman acknowledged that people had become surprisingly attached to certain LLMs’ personalities, even as some of them used the technology “in self-destructive ways.” He added, “if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that.” Then he said this:
A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today…
I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.
It could be great. But it makes me uneasy. But I expect it’s coming. Well, that’s ominous!
AI engineers set out to build god. But god is many things. Long before we build a deity of knowledge, an all-knowing entity that can solve every physical problem through its technical omnipotence, it seems we have built a different kind of god: a singular entity with the power to talk to the whole planet at once.
No matter what AI becomes, this is what AI already is: a globally scaled virtual interlocutor that can offer morsels of life advice wrapped in a mode of flattery that we have good reason to believe may increase narcissism and delusions among young and vulnerable users, respectively. I think this is something worth worrying about, whether you believe AI to be humankind’s greatest achievement or the mother of all pointless infrastructure bubbles. Long before artificial intelligence fulfills its purported promise to become our most important economic technology, we will have to reckon with it as a social technology.
As the son of a Jewish mother, I know too well what a little dose of hyperbolic praise can do to one’s ego in the long run.
The definition of narcissism offered in the study: “Narcissists feel superior to others, fantasize about personal successes, and believe they deserve special treatment. When narcissists feel humiliated, they are prone to lash out aggressively or even violently. Narcissists are also at increased risk for mental health problems, including drug addiction, depression, and anxiety. Research shows that narcissism is higher in Western than non-Western countries, and suggests that narcissism levels have been steadily increasing among Western youth over the past few decades.”
Those interested in this topic may find this formulation of accumulative vs. decisive risk interesting: https://link.springer.com/article/10.1007/s11098-025-02301-3
The social/political disruption strikes me as far more problematic at the moment than the more sci-fi world-ending scenarios. Humans (believe it or not) have done lots of violent and silly things based on disinformation and misinformation. Having your best bud chattyG feeding you information daily at a rapid pace probably isn’t going to help.
This piece reminds me of Jonathan Haidt's three great untruths, one of which is "always trust your feelings." This untruth is manifest in today's culture through concepts such as safe spaces, micro-aggressions, trigger warnings, and so forth, which seek to protect young people from normal human discomforts. The problem is that these discomforts are learning experiences that help socialize people and prepare them to thrive in a world that is often not kind or caring, or at least not particularly interested in their problems or anxieties. It seems that chatbot therapists that always validate the feelings of clients (if that's the right word) will simply exacerbate a problem with young people that is already quite real, and people will continue down a maladaptive path of alienation from the society in which they live.