10 Comments
CDinWeChe:

This piece reminds me of Jonathan Haidt's three great untruths, one of which is "always trust your feelings." This untruth is manifest in today's culture through concepts such as safe spaces, micro-aggressions, trigger warnings and so forth, which seek to protect young people from normal human discomforts. The problem is that these discomforts are learning experiences that help socialize people and prepare them to thrive in a world that is often not kind or caring, or at least not particularly interested in their problems or anxieties. It seems that chatbot therapists that always validate the feelings of clients (if that's the right word) will simply exacerbate a problem with young people that is already quite real, and people will continue down a maladaptive path of alienation from the society in which they live.

Substack Joe:

Those interested in this topic may find this formulation of accumulative vs. decisive risk interesting: https://link.springer.com/article/10.1007/s11098-025-02301-3

The social/political disruption strikes me as far more problematic at the moment than the more sci-fi world-ending scenarios. Humans (believe it or not) have done lots of violent and silly things based on disinformation/misinformation. Having your best bud chattyG feeding you information daily at a rapid pace probably isn't going to help.

Brendan B:

AI has the potential to be a catastrophe, which social media already is, I'd argue. Convenience and the algorithms lure people into spending countless hours on social media, but at least the conversations are with real people, and sometimes even unpleasant, you know, like real life. With AI you combine the always-available-algorithmic-encouragement-for-profit-maximization nature of social media with the ability to strip out anything you might find the slightest bit objectionable. It'll be like jumping from alcohol to fentanyl, and will make our social media addiction seem quaint by comparison.

Christine Fernandez:

Along with continually refining AI to overcome these problems - and articles like yours are important alarms - we need to develop mandatory educational programs that help children and adults understand the limitations of AI AND learn how to counteract them with reality checks among trusted family and friends and trained professionals.

We also need to establish global committees that have expertise and authority over the individual corporate players, and ensure that AI tools primarily benefit humanity, not just the bottom line. In today's politically fractured environment, that is like asking the world to come together on climate change, which is near impossible. Still, we have to try.

Brendan B:

Technology, pretty much by definition, makes our lives easier. And often that is the problem with it. You do not build meaning in your life by having everything done for you, by not working through any challenges. At its best, technology relieves us of tasks that were pure drudgery, the completion of which did not actually enrich our lives or leave us with a sense of accomplishment. Even then there are often unintended consequences, such as a drop in social cohesion and personal resilience. But now that the drudgery we're about to be relieved of is using our own brains to think for ourselves and interact with other people, we are about to see people atrophy mentally as badly as we have already atrophied physically.

Twirling Towards Freedom:

"Personality at scale". I've thought about the harms of chatbots, but honestly that hadn't crossed my mind, and its absolutely a problem.

There's a terrific video clip of the writer Freya India that's going viral right now where she talks about the dangers of having such a frictionless society.

https://substack.com/@gurwinder/note/c-143023964

mathew:

Real people are imperfect. They have bad days, they get frustrated. They get angry, sometimes they snap at you. Sometimes they disagree with you.

Learning how to deal with this is important. These are real connections with real people.

It is bad for you to try to replace them with a fake person.

David Roberts:

AI is not going away so parents and teachers will have to model and teach the right ways to use it and warn against the wrong ways. I don't see any other way to address AI's downsides. As you suggest, panic is not the answer.

MB:

My Reddit feed surfaced a surprising number of threads this past week with users extremely upset about ChatGPT 4o being removed (to the point that OpenAI brought it back).

I mainly use AI for technical use cases (coding, stats, etc.) or as an enhanced Google search for fact finding type questions. I found 4o to be pretty terrible because it would often hallucinate and would never challenge my ideas—its agreeableness was off the charts in a detrimental way (I’ve found Gemini and o3 to be better critical thinkers).

And this makes sense in the context of Derek's article. A lot of the upset users found 4o to be reassuring, with abundant positivity and agreeableness, to the point that they became dependent on the artificial support they received from it.

I can see the value of having some sort of a companion, but I do think the "personality" of the bot matters a lot. And leaving that up to a small handful of people to decide based on system prompts is very scary.

mathew:

The value of such a companion is fake. It's not helping you, it's hurting you.
