Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing deeply concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.
The highest-profile examples of this phenomenon — what's being called "AI psychosis" — have made headlines across the media for months. But this isn't just about isolated edge cases. It's the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale.
Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding — vulnerabilities we've never had to name before, because technology has never been able to exploit them like this.
In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.
If we're going to do something about this growing problem of AI-related psychological harms, we're going to need to understand it even more deeply. And to do that, we need more data. That's why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIHPRA.org.
This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988, or contact your local emergency services.
RECOMMENDED MEDIA
The website for the AI Psychological Harms Research Coalition
Further reading on AI psychosis
The Atlantic article on people outsourcing their thinking to AI
Further reading on David Sacks' comparison of AI psychosis to a "moral panic"
RECOMMENDED YUA EPISODES
How OpenAI's ChatGPT Guided a Teen to His Death
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
Rethinking School in the Age of AI
CORRECTIONS
After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.
Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State system.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.