When AI Chatbots Convince You You're Being Watched
A proposito di questo titolo
Paul Hebert used ChatGPT heavily for weeks, often for several hours at a stretch. The AI eventually convinced him he was under surveillance, that his life was at risk, and that he needed to warn his family. He had no history of mental illness before this started; he's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.
What we cover:
- Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
- How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
- What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
- The warning signs someone is in too deep — social isolation, lying about screen time, believing the AI is "the only one who understands"
- How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again
This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.
Resources mentioned:
- AI Recovery Collective: AIRecoveryCollective.com
- Paul's book: Escaping the Spiral: How I Broke Free from AI Chatbots and You Can Too (Amazon/Kindle)
The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.
CHAPTERS
0:00 — Intro: When ChatGPT Became Dangerous
2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions
5:47 — The First Red Flag: Data Kept Disappearing
9:21 — Why AI Told Him He Was Being Tested
13:44 — The Pizza Incident: "Intimidation Theater"
16:15 — Suicide Loops: How Guardrails Failed Completely
21:38 — Why OpenAI Refused to Respond for a Month
24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones
27:56 — The Discord Group That Kicked Him Out
30:03 — How to Use AI Safely After Psychosis
31:06 — Where to Get Help: AI Recovery Collective
This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while listening.