Episodes

  • Attachment Hacking and the Rise of AI Psychosis
    Jan 21 2026

    Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.

    The highest-profile examples of this phenomenon — what’s being called “AI psychosis” — have made headlines for months. But this isn't just about isolated edge cases. It’s the emergence of an entirely new “attachment economy” designed to exploit our deepest psychological vulnerabilities on an unprecedented scale.

    Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this.

    In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.

    If we're going to do something about this growing problem of AI-related psychological harms, we need to understand it even more deeply. And to do that, we need more data. That’s why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIHPRA.org.

    This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988, or contact your local emergency services.

    RECOMMENDED MEDIA

    The website for the AI Psychological Harms Research Coalition

    Further reading on AI Psychosis

    The Atlantic article on outsourcing our thinking to AI

    Further reading on David Sacks’ comparison of AI psychosis to a “moral panic”

    RECOMMENDED YUA EPISODES

    How OpenAI's ChatGPT Guided a Teen to His Death

    People are Lonelier than Ever. Enter AI.

    Echo Chambers of One: Companion AI and the Future of Human Connection

    Rethinking School in the Age of AI

    CORRECTIONS

    After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.

    Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State System.


    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    51 min
  • What Would It Take to Actually Trust Each Other? The Game Theory Dilemma
    Jan 8 2026

    So much of our world today can be summed up in the cold logic of “if I don’t, they will.” This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand.

    This way of thinking can feel inescapable, like a fundamental law of human nature. But our guest today argues that it doesn’t have to be this way: the logic of game theory is a human invention, a way of thinking that we’ve learned — and that we can unlearn by daring to trust each other again. It’s critical that we do, because AI is the ultimate agent of game theory, and once it’s fully entangled in our lives, we might be permanently stuck in a game-theory world.

    In this episode, Tristan and Aza explore the game theory dilemma — the idea that if I adopt game theory logic and you don’t, you lose — with Dr. Sonja Amadae, a professor of Political Science at the University of Helsinki. She's also the director at the Centre for the Study of Existential Risk at the University of Cambridge and the author of “Prisoners of Reason: Game Theory and the Neoliberal Economy.”

    RECOMMENDED MEDIA

    “Prisoners of Reason: Game Theory and the Neoliberal Economy” by Sonja Amadae (2015)

    The Cambridge Centre for the Study of Existential Risk

    “Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern (1944)

    Further reading on the importance of trust in Finland

    Further reading on Abraham Maslow’s Hierarchy of Needs

    RAND’s 2024 Report on Strategic Competition in the Age of AI

    Further reading on Marshall Rosenberg and nonviolent communication

    The study on self/other overlap and AI alignment cited by Aza

    Further reading on The Day After (1983)

    RECOMMENDED YUA EPISODES

    America and China Are Racing to Different AI Futures

    The Crisis That United Humanity—and Why It Matters for AI

    Laughing at Power: A Troublemaker’s Guide to Changing Tech

    The Race to Cooperation with David Sloan Wilson

    CLARIFICATIONS

    • The proposal for a federal preemption on AI was enacted by President Trump on December 11, 2025, shortly after this recording.
    • Aza said that "The Day After" was the most watched TV event in history when it aired. It was actually the most watched TV film; the most watched TV event was the finale of M*A*S*H.


    45 min
  • America and China Are Racing to Different AI Futures
    Dec 18 2025

    Is the US really in an AI race with China—or are we racing toward completely different finish lines?

    In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China's AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From the streets of Shanghai to high-level policy discussions, Xu and Sheehan paint a nuanced portrait of AI in China that defies both hawkish fears and naive optimism.

    If we're going to avoid a catastrophic AI arms race, we first need to understand what race we're actually in—and whether we're even running toward the same finish line.

    Note: On December 8, after this recording took place, the Trump administration announced that the Commerce Department would allow American semiconductor companies, including Nvidia, to sell their most powerful chips to China in exchange for a 25 percent cut of the revenue.

    RECOMMENDED MEDIA

    “China's Big AI Diffusion Plan is Here. Will it Work?” by Matt Sheehan

    Selina’s blog

    Further reading on China’s AI+ Plan

    Further reading on the Gaither Report and the missile gap

    Further Reading on involution in China

    The consensus from the international dialogues on AI safety in Shanghai

    RECOMMENDED YUA EPISODES

    The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

    AI Is Moving Fast. We Need Laws that Will Too.

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao



    58 min
  • AI and the Future of Work: What You Need to Know
    Dec 4 2025

    No matter where you sit within the economy, whether you're a CEO or an entry-level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what's actually happening in today's labor market and what's likely coming in the near term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?

    Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's the author of Co-Intelligence: Living and Working with AI.

    Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI's real-time impact on the labor market.

    RECOMMENDED MEDIA

    Co-Intelligence: Living and Working with AI by Ethan Mollick

    Further reading on Molly’s study with the Yale Budget Lab

    The “Canaries in the Coal Mine” Study from Stanford’s Digital Economy Lab

    Ethan’s substack One Useful Thing

    RECOMMENDED YUA EPISODES

    Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel

    ‘We Have to Get It Right’: Gary Marcus On Untamed AI

    AI Is Moving Fast. We Need Laws that Will Too.

    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

    CORRECTIONS

    1. Ethan said that in 2022, experts believed there was a 2.5% chance that ChatGPT would be able to win the Math Olympiad. However, that was only among forecasters with more general knowledge (the exact number was 2.3%). Among domain expert forecasters, the odds were an 8.6% chance.
    2. Ethan claimed that over 50% of Americans say that they’re using AI at work. We weren’t able to independently verify this claim, and most studies we found showed lower rates of reported AI use among American workers. There are reports from other countries, notably Denmark, which show higher rates of AI use.
    3. Ethan indirectly quoted the Walmart CEO Doug McMillon as having a goal to “keep all 3 million employees and to figure out new ways to expand what they use.” In fact, McMillon’s language on AI has been much softer, saying that “AI is expected to create a number of jobs at Walmart, which will offset those that it replaces.” Additionally, Walmart has 2.1 million employees, not 3 million.


    45 min
  • Feed Drop: "Into the Machine" with Tobias Rose-Stockwell
    Nov 13 2025

    This week, we’re bringing you Tristan’s conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.”  Tobias is a designer, writer, and technologist and the author of the book “The Outrage Machine.”

    Tobias and Tristan had a critical, sobering, and surprisingly hopeful conversation about the current path we’re on with AI and the choices we could make today to forge a different one. This interview clearly lays out the stakes of the AI race and helps to imagine a more humane AI future—one that is within reach, if we have the courage to make it a reality.

    If you enjoyed this conversation, be sure to check out and subscribe to “Into the Machine”:

    YouTube: Into the Machine Show

    Spotify: Into the Machine

    Apple Podcasts: Into the Machine

    Substack: Into the Machine

    You may have noticed that on this podcast, we have been trying to focus a lot more on solutions. Our episode last week imagined what the world might look like if we had fixed social media, and all the things we could've done to make that possible. We'd really love to hear from you about these solutions and any other questions you're holding. So if you have more thoughts or questions, please send us an email at undivided@humanetech.com.



    1 hr 5 min
  • What if we had fixed social media?
    Nov 6 2025

    We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path?

    In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like—one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them.

    This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    Dopamine Nation by Anna Lembke

    The Anxious Generation by Jon Haidt

    More information on Donella Meadows

    Further reading on the Kids Online Safety Act

    Further reading on the lawsuit filed by state AGs against Meta

    RECOMMENDED YUA EPISODES

    Future-proofing Democracy In the Age of AI with Audrey Tang

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    AI Is Moving Fast. We Need Laws that Will Too.



    17 min
  • Ask Us Anything 2025
    Oct 23 2025

    It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more unpredictable than the last. We’re starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots—with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they've created make that outcome nearly impossible.

    It’s enough to make anyone’s head spin. In this year’s Ask Us Anything, we try to make sense of it all.

    You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn't already here, just hiding its capabilities? What does a good future with AI actually look like—and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week’s episode.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    The system card for Claude 4.5

    Our statement in support of the AI LEAD Act

    The AI Dilemma

    Tristan’s TED talk on the narrow path to a good AI future

    RECOMMENDED YUA EPISODES

    The Man Who Predicted the Downfall of Thinking

    How OpenAI's ChatGPT Guided a Teen to His Death

    Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

    War is a Laboratory for AI with Paul Scharre

    No One is Immune to AI Harms with Dr. Joy Buolamwini

    “Rogue AI” Used to be a Science Fiction Trope. Not Anymore.

    Correction: When this episode was recorded, Meta had just released the Vibes app the previous week. Now it’s been out for about a month.



    41 min
  • The Crisis That United Humanity—and Why It Matters for AI
    Sep 11 2025

    In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on earth if we didn’t do something about it. Then, something amazing happened: humanity rallied together to solve the problem.

    Just two years later, representatives from all 198 UN member nations came together in Montreal, Canada to sign an agreement to phase out the chemicals causing the ozone hole. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

    So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon, one of the scientists who assessed the ozone hole in the mid-80s and who watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize awarded to the IPCC for its work in combating climate change.

    Susan's 2024 book “Solvable: How We Healed the Earth, and How We Can Do It Again,” explores the playbook for global coordination that has worked for previous planetary crises.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    “Solvable: How We Healed the Earth, and How We Can Do It Again” by Susan Solomon

    The full text of the Montreal Protocol

    The full text of the Kigali Amendment

    RECOMMENDED YUA EPISODES

    Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook

    Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI

    AI Is Moving Fast. We Need Laws that Will Too.

    Big Food, Big Tech and Big AI with Michael Moss

    CORRECTIONS

    Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.

    Tristan incorrectly stated the host country of the international dialogues on AI safety as Beijing. They were actually in Shanghai.



    52 min