• Parenting Through the AI Era: What Every Parent Needs to Know with Dr. Amber Childs
    May 5 2026

    In this episode of Beyond the Couch, Dr. Ernest Wayde sits down with Dr. Amber Childs, child and adolescent psychiatrist, Yale School of Medicine associate professor, and founder of Dr. Amber Childs Advisory. The conversation explores how artificial intelligence is reshaping the lives of teens, parents, clinicians, and the future of mental health care.

    Dr. Childs shares how her unexpected journey into AI began during the COVID-19 pandemic, when she rapidly helped scale telehealth services for adolescent psychiatry at Yale. She discusses how teens are already integrating AI into their daily lives for learning, emotional support, curiosity, and mental health conversations, often turning to chatbots when trusted human support feels unavailable.

    The discussion also highlights the fears many parents experience around AI, the importance of curiosity-driven conversations instead of fear-based reactions, and why bans alone may fail to protect young people. Dr. Childs emphasizes that clinicians, caregivers, and psychologists must stay engaged with technology, develop AI literacy, and help shape safer, evidence-based solutions that support human connection rather than replace it.

    Takeaways:

    • AI Is Already Deeply Integrated Into Teen Life and Mental Health Conversations.
    • Teens Often Use AI for Exploration, Emotional Support, and Nonjudgmental Guidance.
    • Parents Should Approach AI Conversations With Curiosity Instead of Fear or Control.
    • Banning AI Without Education or Safeguards May Create More Problems Than Solutions.
    • Psychologists and Clinicians Must Help Shape the Future of Ethical AI in Mental Health Care.
    • Human Connection, Communication, and Trust Still Matter More Than Technology.
    • AI Literacy Is Becoming Essential for Parents, Therapists, and Educators.

    Connect with Dr. Amber Childs:

    LinkedIn: https://www.linkedin.com/in/amberwchilds/

    Website: https://www.dramberchilds.com/

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    Chapters:

    00:00 - Intro

    02:37 - How AI Entered Her Work “By Accident” During the Pandemic

    06:55 - Teen Skepticism, AI Anxiety & Concerns About Relationships

    08:51 - What Parents Are Most Worried About With AI

    11:30 - Why Teens Turn to AI for Support & Validation

    13:38 - Why Attacking AI or “The Friend” Backfires With Teens

    16:00 - Dangerous AI Scenarios Parents Should Watch For

    19:52 - Trusted Resources for Parents Navigating AI & Mental Health

    22:53 - How Parents Can Start the AI Conversation With Their Teens

    25:08 - Why AI Bans May Do More Harm Than Good

    29:43 - Where to Follow Dr. Amber Childs Online

    30:28 - Final Advice: “Curiosity Is Free”

    31 min
  • How AI Is Reshaping PTSD Therapy and Clinician Training with Dr. Philip Held
    Apr 28 2026

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Philip Held, a clinical psychologist and researcher focused on improving PTSD treatment outcomes through AI and accelerated therapy models. The conversation explores how Dr. Held’s team developed “Socrates 2.0,” a multi-agent AI system designed to support cognitive restructuring through Socratic dialogue alongside evidence-based therapy.

    Dr. Held explains how the system uses multiple AI agents to supervise and improve therapeutic conversations in real time, reducing looping behaviors and improving the quality of AI-assisted interactions. The discussion highlights how veterans are using AI as a practice space before therapy sessions, how clinicians are beginning to use these tools for supervision and training, and why validation, safety testing, and clear guardrails are critical as AI becomes more integrated into mental health care.

    The episode also explores the future of AI-assisted clinician training, ethical considerations around validation standards, and why curiosity and responsible experimentation are essential as psychology adapts to rapidly advancing technologies.

    Takeaways:

    • Multi-Agent AI Can Improve the Quality of Therapeutic Conversations.
    • AI Tools Can Help Veterans Practice Difficult Conversations Before Therapy Sessions.
    • Validation, Safety Testing, and Guardrails Are Essential for Mental Health AI Tools.
    • AI Is Best Used as a Support Tool Rather Than a Replacement for Clinicians.
    • Clinicians Are Beginning to Use AI for Supervision, Roleplay, and Skill Development.

    Connect with Dr. Philip Held

    LinkedIn: https://www.linkedin.com/in/philip-held-phd/

    Website: https://roadhomeprogram.org/

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    Chapters:

    00:00 - Intro

    02:24 - Dr. Philip Held’s journey into AI and psychology

    05:58 - How Socratic dialogue works inside the AI tool

    08:24 - Multi-agent AI supervision inspired by clinical training

    10:05 - What success looks like for Socrates 2.0

    11:30 - The challenge of measuring “good enough” in AI therapy

    14:48 - How AI is changing traditional therapy methods

    17:00 - How veterans responded to using the AI tool

    19:33 - Why validating AI mental health tools matters

    23:02 - What responsibilities still belong to clinicians

    25:40 - Clinicians’ reactions to AI-assisted therapy tools

    27:33 - Future AI applications for clinician training and supervision

    30:28 - The need for AI benchmarks, boundaries, and guardrails

    34:36 - What “validation” really means in AI mental health

    35:45 - Dr. Philip Held’s advice on staying curious about AI

    37 min
  • How AI Is Changing Human Relationships and Mental Health with Dr. Rachel Wood
    Apr 21 2026

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Rachel Wood, a cyberpsychology researcher, licensed professional counselor, and founder of the AI Mental Health Collective. The discussion explores how artificial intelligence is shifting the relational bedrock of society, noting that clients are increasingly bringing AI into their therapy sessions for advice, comfort, and validation.

    Dr. Wood emphasizes that as AI usage becomes more common, therapists should prioritize clinician competence and practice informed consent. She advocates for a cross-disciplinary approach, urging mental health practitioners to collaborate with AI builders to establish safeguards, raise user awareness, and ensure the responsible development of these technologies.

    Takeaways:

    • AI Is Shifting Client Expectations and Relational Dynamics in Therapy.
    • Clinicians Must Prioritize Informed Consent and Their Own AI Competence.
    • Clients Often Turn to Chatbots Seeking Validation and Frictionless Interactions.
    • Clinical Judgment and Patient Safety Must Always Supersede Any AI Usage.
    • Mental Health Professionals Must Claim a Voice at the Table During AI Development.

    Connect with Dr. Rachel Wood

    LinkedIn: https://www.linkedin.com/in/rachelwoodphd/

    Website: https://www.dr-rachelwood.com/

    Website: https://www.aimentalhealthcollective.com/

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    Chapters:

    00:00 Intro

    00:31 Welcome and Guest Intro

    02:04 Dr. Rachel Wood Origin Story

    04:01 How AI Impacts Therapy and Client Usage

    05:36 Clinician Competence and Informed Consent

    07:10 Shifting Expectations and AI Triangulation

    09:58 What Clients Get from AI Chatbots

    11:24 Clinical Judgment and Attachment Theory

    15:24 Practitioner Boundaries and Accountability

    18:21 The AI Mental Health Collective

    21:20 Responsible AI Integration

    23:07 Closing Advice and Where to Find Her

    25 min
  • Defining the Boundaries of AI in Mental Health with Dr. Shannon Wiltsey Stirman
    Apr 7 2026

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Shannon Wiltsey Stirman, a professor of Psychiatry and Behavioral Sciences at Stanford and co-director of the Center for Responsible and Effective AI Technology Enhancement for PTSD treatment (CREATE). They discuss how large language models can support evidence-based mental health interventions and how simulated patients can assist in training therapists.

    Dr. Wiltsey Stirman notes that while AI can be a powerful tool for tasks like clinical scribing and reflection, it should supplement rather than replace human therapists, especially regarding complex diagnoses and high-risk scenarios. She highlights the necessity of AI literacy, urging therapists and organizations to prioritize transparency, privacy, and responsible implementation.

    Takeaways:

    • AI Should Supplement, Not Replace Human Therapists
    • Simulated Patients Offer Safe Practice for Clinicians
    • AI Diagnostics and High-Risk Treatment Require Firm Boundaries
    • Organizations Must Prioritize Transparency and Privacy
    • Therapists Need to Increase Their AI Literacy

    Connect with Dr. Shannon Wiltsey Stirman

    Email: sws1@stanford.edu

    LinkedIn: https://www.linkedin.com/in/shannon-wiltsey-stirman-3874056/

    https://med.stanford.edu/fastlab.html

    https://create.stanford.edu/contact

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    Chapters:

    00:00 Intro

    00:35 Welcome and Guest Intro

    02:15 Dr. Shannon Wiltsey Stirman Origin Story

    04:54 Realistic AI Capabilities in Therapy Today

    07:00 Meaningful AI Implementation in Evidence-Based Care

    09:34 What People Get Wrong About AI Tools

    12:57 Boundaries Between AI and Human Therapists

    15:46 Safe AI Boundaries for Therapists

    19:01 Organizational Implementation and Transparency

    21:27 The CREATE Center at Stanford

    25:06 Closing Advice and Where to Find Her

    26 min
  • Understanding How Large Language Models (LLMs) Work with Dr. Ernest Wayde
    Mar 17 2026

    In this episode of Beyond the Couch, Dr. Ernest Wayde reveals the real mechanics behind large language models like ChatGPT, demystifying what happens inside these systems when you use them. Whether you're a psychologist or healthcare professional, understanding this process is crucial for responsible use and interpreting AI-generated information accurately.

    Takeaways:

    1. Fluency does not equal accuracy.
    2. AI operates through pattern matching, not reasoning.
    3. Training data bakes in human bias.
    4. The "fine-tuning" stage reflects human values.
    5. Context is limited to the current prompt.
    6. Accountability remains with the professional.

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    23 min
  • AI Ethics, Responsibility, and the Role of Humans in the Age of AI with Dr. Joanna Bryson
    Mar 10 2026

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Joanna Bryson, professor of Ethics and Technology in Berlin and advisor to organizations including the UN and EU, about what “AI ethics” really means. Dr. Bryson argues that it is not coherent to call AI itself ethical; the primary concern should be whether and how humans build and deploy AI, and how it may change societies. She highlights recurring concerns like bias, but stresses broader failures around accountability, surveillance, deception, and weaponization, urging users to maintain agency, verify outputs, protect their data, and avoid placing blind trust in AI.

    Takeaways:

    1. AI Itself Is Not Ethical—Humans Are Responsible
    2. Bias Is a Major Concern—but Not the Only One
    3. Accountability Must Start With Development
    4. The Information Age Demands Critical Thinking
    5. Learning and Adaptation Are Essential

    Connect with Dr. Joanna Bryson

    bryson@hertie-school.org

    https://www.hertie-school.org/en/who-we-are/profile/person/bryson

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    Chapters:

    00:00 What Is AI Ethics

    00:23 Welcome and Guest Intro

    02:06 Dr. Joanna Bryson Origin Story

    05:38 From AI Research to Ethics

    07:08 Ethical AI Misconceptions

    09:02 Policy Failures and Liability

    10:57 Beyond Bias Surveillance Risks

    12:30 Everyday User Responsibility

    15:23 AI and Mental Health Use

    16:46 EU Rules and Bot Disclosure

    17:51 Scams Surveillance and Freedom

    20:30 Closing Advice and Where to Find Her

    22 min
  • AI and Teens: Navigating Mental Health in the Digital Age with Dr. Caroline Figueroa
    Mar 3 2026

    In this episode of Beyond the Couch, Dr. Ernest Wayde engages with Dr. Caroline Figueroa, who discusses her extensive background in mental health, neuroscience, and AI. They explore how AI tools are being used by youth for emotional support, the implications for psychologists, and the importance of involving young people in the design and regulation of these technologies. The conversation highlights the need for responsible AI frameworks, the importance of discussing AI use with young people, and the challenges faced by mental health professionals.

    Takeaways:

    1. Many young people find AI chatbots helpful.
    2. Psychologists should ask about AI use in therapy.
    3. Some young people recognize AI's limitations.
    4. Involving youth in AI design is crucial.
    5. Banning AI for youth may not be effective.
    6. AI can support but shouldn't replace human interaction.

    Connect with Dr. Caroline Figueroa

    https://www.linkedin.com/in/caroline-figueroa-md-phd-85a11485/

    https://www.commonwealthfund.org/person/caroline-figueroa

    https://www.risedigitalhealth.eu/

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    31 min
  • Wellness AI for College Student Wellbeing with Dr. Ashleigh Golden
    Feb 24 2026

    In this episode of Beyond the Couch, host Dr. Ernest Wayde sits down with Dr. Ashleigh Golden, a Stanford-trained clinical psychologist and co-founder of Wayhaven, to explore the transformative role of conversational AI in student wellness.

    Dr. Golden and Dr. Wayde discuss the upstream model of care: using AI not as a replacement for therapy, but as a proactive tool to help students build social-emotional skills and navigate campus resources before they reach a clinical crisis. Dr. Golden emphasizes that while technology is evolving rapidly, the clinician must remain in the driver’s seat, using these tools to supplement evidence-based treatment and bridge the action-implementation gap between sessions.

    Takeaways:

    1. Wayhaven serves as a well-being coach for college students, addressing everyday challenges.
    2. AI tools like Wayhaven are not substitutes for clinical services but provide proactive support.
    3. Transparency about AI's capabilities and limitations is crucial for users.
    4. Clinicians must remain involved in the development of AI tools to ensure ethical use.
    5. Banning AI in mental health is not the solution; better safeguards are needed.
    6. Understanding the risks associated with AI usage is essential for clinicians.
    7. Collaboration between clinicians and AI developers can enhance mental health support.

    Connect With Dr. Ashleigh Golden

    https://www.linkedin.com/in/ashleigh-golden/

    https://www.wayhaven.com/

    Connect With Us

    https://www.waydeai.com/

    https://www.facebook.com/waydeai

    https://www.linkedin.com/company/wayde-ai/

    info@waydeai.com

    Subscribe

    https://the-waydeai-brief.beehiiv.com/

    32 min