The Signal Room | AI Strategy, Ethical AI & Regulation

By: Chris Hutchins | Healthcare AI Strategy Readiness & Governance

About this title

Healthcare AI leadership, ethics, and LLM strategy—hosted by Chris Hutchins.
The Signal Room explores how healthcare leaders, data executives, and innovators navigate AI readiness, governance, and real-world implementation. Through authentic conversations, the show surfaces the signals that matter at the intersection of healthcare ethics, large language models (LLMs), and executive decision-making.

© 2026 The Signal Room | AI Strategy, Ethical AI & Regulation
Economy
  • Why Healthcare AI Fails Without Complete Medical Records: Interoperability, Transparency & Patient Access
    Jan 21 2026

    Healthcare AI cannot deliver precision medicine without complete, interoperable medical records; they are the foundation of responsible AI implementation in healthcare. In this episode, recorded live at the Data First Conference in Las Vegas, Aleida Lanza, founder and CEO of Casedok, draws on her 35 years as a medical malpractice paralegal to explain why fragmented records and inaccessible data continue to undermine care quality, safety, and trust in healthcare AI.

    We dive deep into why interoperability must extend beyond the core clinical record to include the full spectrum of healthcare data: images, itemized bills, claims history, and even records trapped in paper or PDFs. Aleida argues that giving patients ownership of and transparency into their health information, a critical element of healthcare ethics, is key to overcoming these challenges and enabling ethical leadership in healthcare AI.

    This episode also highlights the significant risks posed by missing data bias in healthcare AI, explaining how incomplete records prevent AI systems from accurately detecting patient needs. Aleida outlines how complete medical record transparency and safe AI collaboration can transform healthcare from static averages to truly personalized, informed care, aligning with principles of ethical AI and responsible AI deployment.

    If you're involved in healthcare leadership, AI strategy, data governance, or healthcare ethics, this episode offers valuable perspectives on AI readiness, healthcare AI regulation, and the urgent need to improve interoperability for better patient outcomes.

    Key topics covered

    • Why interoperability must include the entire medical record
    • Patient ownership, transparency, and access to health data
    • The hidden cost of fragmented records and repeated history-taking
    • Why static averages fail patients and clinicians
    • Precision medicine vs static medicine
    • Safe AI deployment without hallucination or data leakage
    • Missing data as the most dangerous bias in healthcare AI
    • Emergency access to complete history as a patient safety issue
    • Medicare, payer integration, and large-scale access challenges

    Chapters

    00:00 Live from Data First Conference
    01:20 Why interoperability is more than clinical data
    03:40 Fragmentation, static medicine, and broken incentives
    05:55 Why AI needs complete patient history
    08:10 Missing data as invisible bias
    10:55 Emergency care and inaccessible records
    12:40 Patient ownership and transparency
    14:30 Precision medicine and AI safety
    16:10 Why patients should own what they paid for
    18:30 How to connect with Aleida Lanza

    Stay tuned. Stay curious. Stay human.

    #HealthcareAI #Interoperability #PatientData

    16 min
  • AI Ethics & Ethical Leadership in Healthcare: Building Trust Without Losing Humanity
    Jan 14 2026

    Recorded live at the Put Data First AI conference in Hollywood, Las Vegas, this episode of The Signal Room features a deep conversation between Chris Hutchins and Asha Mahesh, an expert in AI ethics, ethical leadership, and responsible data use in healthcare. The discussion goes beyond hype to examine what it truly means to humanize AI for care and build trust through ethical leadership and sound AI strategy.

    Asha shares her personal journey into ethics and technology, shaped by lifelong proximity to healthcare and a commitment to ensuring innovation serves patients, clinicians, and communities. Together, they explore how ethical AI in healthcare is not just a policy document, but a way of working embedded into culture, incentives, and daily decision-making.

    Key themes include building trust amid skepticism, addressing fears of job displacement, and reframing AI adoption through a 'what's in it for you' lens. Real-world examples from COVID vaccine development show how AI, guided by purpose and urgency, can accelerate clinical trials without sacrificing responsibility.

    The conversation also discusses human-in-the-loop systems, the irreplaceable roles of empathy and judgment, and the importance of transparency and humility in healthcare leadership. This episode is essential listening for healthcare leaders, life sciences professionals, and AI practitioners navigating the ethical crossroads of trust and innovation.


    Chapters

    00:00 – Live from Put Data First: Why AI Ethics Matters in Healthcare
    Chris Hutchins opens the conversation live from the Put Data First AI conference in Las Vegas, framing why ethics, privacy, and trust are amplified challenges in healthcare and life sciences.

    01:05 – Asha’s Path into AI Ethics, Privacy, and Life Sciences
    Asha shares her personal journey into healthcare technology, data, and AI ethics, shaped by early exposure to hospitals, science, and real-world impact.

    03:00 – Human Impact as the North Star for Healthcare AI
    Why improving patient outcomes, not technology novelty, must guide AI strategy, data science, and innovation decisions in healthcare.

    04:30 – Humanizing AI for Care: Purpose Before Technology
    A discussion on what “human-centered AI” really means and how intention and intended use define whether AI helps or harms.

    06:20 – Embedding Ethics into Culture, Not Policy Documents
    Why ethical AI is not a checklist or white paper, but a set of behaviors, incentives, and ways of working embedded into organizational culture.

    07:55 – COVID Vaccine Development: AI Done Right
    A real-world example of how data, machine learning, and predictive models accelerated clinical trials during the pandemic while maintaining responsibility.

    10:15 – Mission Over Technology: Lessons from the Pandemic
    How urgency, shared purpose, and collaboration unlocked innovation faster than tools alone, and why that mindset should not require a crisis.

    12:20 – The Erosion of Trust in Institutions and Technology
    Chris reflects on declining trust in government, healthcare, and technology, and why AI leaders must now operate from a trust deficit.

    14:10 – Fear and AI: Addressing Job Loss Concerns
    A practical conversation on why fear of AI replacing jobs persists and how leaders can reframe AI as support, not replacement.

    16:30 – “What’s In It for You?” A Human-Centered Adoption Framework
    How focusing on individual value, workflow relief, and personal benefit increases trust and adoption of AI tools in healthcare and life sciences.

    18:00 – How Human Should AI Be?

    22 min
  • Why Healthcare Isn’t Ready for AI Yet | Emotional Readiness, Just Culture & Leadership Trust
    Jan 7 2026

    Healthcare can’t be technologically ready for AI until it’s emotionally ready first.

    In this episode of The Signal Room, host Chris Hutchins sits down with Susie Brannigan — a trauma-informed nurse executive, Just Culture leader, and AI ethics advocate — to explore the human readiness gap in healthcare transformation.

    Susie explains why trust must be rebuilt before new systems (Epic, AI, automation) can succeed, and how leaders can shift culture from blame to learning, from burnout to belonging. Drawing on real unit experience and frontline realities, she breaks down what emotionally safe leadership looks like during implementation, why “pilot” language often erodes credibility, and how Just Culture and trauma-informed leadership create the psychological safety required for change.

    We also discuss where AI can genuinely help clinicians (and where it can go too far), including guardrails for empathy, presence, and patient-facing AI interactions. If you’re leading digital transformation, managing workforce fatigue, or trying to implement AI without losing your people, this conversation is a practical guide.

    Key topics covered

    • The human readiness gap: emotional readiness before technological readiness
    • Trust erosion in healthcare leadership and why it blocks adoption
    • Epic implementation lessons: skill gaps, overtime, and unit-level support
    • What Just Culture is and how it reduces fear and turnover
    • Trauma-informed leadership and psychological safety on high-acuity units
    • Emotional intelligence alongside data literacy as a core leadership skill
    • Designing AI with empathy, guardrails, and clinical accountability
    • Practical advice for leaders: rounding with purpose, supporting staff, choosing sub-leaders

    Chapters

    00:00 Emotional readiness and the human readiness gap
    01:10 Why implementations fail without trust
    07:20 Epic vs AI: why this shift feels different
    09:10 What Just Culture is and why it works
    11:20 Trauma-informed leadership and secondary trauma
    19:40 Emotional intelligence in tech-driven environments
    22:10 AI, empathy, and guardrails for patient-facing tools
    29:30 Coaching and simulation: preparing nurses for crisis care
    34:40 Leadership advice for AI-era change
    38:20 How to connect with Susie Brannigan
    42:10 Closing

    Connect with Susie Brannigan

    • LinkedIn: Susie Brannigan
    • Business page: Susie Brannigan Consulting
      (Susie shares culture assessments, Just Culture training, trauma-informed training, and leadership support across healthcare and other industries.)

    If this episode resonated, share it with a leader who’s trying to implement change without losing trust. The future of healthcare transformation depends on psychological safety.

    Stay curious. Stay human.

    #JustCulture #HealthcareLeadership #AIinHealthcare

    38 min
No reviews yet