Authentic Intelligence in Healthcare AI | Context, Explainability, Bias & Human-in-the-Loop Design copertina

Authentic Intelligence in Healthcare AI | Context, Explainability, Bias & Human-in-the-Loop Design

Authentic Intelligence in Healthcare AI | Context, Explainability, Bias & Human-in-the-Loop Design

Ascolta gratuitamente

Vedi i dettagli del titolo

3 mesi a soli 0,99 €/mese

Dopo 3 mesi, 9,99 €/mese. Si applicano termini e condizioni.

A proposito di questo titolo

Send us a text

“Authentic intelligence” is not just smarter AI. It’s AI that behaves closer to human reasoning by understanding context, recognizing limits, and supporting human judgment.

Recorded live at the Data First Conference in Las Vegas, this episode of The Signal Room features Keshavon Shashari, Senior Machine Learning Engineer at Prudential Financial, in a practical conversation about what it takes to design AI systems that can be trusted in high-stakes environments like healthcare.

We explore why context is everything in clinical and administrative workflows, and why general-purpose large language models should not be treated like physicians. Keshavon breaks down four critical categories of healthcare context (patient, task, human availability, and institutional/regulatory requirements), and explains how modern AI systems should include confidence thresholds, risk-aware checkpoints, guardrails, and evaluation frameworks so humans stay in the loop—especially for diagnosis, surgery, and other regulated decisions.

We also dive into explainability and transparency: how logging, tool tracing, and agent-level reasoning can make AI actions auditable, and how feedback loops (including reinforcement learning from human feedback) can reduce bias over time.

If you are building healthcare AI, leading data/AI strategy, or evaluating clinical AI solutions, this episode provides a clear framework for designing systems that are safer, more explainable, and more context-aware.

Key topics covered

  • What “authentic intelligence” means vs artificial intelligence
  • Why context is everything in healthcare AI
  • Four types of context: patient, task, human availability, and institutional/regulatory context
  • Why general-purpose LLMs are not “doctors”
  • Human-in-the-loop design: confidence thresholds and risk-aware deferral
  • Guardrails, eval sets, testing mechanisms, and compliance considerations
  • Explainability and transparency: logging, tool tracing, and agent reasoning
  • Bias, data quality, and reinforcement learning from human feedback
  • How to prevent “technically correct, contextually wrong” outcomes

Chapters

00:00 Authentic intelligence and context awareness
00:45 Live from Data First Conference (Las Vegas)
02:20 What authentic intelligence means in practice
03:30 Four types of context in healthcare AI
06:25 Training, fine-tuning, and context engineering
08:15 Specialty workflows and domain-specific models
09:50 Why AI is not a doctor (yet)
12:00 Confidence scores, risk, and human deferral
15:05 Bias, explainability, and transparency requirements
18:00 Logging, tool tracing, and auditability
20:10 Technically correct but contextually wrong examples
24:20 What builders should focus on now
26:20 Guardrails, evals, and regulated environments
27:10 How to reach Keshavon

Stay tuned. Stay curious. Stay human.

#HealthcareAI #ResponsibleAI #ExplainableAI

Support the show

Ancora nessuna recensione