Disruption Now Episode 190 | What Is Explainable AI?
About this title
Dr. Kelly Cohen is a Professor of Aerospace Engineering at the University of Cincinnati and a leading authority in explainable, certifiable AI systems. With more than 31 years of experience in artificial intelligence, his research focuses on fuzzy logic, safety-critical systems, and responsible AI deployment in aerospace and autonomous environments. His lab’s work has received international recognition, with students earning top global research awards and building real-world AI products used in industry.
In episode 190 of the Disruption Now Podcast, 🔬 Dr. Cohen explains:
What explainable AI really means for clinicians
How transparent models improve patient safety
Strategies to reduce algorithmic bias in healthcare systems
Real examples of XAI in diagnostics & treatment
This video is essential for tech leaders, AI researchers, data scientists, clinicians, and anyone interested in ethical, trustworthy AI in medicine.
📅 CHAPTERS / TIMESTAMPS
00:00 Introduction — Why XAI in Healthcare
02:15 Kelly Cohen Bio & Expertise
05:40 What Explainable AI Actually Is
11:20 Challenges in Medical AI Adoption
16:50 Case Study: XAI in Diagnostics
22:10 Reducing Bias in ML Models
28:35 Regulatory & Ethical Standards
33:50 Future of Explainability in Medicine
39:25 Audience Q&A Highlights
44:55 Final Thoughts & Next Steps
💡 Q&A SNIPPET
Q: What is explainable AI?
A: Explainable AI refers to systems where decisions can be understood, traced, and validated — critical for safety-critical applications like aerospace, healthcare, and autonomous vehicles.
Q: Why is black-box AI dangerous?
A: Without transparency, errors cannot be audited, responsibility is unclear, and humans become unknowing test subjects.
Q: What is insurable AI?
A: Insurable AI is AI that has been tested, quantified for risk, and certified to the point where insurers are willing to underwrite it — creating real accountability.
🔗 RESOURCES & HANDLES
Dr. Kelly Cohen LinkedIn:
🔗 https://www.linkedin.com/in/kelly-cohen-phd
Mentioned Concepts:
✔ Explainable AI (XAI)
✔ Model interpretability
✔ Algorithmic bias & fairness
🎧 About This Channel
Disruption Now makes technology accessible and human-centric. Our mission is to demystify complex systems and open conversations across domains that are often hard to grasp — from politics to emerging tech, ethics, civic systems, and more — so every viewer can engage thoughtfully and confidently. We disrupt the status quo.
🔗 Follow & Connect
👤 Rob Richardson
Founder, Strategist, Curator
X (Twitter): https://x.com/RobforOhio
Instagram: https://instagram.com/RobforOhio
Facebook: https://www.facebook.com/robforohio/
LinkedIn: https://www.linkedin.com/in/robrichardsonjr
🌐 Disruption Now
Human-centric tech, culture & conversation
YouTube (Core Channel): https://www.youtube.com/channel/UCWDYBJSzBoqgCd1ADPVttSw
TikTok: https://www.tiktok.com/@disruptionnow
Instagram: https://www.instagram.com/disrupt.art/
X (Twitter): https://twitter.com/DisruptionNow
Clubhouse: https://www.clubhouse.com/@disruptionnow
📅 MidwestCon Week
The Midwest’s human-centered tech & innovation week — Sept 8–11, 2026 in Cincinnati, Ohio 🌆
MidwestCon Week brings together builders, policymakers, creators, and learners for multi-sector conversations on technology, policy, and inclusive innovation.
Official Website: https://midwestcon.live/
Instagram: https://www.instagram.com/midwestcon_/
TikTok: https://www.tiktok.com/@midwestcon
🔔 Subscribe for
Deep, human-centric tech exploration
Conversations breaking down barriers to understanding
Event updates including MidwestCon Week
Culture, policy, and innovation insights
#DisruptionNow #TechForAll #MidwestConWeek
Music Credit: Lofi Music HipHop Chill 2 - DELOSound