The Four Pillars of Trustworthy AI—and Who Owns Them

About this title

Trust in AI isn’t a vibe—it’s something you can intentionally design for (or accidentally break). In this episode, Galen sits down with Cal Al-Dhubaib to unpack “trust engineering”: a shared toolkit that helps cross-functional teams (engineering, UX, governance, risk, and business) discuss the same trust risks in the same language. They get into why “boring AI is safe AI,” how guardrails and human handoffs actually preserve trust, and why the biggest failures often aren’t the model—they’re the systems (and incentives) wrapped around it.

You’ll also hear real-world examples of trust going sideways—from biased outcomes, to hallucinated “gaslighting,” to AI-assisted deliverables with accuracy problems—and what project leaders can do to prevent finger-pointing when it happens.

Resources from this episode:

  • Join the Digital Project Manager Community
  • Subscribe to the newsletter to get our latest articles and podcasts
  • Connect with Cal on LinkedIn
  • Check out Further
  • AI Incident Database