S2E21 - Building Safer Agentic AI: AI Safety, Alignment & Governance with Nell Watson
About this title
Agentic AI is evolving rapidly — moving from copilots and automation tools to autonomous systems that can plan, decide, and act over time. As agentic systems become more capable, questions around AI safety, alignment, and governance become critical for founders, investors, and enterprise leaders.
In this special episode of Season 2, Rob Price speaks with Nell Watson — AI ethics researcher, author, and Chair of the Safer Agentic AI Safety Experts Focus Group at IEEE — about what building safer agentic AI means in practice.
The discussion explores:
How agentic AI systems are being developed and deployed today
Where organisations underestimate AI safety and alignment risks
What responsible AI governance looks like for agentic systems
How principles such as alignment, epistemic hygiene, and bounded goals translate into real products
Why leaders should engage with AI safety before regulation forces the issue
As the future of AI shifts toward increasingly autonomous and agentic architectures, what does “safe enough” really mean — and who decides?
If you’re building, funding, or adopting agentic AI, this conversation will help you think more clearly about responsible AI development and long-term trust.
Subscribe to Futurise for conversations on agentic AI, AI leadership, responsible AI, and the future of artificial intelligence.
Futurise explores Agentic AI, AI leadership, Responsible AI development, AI governance, and the future of artificial intelligence for founders, investors, and enterprise leaders.