S2E21 - Building Safer Agentic AI: AI Safety, Alignment & Governance with Nell Watson


About this title

Agentic AI is evolving rapidly — moving from copilots and automation tools to autonomous systems that can plan, decide, and act over time. As agentic systems become more capable, questions around AI safety, alignment, and governance become critical for founders, investors, and enterprise leaders.

In this special episode of Season 2, Rob Price speaks with Nell Watson — AI ethics researcher, author, and Chair of the Safer Agentic AI Safety Experts Focus Group at IEEE — about what building safer agentic AI means in practice.

The discussion explores:

  • How agentic AI systems are being developed and deployed today

  • Where organisations underestimate AI safety and alignment risks

  • What responsible AI governance looks like for agentic systems

  • How principles such as alignment, epistemic hygiene, and bounded goals translate into real products

  • Why leaders should engage with AI safety before regulation forces the issue

As the future of AI shifts toward increasingly autonomous and agentic architectures, what does “safe enough” really mean — and who decides?

If you’re building, funding, or adopting agentic AI, this conversation will help you think more clearly about responsible AI development and long-term trust.

Subscribe to Futurise for conversations on agentic AI, AI leadership, responsible AI, and the future of artificial intelligence.


Futurise explores Agentic AI, AI leadership, Responsible AI development, AI governance, and the future of artificial intelligence for founders, investors, and enterprise leaders.


