Episodes

  • S3E1 - Over the Garden Fence: What Gardening Teaches Us About AI, Expertise and Knowledge Transfer – with Steve Bustin
    Feb 18 2026

    In this episode of Futurise, Rob Price is joined by Steve Bustin, Chair of the Board of Trustees for the Hardy Plant Society, for a conversation that begins in the garden — and ends in the boardroom.

    “Over the garden fence” is how knowledge used to be shared: informally, experientially, and across generations. Gardening expertise — like much business expertise — is rarely written as technical documentation. It is contextual, tacit, and learned through experience.

    As organisations adopt AI and agentic systems, a similar challenge emerges: how do we translate deep domain knowledge into language that AI systems can understand — without losing meaning in the process?

    This episode explores:

    • How expert knowledge is traditionally passed down between generations

    • Why tacit expertise is difficult to encode into AI systems

    • The language gap between business specialists and AI technologists

    • What agentic AI might mean for capturing and applying domain expertise

    • Why successful AI adoption depends as much on terminology as technology

    By deliberately using language that would resonate with gardeners rather than AI engineers, this conversation highlights a wider leadership lesson: AI systems only become valuable when they can engage meaningfully with real-world expertise.

    If you’re a founder, investor, or executive navigating AI adoption, this episode offers a fresh perspective on knowledge transfer, AI leadership, and the future of artificial intelligence in practice.

    Comments are open — where do you see gaps between business expertise and AI terminology in your organisation?

    Subscribe to Futurise to hear first about conversations on Agentic AI, AI leadership, responsible AI development, AI governance, and the future of artificial intelligence.


This episode is dedicated to Jeune Price (1941–2023), passionate gardener and long-time member of the Hardy Plant Society.

    30 min
  • S2E21 - Building Safer Agentic AI: AI Safety, Alignment & Governance with Nell Watson
    Jan 14 2026

    Agentic AI is evolving rapidly — moving from copilots and automation tools to autonomous systems that can plan, decide, and act over time. As agentic systems become more capable, questions around AI safety, alignment, and governance become critical for founders, investors, and enterprise leaders.

    In this special episode of Season 2, Rob Price speaks with Nell Watson — AI ethics researcher, author, and Chair of the Safer Agentic AI Safety Experts Focus Group at IEEE — about what building safer agentic AI means in practice.

    The discussion explores:

    • How agentic AI systems are being developed and deployed today

    • Where organisations underestimate AI safety and alignment risks

    • What responsible AI governance looks like for agentic systems

    • How principles such as alignment, epistemic hygiene, and bounded goals translate into real products

    • Why leaders should engage with AI safety before regulation forces the issue

    As the future of AI shifts toward increasingly autonomous and agentic architectures, what does “safe enough” really mean — and who decides?

    If you’re building, funding, or adopting agentic AI, this conversation will help you think more clearly about responsible AI development and long-term trust.

    Subscribe to Futurise for conversations on agentic AI, AI leadership, responsible AI, and the future of artificial intelligence.


    Futurise explores Agentic AI, AI leadership, Responsible AI development, AI governance, and the future of artificial intelligence for founders, investors, and enterprise leaders.



    31 min
  • S2E20 - AI Leadership in a Fast-Moving Market: Acting Safely Without Waiting for Certainty – with Michael Wade
    Dec 16 2025

    In this episode of Futurise, Rob Price speaks with Professor Michael Wade about how leaders can take meaningful, responsible action on AI without waiting for certainty — and without exposing their organisations to unnecessary risk.

    As AI technology evolves rapidly — from generative AI to agentic systems — many organisations struggle with how to develop an effective AI strategy while markets are still shifting. The conversation explores what responsible AI leadership looks like in a fast-moving environment, and why deliberate action is often safer than paralysis.

    In this episode, we discuss:

    • Why traditional AI strategy frameworks break down in rapidly evolving markets

    • How leaders can act on AI without locking themselves into the wrong decisions

    • The difference between safe progress and reckless acceleration

    • What “beyond agentic AI” might mean for enterprise organisations

    • How to assess AI market risk as conditions change

    Professor Wade advises senior leaders on digital transformation and AI as Professor of Strategy and Digital at IMD Business School, helping organisations balance speed, responsibility, and long-term value creation.

    If you’re a founder, investor, or executive navigating AI adoption, this episode offers a practical lens on AI governance, AI risk management, and leadership in the future of artificial intelligence.

    Subscribe to Futurise for conversations on Agentic AI, AI leadership, responsible AI development, and the future of AI.


    Futurise explores Agentic AI, AI leadership, Responsible AI development, AI governance, and the future of artificial intelligence for founders, investors, and enterprise leaders.

    33 min
  • S2E19 - Sovereign AI and Digital Sovereignty in the UK: Michael Herron, CEO of Atos UK&I
    Dec 10 2025

    In Episode 19 of Season 2, Rob Price speaks with Michael Herron, CEO of Atos UK & Ireland, about sovereign AI, digital sovereignty, and what these shifts mean for the UK’s AI infrastructure and enterprise strategy.

    As governments and large organisations rethink control over data, compute, and AI systems, sovereign AI is becoming a strategic priority. Michael discusses Atos’ recent investments in a Sovereign Orchestration Hub, a Digital Agentic Centre, and a Sovereign Digital Enablement Centre — initiatives aligned with UK Government commitments around AI Growth Zones and national AI capability.

    The conversation explores:

    • What sovereign AI means in practice for enterprises and public sector organisations

    • Why digital sovereignty is rising on the executive agenda

    • How agentic AI systems fit into sovereign AI infrastructure

    • The strategic implications of AI Growth Zones in the UK

    • How capability development and future careers must evolve in an agentic AI economy

    As AI adoption moves from experimentation to infrastructure-level deployment, questions of resilience, governance, and sovereignty are becoming as important as speed and innovation.

    If you’re a leader navigating enterprise AI adoption, AI governance, or national AI strategy, this episode offers insight into how sovereign AI may reshape the future of artificial intelligence in the UK and beyond.

    Relevant links:
    https://atos.net/en-gb/united-kingdom
    https://atos.net/en-gb/lp/uki-digital-sovereignty
    https://atos.net/en-gb/2025/press-releases-en-gb_2025_10_20/atos-to-launch-new-sovereign-and-sovereign-ai-centres-across-the-uk

    Comments are open — how is your organisation approaching sovereign AI and digital sovereignty?

    Subscribe to Futurise to hear first about conversations on Agentic AI, AI leadership, responsible AI development, AI governance, and the future of artificial intelligence.


    31 min
  • Do Teams Still Work the Same in an Agentic AI World?
    Nov 26 2025

    #AgenticAI isn’t just changing tools — it’s changing how work gets done.


    In this episode of Futurise, Rob Price explores whether existing team design models still hold in an agentic world, and what leaders need to change as AI agents become part of everyday delivery.


    Featuring Matthew Skelton, co-author of Team Topologies and CEO/CTO of Conflux.


    • Do Team Topologies principles still apply with AI agents in the workflow?

    • What breaks first when teams stay “human-only” in design

    • How to think about accountability, flow, and boundaries in agentic teams

    • What organisations should start changing now, not later

    Matthew Skelton is co-author of the award-winning Team Topologies, founder and CEO/CTO of Conflux, and a leading voice in modern organisational design.

    • Matthew Skelton: matthewskelton.com

    • Conflux: confluxhq.com/outcomes

    • Team Topologies: teamtopologies.com/scale

    Comments are open — how do you think teams should evolve in an agentic world?

    If AI agents are doing more of the work, what should teams actually be responsible for? Interested to hear how others are thinking about this.

    35 min
  • AI Is Silencing Languages — Here’s How That Changes Everything
    Nov 12 2025

    AI systems are being built for a narrow slice of humanity.

    In this episode of Futurise, Rob Price explores why endangered language models matter — not just for inclusion, but for the future of agentic AI itself.

    Featuring Anna Mae Yu Lamentillo, founder and Chief Futures Officer of NightOwl AI.

    In this conversation, we discuss:

    • Why most AI models exclude entire cultures and languages

    • What endangered language models really are

    • The implications for accessibility, bias, and agentic systems

    • Lessons for organisations building bespoke and domain-specific AI models

    Anna Mae Yu Lamentillo is the founder of NightOwl AI, a mission-driven company using AI to preserve endangered languages and combat digital exclusion. Her work focuses on ensuring AI reflects the full spectrum of human culture—not just the dominant few.

    Anna Mae Lamentillo: https://www.annamaeyulamentillo.com
    NightOwl AI: https://www.thenightowl.ai

    Comments are open — should AI systems be required to support minority and endangered languages?

    If AI agents can’t understand large parts of the world’s population, are they really fit for purpose? Curious how others are thinking about inclusion in agentic systems.

    Please subscribe to Futurise to hear first about future episodes.


    24 min
  • Husayn Kassai, CEO Quench.ai - From Identity to Agentic - S2E16
    Nov 5 2025

    In the 16th episode of Season 2 on #AgenticAI, Rob is joined by Husayn Kassai, based in London, UK. Husayn is the founder and CEO of Quench.ai - the AI layer unifying how mid-market companies discover, deploy, and benchmark AI that works - all in one place. He is the co-founder and former CEO of Onfido - the largest UK AI exit. A WEF Tech Pioneer, Forbes contributor, and “30 Under 30” honoree, he holds a BA in Economics and Management from Keble College, Oxford.

    On the podcast, Rob and Husayn talk about their experiences at the heart of Agentic AI, its adoption, and some of the challenges they are seeing. Husayn also covers lessons learned from Onfido and his current work with the London AI Hub, at the heart of the startup ecosystem.


    Comments are open for questions or observations.

    Please subscribe to the podcast to hear first about future episodes.

    ----------------------------------------------------------------

    #Futurise is a podcast produced by Futuria.ai, an Agentic AI platform that helps organisations adopt Agentic AI safely and re-imagine their operating model with multi-agent, high-performing teams. You can message us at Futurise@futuria.ai.

    Rob is also co-founder of Digital Responsibility and hosted the "A New Digital Responsibility" podcast.

    31 min
  • S2E15 - The Future of Consulting in an Agentic Age
    Oct 15 2025

    Welcome to the fifteenth episode of Season 2 of Futurise.

    Season 2 focuses on Agentic AI, speaking to founders, funders, and early adopters of the technology to share their insights and lessons learned.

    This special episode is co-hosted on Futurise and The Consulting Growth Podcast, and released simultaneously on both.

    In this episode, Rob is joined by Professor Joe O'Mahoney, based at Cardiff University in Wales. Prof. O’Mahoney helps owners of boutique consultancies 4x their equity value over a period of 2–3 years. He is a Professor of Consulting at Cardiff University, co-founder of Equity Sherpa, and an award-winning author and educator.

    On the podcast, Rob and Joe share their respective views on the impact of Agentic AI and Large Language Models on the world of Consulting and Professional Services. Rob was previously Managing Partner in a consulting business and represented the MCA in its Year of Digital as one of its Digital gurus. Joe is a recognised leading authority on the advisory industry and a strategic advisor to boutique consultancies.


    Links:

    Website: www.equitysherpa.com

    LinkedIn: https://www.linkedin.com/in/joeomahoney/

    Podcast: https://podcasts.apple.com/gb/podcast/consulting-growth/id1682304671


    ----------------------------------------------------------------

    #Futurise is a podcast produced by Futuria.ai, a product business that helps organisations adopt Agentic AI safely and re-imagine their operating model with multi-agent, high-performing teams. You can message us at Futurise@futuria.ai.

    Rob is also co-founder of Digital Responsibility and hosted the "A New Digital Responsibility" podcast.

    1 hr 9 min