Future of Life Institute Podcast

By: Future of Life Institute

About this title

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles. All rights reserved.
  • The Case for a Global Ban on Superintelligence (with Andrea Miotti)
    Feb 20 2026

    Andrea Miotti is the founder and CEO of Control AI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.

    LINKS:

    • Control AI

    • Control AI global action page

    • ControlAI's lawmaker contact tools

    • Open roles at ControlAI

    • ControlAI's theory of change

    CHAPTERS:

    (00:00) Episode Preview

    (00:52) Extinction risk and lobbying

    (08:59) Progress toward superintelligence

    (16:26) Building political awareness

    (24:27) Global regulation strategy

    (33:06) Race dynamics and public

    (42:36) Vision and key safeguards

    (51:18) Recursive self-improvement controls

    (58:13) Power concentration and action

    PRODUCED BY:

    https://aipodcast.ing

    SOCIAL LINKS:

    Website: https://podcast.futureoflife.org

    Twitter (FLI): https://x.com/FLI_org

    Twitter (Gus): https://x.com/gusdocker

    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP


    1 hr 7 min
  • Can AI Do Our Alignment Homework? (with Ryan Kidd)
    Feb 6 2026

    Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: https://matsprogram.org

    CHAPTERS:

    (00:00) Episode Preview

    (00:20) Introductions and AGI timelines

    (10:13) Deception, values, and control

    (23:20) Dual use and alignment

    (32:22) Frontier labs and governance

    (44:12) MATS tracks and mentors

    (58:14) Talent archetypes and demand

    (01:12:30) Applicant profiles and selection

    (01:20:04) Applications, breadth, and growth

    (01:29:44) Careers, resources, and ideas

    (01:45:49) Final thanks and wrap



    1 hr 47 min
  • How to Rebuild the Social Contract After AGI (with Deric Cheng)
    Jan 27 2026

    Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work.

    LINKS:

    • Deric Cheng personal website
    • AGI Social Contract project site
    • Guiding society through the AI economic transition

    CHAPTERS:

    (00:00) Episode Preview

    (01:01) Introducing Deric and AGI

    (04:09) Automation, power, and inequality

    (08:55) Inequality, unrest, and time

    (13:46) Bridging futurists and economists

    (20:35) Future of work scenarios

    (27:22) Jobs resisting AI automation

    (36:57) Luxury, land, and inequality

    (43:32) Designing and testing solutions

    (51:23) Taxation in an AI economy

    (59:10) Envisioning a post-AGI society


    1 hr 5 min
No reviews yet