Episodes

  • The Better AI Gets, the Further We Seem from AGI
    Jan 22 2026

    Let's take a grounded look at the state of the AI industry in early 2026 and ask whether we’re actually any closer to artificial general intelligence (AGI) or superintelligence than we were a few years ago. Despite massive valuations for companies like OpenAI and bold promises from AI lab leaders, today’s systems still struggle with hallucinations, common sense, and a genuine understanding of the world.

    So join me as I revisit the core assumptions behind current AI approaches—especially the ideas that the mind is computable and that scaling up large language models is enough to “solve” intelligence—and consider why many researchers are now pivoting from the “age of scaling” to an “age of research” into the nature of intelligence itself.

    What happens to AI company valuations if superintelligence remains out of reach for the foreseeable future?

    And how should we rethink intelligence beyond language, code, and computation?

    BREAKING: Demis Hassabis of Google DeepMind now agrees that LLMs are a dead end on the road to AGI

    Substack version of this episode

    My 2024 deep dive into the impediments to AGI

    What non-ordinary states of consciousness tell us about intelligence

    Ilya Sutskever on Dwarkesh

    The LLM memorization crisis

    On the Tenuous Relationship between Language and Intelligence

    Gary Marcus on The Real Eisman (of Big Short fame)

    Fei-Fei Li’s World Labs: https://www.worldlabs.ai/blog

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    25 min
  • AI Empires and the Soul Sickness of Silicon Valley
    Nov 12 2025

    Why is Mark Zuckerberg so obsessed with the Roman Empire?

    Why is Peter Thiel so obsessed with the Antichrist?

    Why does Sam Altman want $7 trillion for an uncharted course to an unknown destination?

    And how can we build AI models and an AI company based on principles of kinship and reciprocity?

    In my ongoing exploration of what's rotten in Silicon Valley tech innovation and the ideologies driving it, last time I talked about the pandemic of what I called the Apollonian Mind Virus, a tendency throughout the modern world to favor cold, disembodied hyperrationality and cognitive intelligence over all other ways of knowing and being, over intuition, emotional intelligence, embodiment, and wisdom. This Apollonian orientation toward nature has ushered in what Sam Altman has dubbed the Intelligence Age. "Apollonian" describes the worldview and theories of intelligence behind current approaches to AI.

    Today, inspired by Native American thinkers like Jack D. Forbes and Robin Wall Kimmerer, as well as Karen Hao’s work, I explore an adjacent pandemic of the mind or sickness of spirit that is insatiable, extractive, and exploitative: The hungry ghost of the Windigo. I argue that current approaches to AI are largely the result of these two features of the modern, Western ethos—the interwoven helix of the Apollonian Mind and the Windigo soul sickness. I then close this post by beginning to explore whether there is a better way to innovate and evolve, collectively.

    Substack version here.

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    33 min
  • AI, Magical Thinking, and the Machine Metaphor with Matt Segall
    Sep 29 2025

    In this conversation, philosopher Matt Segall and I address the role of philosophy in contemporary culture, emphasizing the need for discernment amidst ideological ferment. We critique transhumanist Silicon Valley ideologies, highlighting their left-hemisphere bias and magical thinking. We also discuss the implications of AI, arguing that while it's a valuable tool, the real danger lies in the extractive, capitalist machine driving its development.

    More about Matt, including his Substack, his YouTube channel, and his conversation with Michael Levin.

    God Human Animal Machine by Meghan O’Gieblyn https://amzn.to/3IzujNq

    Against the Machine, by Paul Kingsnorth https://amzn.to/4ntYp4b

    ‘Other,’ by R. S. Thomas

    The machine appeared

    In the distance, singing to itself

    Of money. Its song was the web

    They were caught in, men and women

    Together. The villages were as flies

    To be sucked empty.

    God secreted

    A tear. Enough, enough,

    He commanded, but the machine

    Looked at him and went on singing.

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    1 hr 7 min
  • Apollonian Intelligence: Why Do Tech Bros Have the Worst Ideas?
    Sep 16 2025

    Peter Thiel is speaking this month in San Francisco about the antichrist. Nicole Shanahan, ex-wife of Google co-founder Sergey Brin and former vice-presidential running mate of Robert F. Kennedy Jr., called the annual Burning Man festival "demonic" last week. These are the most recent developments in the rise of techno-Christianity, a reaction in part to transhumanism and Effective Altruism.

    Peter Thiel has also questioned the viability of democracy, which brings us to the "Dark Enlightenment" of Curtis Yarvin and Nick Land that advocates for anti-democratic CEO-kings. Although it was a fringe idea for many years, it has now gained traction with Silicon Valley billionaires like Balaji Srinivasan, who advocate for the "Exit," seasteading, and "network states."

    It feels like we live in the strangest times, shaped by powerful people with the worst ideas. Because technology reflects the consciousness of the people creating it, I am deeply concerned about the people creating our technologies, especially artificial intelligence.

    In my new podcast series, I begin trying to understand the philosophical and psychological underpinnings of the strange ideologies coming out of Silicon Valley, ideologies very much shaping technology innovation today. Inspired by Nietzsche and Iain McGilchrist, I'm calling the imbalanced, left-hemisphere-dominant thinking behind all of this “Apollonian Intelligence" or the "Apollonian Mind."

    Peter Thiel on Ross Douthat’s New York Times podcast raving about the antichrist

    Thiel’s four-part lecture series about the antichrist https://luma.com/antichrist

    Nicole Shanahan’s rant about demonic Burning Man

    New Yorker coverage of Curtis Yarvin’s “Dark Enlightenment”

    Iain McGilchrist’s wonderful book The Matter with Things

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    23 min
  • Will Artificial Intelligence Make Us More or Less Wise?
    Feb 27 2025

    In this episode I explore the complex relationship between artificial intelligence (AI) and wisdom, particularly focusing on discernment. I argue that while AI can hinder discernment by perpetuating biases and misinformation, it also holds some potential for cultivating it through tools that aid meditation and self-reflection. I also emphasize the importance of truth and self-awareness in this "age of AI." Ultimately, I argue that discernment is a uniquely human quality that requires ongoing effort and vigilance, whether aided by AI or not.

    This one was a long time coming, so I hope you get as much out of listening as I did writing it!

    More on OpenAI's recent decision to remove some of the content guardrails on ChatGPT.

    This episode was adapted from a guest post on Michael Spencer's AI Supremacy newsletter.

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    29 min
  • Zombies, Transhumanists, and the Worldview Crisis
    Oct 5 2024

    Continuing our deep dive into what it means to be human in the age of AI, and inspired by the physicalist yet transcendent worldviews of the transhumanists, in this episode I start to explore the concept of worldviews. I focus first on physicalism / scientific materialism (the idea that matter/energy is fundamental), and on how that particular metaphysics rests entirely on a series of assumptions that were never empirically proven. We contextualize all of it via the postmodern zombie mythology.

    I focus on the unexplained anomalies from quantum physics and how they undermine the physicalist worldview. I then explore the reasons that physicalism is so intractable as a worldview.

    All of this is just to set the table for an exploration of idealism (the idea that mind or consciousness is fundamental) in my next episode. Then we can finally turn our attention to transhumanism and human faculties.

    You can find a YouTube version of this episode here.

    Sam Altman’s “Intelligence Age” post

    Quantum measurement explained:

    https://www.youtube.com/watch?v=IHDMJqJHCQg

    https://www.youtube.com/watch?v=-kxmR82QMN8

    Quantum entanglement explained:

    https://www.youtube.com/watch?v=rqmIVeheTVU

    https://www.youtube.com/watch?v=ZuvK-od647c

    John Vervaeke on being rational and spiritual

    John Vervaeke on Zombies

    The famous Einstein - Bergson debate of 1922

    My video on the hurdles to AGI

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    47 min
  • What Psychedelic States of Consciousness Tell Us about AI
    Aug 18 2024

    The irony of applying the word “hallucination” to LLM mistakes is that LLMs are completely incapable of having psychedelic experiences. Why does that matter?

    In this mind-bending exploration, we dive into the fascinating intersection of artificial intelligence and expanded states of consciousness. We examine how imagination, creativity, and innovation seem to arise more frequently in altered or "holotropic" states of consciousness, such as those reached through meditation, breathwork, dreams, dancing, psychedelics, or other experiences.

    I argue that current approaches to AI may never be truly inventive or creative, as they lack the ability to model the abductive reasoning and intuitive leaps that often occur in these holotropic states. To support this thesis, we explore historical examples of scientific and philosophical breakthroughs that emerged from dreams, visions, and other non-ordinary states of consciousness.

    In short, I am challenging the narrative that AI will soon surpass human intelligence, suggesting there may be profound mysteries of the human mind that AI cannot replicate, and offering a more sober and realistic view of the limitations facing AI research in attempting to model the wonders of human cognition and consciousness.

    This video is part of a series about the myths, hype, and ideologies surrounding AI.

    YouTube version
    Support Chad

    Stan Grof's collected works
    My previous video on the impediments to AGI
    Paper using LLMs to model abductive reasoning
    Willis Harman's Higher Creativity
    Effects of conscious connected breathing on cortical brain activity, mood and state of consciousness in healthy adults

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    32 min
  • Impediments to Creating Artificial General Intelligence (AGI)
    Jul 15 2024

    Artificial general intelligence, or superintelligence, is not right around the corner like AI companies want you to believe, and that's because intelligence is really hard.

    Major AI companies like OpenAI and Anthropic (as well as Ilya Sutskever’s new company) have the explicit goal of creating artificial general intelligence (AGI), and claim to be very close to doing so using technology that doesn’t seem capable of getting us there.

    So let's talk about intelligence, both human and artificial.

    What is artificial intelligence? What is intelligence? Are we going to be replaced or killed by superintelligent robots? Are we on the precipice of a techno-utopia, or some kind of singularity?

    These are the questions I explore, to try to offer a layman’s overview of why we’re far away from AGI and superintelligence. Among other things, I highlight the limitations of current AI systems, including their lack of trustworthiness, reliance on bottom-up machine learning, and inability to provide true reasoning and common sense. I also introduce abductive inference, a rarely discussed type of reasoning.

    Why do smart people want us to think that they’ve solved intelligence when they are smart enough to know they haven’t? Keep that question in mind as we go.

    YouTube version, originally recorded July 1, 2024.

    Support Chad

    James Bridle’s Ways of Being (book)
    Ezra Klein’s comments on AI & capitalism
    How LLMs work

    Gary Marcus on the limits of AGI
    More on induction and abduction
    NYTimes deep dive into AI data harvesting
    Sam Altman acknowledging that they’ve reached the limits of LLMs
    Mira Murati saying the same thing last month
    Google’s embarrassing AI search experience

    AI Explained’s perspective on AGI
    LLMs Can’t Plan paper
    Paper on using LLMs to tackle abduction
    ChatGPT is Bullshit paper
    Philosophize This on nostalgia and pastiche

    Please leave a comment with your thoughts, and anything I might have missed or gotten wrong. More about me over here

    Support the show

    Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.

    Finally, you can support the podcast here.

    52 min