Episodes

  • Why “Intelligence Density per GB” is a Circular Metric (Elon Musk’s AI Adequacy Test)
    Jan 19 2026

    Elon Musk proposed “intelligence density per gigabyte” as a metric for AI adequacy — but what is intelligence, really? In this episode of This Is AGI, we dissect why IQ-style benchmarks and test scores create an operational (and often circular) definition of intelligence, using a physics analogy: measuring “weight” only by spring stretch without Newtonian gravity.


    We also connect the argument to modern frontier model benchmarking (ARC-AGI, FrontierMath), and explain why today’s scores still don’t map cleanly to a single, theoretically grounded concept of intelligence — and what it would take to fix that.
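    To make the spring analogy concrete (a sketch of the standard physics, not a transcript of the episode): operationally, one can define weight as nothing more than the spring reading,

        \[ W_{\mathrm{op}} := kx \qquad \text{(Hooke's law: spring constant } k \text{, stretch } x\text{)} \]

    while it is Newtonian theory, W = mg, that ties the reading to an underlying quantity: at equilibrium kx = mg, so m = kx/g. Without the theory, "weight" is defined by the very instrument that measures it, which is the circularity the episode sees in benchmark-defined intelligence.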


    Topics: AI benchmarks, frontier models, intelligence measurement, ARC-AGI, FrontierMath, evaluation theory, operational definitions, AGI.


    #ArtificialIntelligence #AGI #FrontierModels #AIBenchmarks #ElonMusk #LLM #ARCAGI

    8 min
  • Elon Musk and AGI Intelligence Density
    Jan 12 2026

    In this episode, Alex Chadyuk unpacks the meaning of intelligence density as recently discussed by Elon Musk. Although intelligence has no standard metric when it comes to artificial general intelligence (AGI), Alex describes how intelligence density per gigabyte can be compared given a specific model eval.

    Topics covered: AGI theory, intelligence metrics, AI evals, neural network compression, quantization vs distillation, self-driving AI, model efficiency, Elon Musk on AI, operational definitions of intelligence.

    Listen to This Is AGI for clear, technical, no-hype thinking about artificial general intelligence.
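    As a minimal illustration of such a comparison (hypothetical model names and numbers, not figures from the episode), in Python:

        # Fix one eval, then divide each model's score by its size on disk.
        # All names and numbers below are hypothetical.
        def intelligence_density(eval_score: float, size_gb: float) -> float:
            """Eval score per gigabyte of model weights."""
            return eval_score / size_gb

        dense_model = intelligence_density(eval_score=62.0, size_gb=140.0)
        distilled_model = intelligence_density(eval_score=48.0, size_gb=16.0)
        print(f"dense: {dense_model:.2f}/GB, distilled: {distilled_model:.2f}/GB")
        # dense: 0.44/GB, distilled: 3.00/GB -- the smaller model packs
        # more measured capability per gigabyte on this eval.

    Any such number is, of course, only as meaningful as the specific eval behind it.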

    7 min
  • GPT Models are Brilliant Storytellers: Episodic vs Semantic Memory and the Missing Half of AGI
    Jan 5 2026

    Are GPT models really intelligent—or are they just brilliant storytellers? In this episode of This Is AGI, Alex Chadyuk explores the critical difference between episodic and semantic memory, explains why today’s generative AI excels at narrative recall but lacks true world models, and argues why this gap separates modern GPTs from genuine artificial general intelligence (AGI). Expect a clear, thought-provoking analysis connecting human cognition, memory, and AI—challenging common assumptions about understanding, reasoning, and hallucinations in large language models.
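    One way to picture the distinction (an illustrative sketch, not the episode's formalism), in Python:

        from dataclasses import dataclass

        # An episodic memory is a particular, time-stamped event; a
        # semantic fact is a general, time-free assertion abstracted
        # from many such episodes.
        @dataclass
        class EpisodicMemory:
            when: str    # e.g. "2026-01-03T09:15"
            where: str   # e.g. "kitchen"
            what: str    # e.g. "the kettle boiled over"

        @dataclass
        class SemanticFact:
            statement: str   # e.g. "unattended kettles boil over"

        episode = EpisodicMemory("2026-01-03T09:15", "kitchen", "the kettle boiled over")
        fact = SemanticFact("unattended kettles boil over")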

    8 min
  • This Is AGI (S2E5): Cox’s Proof of Plausibility as Probability
    Dec 29 2025

    Over the last three or four centuries, mathematicians proposed several competing versions of a calculus for formalizing reasoning about plausibility, but there was no consensus on the axioms that should sit at the foundation of such a calculus. The impasse was finally broken in 1961 by Richard Cox, a physicist at Johns Hopkins University. Today, we will discuss his brilliant idea.
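    In compressed form (the standard textbook statement, following Jaynes (2003), rather than a transcript of the episode), Cox's idea is that if the plausibility w of a statement is a single real number consistent with Boolean logic, two functional equations pin down the entire calculus:

        \[ w(A \wedge B \mid C) = F\big( w(A \mid C),\, w(B \mid A \wedge C) \big), \qquad w(\neg A \mid C) = S\big( w(A \mid C) \big) \]

    Associativity of conjunction forces F, after a monotone rescaling of w into a function p, to be ordinary multiplication, and the negation constraint then yields the sum rule:

        \[ p(A \wedge B \mid C) = p(A \mid C)\, p(B \mid A \wedge C), \qquad p(A \mid C) + p(\neg A \mid C) = 1 \]

    In other words, any plausibility calculus meeting Cox's desiderata must behave as probability theory.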

    7 min
  • This Is AGI (S2E4): The Art of Conjecture
    Dec 21 2025

    How will AGI measure the plausibility of uncertain statements in real-world scenarios of incomplete information, so that, among other things, we too can know what it is thinking?

    6 min
  • This Is AGI (S2E3): Will AGI Obey Logic?
    Dec 15 2025

    A charge often laid at the door of large language models (LLMs) is that they rely on probabilistic generation, the assumption being that this is somehow a bad thing and that more deterministic behaviour would be a better idea for a future artificial general intelligence (AGI). Before the advent of LLMs, almost all practical computer systems followed deterministic logic encoded in their software, but as I will argue in this episode, the future AGI will be neither deterministic nor deductively logical.
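    For readers new to the term, a toy contrast between the two modes of generation (illustrative only, not code from the episode), in Python:

        import random

        # A toy next-token distribution. Deterministic decoding always
        # takes the argmax; probabilistic generation samples each token
        # in proportion to its probability.
        next_token_probs = {"logic": 0.5, "probability": 0.3, "faith": 0.2}

        deterministic = max(next_token_probs, key=next_token_probs.get)
        probabilistic = random.choices(
            list(next_token_probs), weights=list(next_token_probs.values())
        )[0]

        print(deterministic)   # always "logic"
        print(probabilistic)   # "logic" ~50%, "probability" ~30%, "faith" ~20%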


    This episode is broadly based on a chapter from E. T. Jaynes (2003), Probability Theory: The Logic of Science. Cambridge, UK: Cambridge University Press.

    11 min
  • This Is AGI (S2E2): Will AI Find God?
    Dec 8 2025

    Will artificial intelligence discover God?

    8 min
  • This Is AGI (S2E1): Classes, Attributes & Relationships
    Dec 1 2025

    We take objects, classes, and relationships for granted, but they are just conventions we’ve agreed to use, not truths carved into reality. In this episode, we explore why these human-centric conventions matter, and why guiding AGI to adopt them may be the difference between an intelligible world model and one we can’t understand at all.
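    As a toy illustration of how conventional the carving is (a sketch, not from the episode), the same scene can be modelled under two equally self-consistent schemas in Python:

        # Convention A: two objects with attributes, linked by a relationship.
        scene_a = {
            "objects": {"cup": {"colour": "blue"}, "table": {"material": "oak"}},
            "relationships": [("cup", "on", "table")],
        }

        # Convention B: one fused object, no relationship to speak of.
        scene_b = {
            "objects": {"cup_table_configuration": {"top_colour": "blue"}},
            "relationships": [],
        }

    Neither carving is carved into reality; an AGI that adopts the first is simply far easier for us to interrogate.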

    8 min