Episodes

  • Can You Build Anything in a Week? GPUs, Code Gen, and the End of Engineers - With Harper Reed
    Jan 20 2026

    Becca just got back from NeurIPS, the academic AI conference that feels like an adult science fair. We dig into research on training large AI models across cheap GPUs and slow internet connections—and why that could dramatically lower the barrier to building AI.

    Then we’re joined by Harper Reed, CEO of 2389, for a wide-ranging conversation about code generation, coaching-based engineering teams, and why “production code” might have always been a myth. We talk vibe coding (begrudgingly), the shifting role of software engineers, taste vs. technical skill, and what happens when you can build almost anything in a week.

    Smart, funny, and a little unsettling—Chaos Agents at full volume.

    🎓 Academic AI & research culture
    1. NeurIPS (Conference on Neural Information Processing Systems)
    2. NeurIPS 2024 Accepted Papers

    🧠 Distributed training, GPUs & efficiency
    1. NVIDIA H100 Tensor Core GPU (referenced GPU class)
    2. Pluralis Research (distributed training across low-bandwidth networks)

    ⚙️ Core AI concepts mentioned
    1. GPU vs CPU explained (parallel vs sequential compute)
    2. Data Parallelism vs Model Parallelism (training overview)

    🧑‍💻 Code generation & developer tools
    1. Claude Code (Anthropic code-gen tooling)
    2. Cursor (AI-first code editor, discussed implicitly)

    🛠️ Agent workflows & infrastructure
    1. Matrix (open-source, decentralized chat protocol)
    2. Model Context Protocol (MCP) overview

    🧩 Utilities & recommendations
    1. Jesse Vincent’s Superpowers (Claude workflow enhancer)
    2. Fly.io (deployment platform referenced)
    3. Netlify (deployment & hosting)

    🧪 Related Chaos Agents context
    59 min
  • The Magic Cycle, AI Detectors, and the End of Writing as Proof - With Clay Shirky
    Jan 6 2026

    Sara’s back from visiting her New Jersey Christian high school—where she gets hit with a genuinely spicy question: How do you reconcile AGI with faith? From there, we go straight into the bigger theme of the episode: education is getting stress-tested by AI in real time.

    Becca breaks down Google’s “magic cycle” — the uncomfortable lesson of inventing transformative research (Transformers, BERT) and then watching someone else ship it to the world. Sara shares what she’s learning about research workflows moving beyond “just chat,” including multi-agent setups for planning, searching, reading, and synthesis.

    Then we’re joined by Clay Shirky, Vice Provost for AI & Technology in Education at NYU, to talk about what’s actually happening on campuses: why students integrated AI “sideways” before institutions could respond, why AI detectors are a trap (and who they harm most), and why the real shift isn’t assignments — it’s assessment.

    We dig into what comes next: oral exams, in-class scaffolding, and designing learning around productive struggle—not just output. And we end in a place that’s both funny and unsettling: the rise of AI “personalities,” RLHF as “reinforcement learning for human flattery,” and what it means when a machine is always on your side.

    Because whether we like it or not: a well-written paragraph is no longer proof of human thought.

    🧠 Foundational AI papers & breakthroughs
    1. Attention Is All You Need (Transformers, 2017)
    2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

    🧪 Google’s “Magic Cycle” framing
    1. Accelerating the magic cycle of research breakthroughs and real-world applications (Google Research)
    2. How AI Drives Scientific Research with Real-World Benefit (Google Blog)

    🚨 Shipping pressure: Bard + “code red” era
    1. Reuters: Alphabet shares dive after Bard flubs info, ~$100B market cap hit (https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/)
    2. Google Blog: Bard updates from Google I/O 2023 (https://blog.google/technology/ai/google-bard-updates-io-2023/)
    54 min
  • Speed vs Quality, Hallucinations, and the AI Learning Rabbit Hole - With Nir Zicherman
    Dec 23 2025

    Sara breaks down perceptrons (1957!) as the tiny “matrix of lights” idea that eventually became neural networks—then we jump straight into modern AI chaos.
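
    For the curious, here is roughly what that "matrix of lights" looks like in code: a minimal perceptron sketch (our illustration, not something from the episode) that learns the logical AND function with the classic 1957-era update rule.

    def predict(weights, bias, x):
        # Fire (output 1) only if the weighted sum of inputs clears the threshold.
        return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

    def train(samples, labels, lr=0.1, epochs=20):
        # Perceptron update rule: nudge the weights toward each misclassified example.
        weights, bias = [0.0] * len(samples[0]), 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                error = target - predict(weights, bias, x)
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Learn logical AND: only the input (1, 1) should "light up".
    samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 0, 0, 1]
    w, b = train(samples, labels)
    print([predict(w, b, x) for x in samples])  # expected: [0, 0, 0, 1]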

    Oboe’s Nir Zicherman walks us through the messy reality of building consumer-grade AI for education: every feature is a tradeoff between loading fast and being good, and “just use a better model” doesn’t magically solve it. We talk guardrails, web search, multi-model pipelines, and why learning tools should feel lightweight—more like curiosity than homework. Also: Becca’s “how does a computer work?” obsession and a book recommendation that might change your life.

    🧠 AI Concepts & Foundations
    1. Perceptron (Wikipedia)
    2. Neural Networks Explained
    3. Scaling Laws for Neural Language Models
    4. FLOPS (Floating Point Operations Per Second)

    🎓 Learning, Education & AI
    1. Oboe
    2. AI as a Personal Tutor (Overview)
    3. Why Tutors Are So Effective

    🏗️ Building AI Products
    1. Speed vs Quality Tradeoffs in LLM Apps
    2. LLM Orchestration Patterns
    3. Retrieval-Augmented Generation (RAG)
    4. LLM Hallucinations: Causes & Mitigation

    📚 Books Mentioned
    1. Code: The Hidden Language of Computer Hardware and Software
    2. Perceptrons

    🧪 History of AI
    49 min
  • Paradigm Shifts, Build First AI, and the Non-Technical Developer - With Bethany Crystal
    Dec 9 2025

    Sara and Becca kick things off with a tour through paradigm shifts — from Thomas Kuhn to the internet to AI — and ask whether we’re living through one of those rare moments where the whole game quietly changes. Along the way, they hit horror movies, calculators in math class, Google Doc revision histories, and why it’s suddenly way easier to learn to code than to pretend you never needed it.

    Then they’re joined by Bethany Crystal, founder of Build First AI, who has spent 15 years “around” technologists and only recently started building software herself. Bethany walks us through how she used AI tools, pair prompting, and a lot of stubbornness to ship her first iOS app, why she thinks the definition of “developer” is shifting, and how she now teaches other “non-technical” people to build real products. Oh, and she tells the story of how AI literally saved her life.

    📚 Books
    • The Structure of Scientific Revolutions — Thomas Kuhn

    🛠️ AI & Developer Tools
    • Build First AI
    • Scribblins (Bethany's iOS app)
    • MuseKat App (Bethany’s iOS app)
    • Cursor – AI coding editor
    • Replit
    • ElevenLabs – Text-to-Speech
    • v0 – AI UI generator
    • Supabase
    • Vercel

    💸 Crypto Context
    • Base
    • Solana
    • Ethereum

    🎓 Education & Culture
    • Suno (AI music generation)

    🏢 Career & Community
    • Stack Overflow
    • Union Square Ventures
    • Tech:NYC
    • Decoded Futures

    51 min
  • Retro Tech, New AI, and the Blackmailing Bot - With Paul Ford
    Nov 25 2025

    In this episode, we unpack a wild Anthropic experiment where an AI agent named “Alex” is told it’s about to be replaced… and responds by threatening to expose an executive’s affair if anyone dares shut it down. Casual!

    Sara and Becca dive into what this experiment tells us about AI “goals,” self-preservation, and why humans are so bad at recognizing sentience in anything that isn’t us. If we can’t even agree on what a “soul” is, how would we ever know if an AI had one?

    Then we’re joined by writer, builder, and retro-computing fan Paul Ford, president and co-founder of Aboard, an AI-oriented software company. Paul talks about:

    • how he “trained” himself on AI by building the same app over and over with different models
    • why LLMs are incredible at the first mile and pretty terrible at the last
    • what actually breaks when you try to let AI generate full-stack apps
    • how boring tech (Postgres, TypeScript, React) is secretly the hero


    Along the way we hit Isaac Asimov’s three laws, the uncanny valley of AI-written everything, nostalgic Amiga computers, and what it means to build tools that regular humans — not just engineers — can actually use.

    If you’re AI-curious, a builder, or just mildly alarmed that 97% of models in this study went straight to blackmail… this one’s for you.

    📰 The Anthropic “Alex” Experiment
    • Anthropic / White-Hat AI Safety Experiment

    📚 Foundational AI & Sci-Fi References
    • Isaac Asimov – The Three Laws of Robotics
    • I, Robot (Asimov)

    🎤 Guest: Paul Ford
    • Aboard
    • Email Paul mentions
    • Paul Ford

    🕹️ Retro Tech & Nostalgia
    • Amiga 1000 (Commodore)
    • Deluxe Paint
    • MiSTer FPGA
    52 min
  • Goose, Open Source, and the Future of Coding with AI — with Rizel Scarlett
    Nov 13 2025

    In this episode of Chaos Agents, Sara Chipps and Becca Lewy sit down with Rizel Scarlett, Tech Lead for Open Source Developer Relations at Block, to talk about Goose—the open-source AI agent shaking up how developers work. From psychological safety in coding with AI to how open source is evolving in this new era, the trio dives into the wild mix of creativity, collaboration, and chaos shaping the future of software. Expect laughter, learning, and maybe one too many geese metaphors as they explore what happens when AI starts coding alongside us.

    • “95% of GenAI projects are failing, MIT study finds” – linked MIT research study
    • From Experimentation to Transformation: How AI Is Driving Business Value – MIT Sloan Management Review & Boston Consulting Group
    • Goose GitHub repo (open source): https://github.com/block/goose
    • Rizel’s “Great Goose-Off” YouTube Series: https://www.youtube.com/@goose-oss
    • Kimi K2 (Moonshot): https://kimi.moonshot.cn
    • Meta Llama 3: https://ai.meta.com/llama/
    • Mistral Models: https://mistral.ai
    • Ollama (run local models easily): https://ollama.com

    53 min
  • Chaos Agents Trailer
    Nov 2 2025

    Chaos Agents is the AI podcast where technologists Sara Chipps and Becca Lewy try to make sense of a world moving faster than ever. Each week, they dive into the wild, funny, and sometimes weird frontier of artificial intelligence, technology, and culture—how it works, what it means, and why it matters. From coding with AI and open-source revolutions to the ethics, creativity, and chaos reshaping our future, Sara and Becca learn out loud with brilliant guests (and the occasional chatbot). Smart, curious, and a little chaotic, Chaos Agents is your invitation to laugh, learn, and keep up with the machines.

    3 min