
Chaos Agents

By: Sara Chipps and Becca Lewy

About this title

Technologists Sara Chipps and Becca Lewy dive into the chaos of artificial intelligence—unpacking the tech, trends, and ideas reshaping how we work, create, and think. Smart, funny, and just a little bit existential.

Copyright 2026 Sara Chipps and Becca Lewy

Categories: Politics & Government, Personal Success, Personal Development
  • Can You Build Anything in a Week? GPUs, Code Gen, and the End of Engineers - With Harper Reed
    Jan 20 2026

    Becca just got back from NeurIPS, the academic AI conference that feels like an adult science fair. We dig into research on training large AI models across cheap GPUs and slow internet connections—and why that could dramatically lower the barrier to building AI.

    Then we’re joined by Harper Reed, CEO of 2389, for a wide-ranging conversation about code generation, coaching-based engineering teams, and why “production code” might have always been a myth. We talk vibe coding (begrudgingly), the shifting role of software engineers, taste vs. technical skill, and what happens when you can build almost anything in a week.

    Smart, funny, and a little unsettling—Chaos Agents at full volume.

    🎓 Academic AI & research culture
    1. NeurIPS (Conference on Neural Information Processing Systems)
    2. NeurIPS 2024 Accepted Papers

    🧠 Distributed training, GPUs & efficiency
    1. NVIDIA H100 Tensor Core GPU (referenced GPU class)
    2. Pluralis Research (distributed training across low-bandwidth networks)

    ⚙️ Core AI concepts mentioned
    1. GPU vs CPU explained (parallel vs sequential compute)
    2. Data Parallelism vs Model Parallelism (training overview)

    🧑‍💻 Code generation & developer tools
    1. Claude Code (Anthropic code-gen tooling)
    2. Cursor (AI-first code editor, discussed implicitly)

    🛠️ Agent workflows & infrastructure
    1. Matrix (open-source, decentralized chat protocol)
    2. Model Context Protocol (MCP) overview

    🧩 Utilities & recommendations
    1. Jesse Vincent’s Superpowers (Claude workflow enhancer)
    2. Fly.io (deployment platform referenced)
    3. Netlify (deployment & hosting)

    🧪 Related Chaos Agents context
    59 min
  • The Magic Cycle, AI Detectors, and the End of Writing as Proof - With Clay Shirky
    Jan 6 2026

    Sara’s back from visiting her New Jersey Christian high school—where she gets hit with a genuinely spicy question: How do you reconcile AGI with faith? From there, we go straight into the bigger theme of the episode: education is getting stress-tested by AI in real time.

    Becca breaks down Google’s “magic cycle” — the uncomfortable lesson of inventing transformative research (Transformers, BERT) and then watching someone else ship it to the world. Sara shares what she’s learning about research workflows moving beyond “just chat,” including multi-agent setups for planning, searching, reading, and synthesis.

    Then we’re joined by Clay Shirky, Vice Provost for AI & Technology in Education at NYU, to talk about what’s actually happening on campuses: why students integrated AI “sideways” before institutions could respond, why AI detectors are a trap (and who they harm most), and why the real shift isn’t assignments — it’s assessment.

    We dig into what comes next: oral exams, in-class scaffolding, and designing learning around productive struggle—not just output. And we end in a place that’s both funny and unsettling: the rise of AI “personalities,” RLHF as “reinforcement learning for human flattery,” and what it means when a machine is always on your side.

    Because whether we like it or not: a well-written paragraph is no longer proof of human thought.

    🧠 Foundational AI papers & breakthroughs
    1. Attention Is All You Need (Transformers, 2017)
    2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

    🧪 Google’s “Magic Cycle” framing
    1. Accelerating the magic cycle of research breakthroughs and real-world applications (Google Research)
    2. How AI Drives Scientific Research with Real-World Benefit (Google Blog)

    🚨 Shipping pressure: Bard + “code red” era
    1. Reuters: Alphabet shares dive after Bard flubs info, ~$100B market cap hit (https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/)
    2. Google Blog: Bard updates from Google I/O 2023 (https://blog.google/technology/ai/google-bard-updates-io-2023/)
    54 min
  • Speed vs Quality, Hallucinations, and the AI Learning Rabbit Hole - With Nir Zicherman
    Dec 23 2025

    Sara breaks down perceptrons (1957!) as the tiny “matrix of lights” idea that eventually became neural networks—then we jump straight into modern AI chaos.
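    The perceptron Sara describes can be sketched in a few lines: a single unit that weighs its inputs and fires when the sum clears a threshold. This is an illustrative sketch, not code from the episode; the AND weights and bias are hand-picked for the example.

    ```python
    # Minimal perceptron (Rosenblatt, 1957): the "matrix of lights" idea
    # that eventually grew into neural networks.

    def perceptron(inputs, weights, bias):
        """Fire (return 1) if the weighted sum plus bias is positive, else 0."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total + bias > 0 else 0

    # Hand-picked weights make this unit compute logical AND on two binary inputs.
    and_weights = [1.0, 1.0]
    and_bias = -1.5

    print(perceptron([1, 1], and_weights, and_bias))  # 1
    print(perceptron([1, 0], and_weights, and_bias))  # 0
    ```

    Stacking many such units in layers, and learning the weights instead of hand-picking them, is the step from the 1957 perceptron to a modern neural network.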

    Oboe’s Nir Zicherman walks us through the messy reality of building consumer-grade AI for education: every feature is a tradeoff between loading fast and being good, and “just use a better model” doesn’t magically solve it. We talk guardrails, web search, multi-model pipelines, and why learning tools should feel lightweight—more like curiosity than homework. Also: Becca’s “how does a computer work?” obsession and a book recommendation that might change your life.

    🧠 AI Concepts & Foundations
    1. Perceptron (Wikipedia)
    2. Neural Networks Explained
    3. Scaling Laws for Neural Language Models
    4. FLOPS (Floating Point Operations Per Second)

    🎓 Learning, Education & AI
    1. Oboe
    2. AI as a Personal Tutor (Overview)
    3. Why Tutors Are So Effective

    🏗️ Building AI Products
    1. Speed vs Quality Tradeoffs in LLM Apps
    2. LLM Orchestration Patterns
    3. Retrieval-Augmented Generation (RAG)
    4. LLM Hallucinations: Causes & Mitigation

    📚 Books Mentioned
    1. Code: The Hidden Language of Computer Hardware and Software
    2. Perceptrons

    🧪 History of AI
    49 min
No reviews yet