Episodes

  • Re-Architecting Education for a Pro-Human AI Future with Babak Mostaghimi
    Jan 28 2026

    Join us for an inspiring conversation with Babak Mostaghimi, Founding Partner at LearnerStudio and the former Assistant Superintendent who led Gwinnett County Public Schools' pioneering AI readiness initiative. Babak guides us through the necessary shift from using AI merely to make broken systems faster, to using it as a tool that unlocks human potential. He shares LearnerStudio’s "Three Horizons" model of innovation, explaining why schools are stuck in an industrial past and how we can re-architect them for a future focused on life, career, and democracy.

    We dive into practical strategies, like the difference between "snorkeling" and "scuba diving" in AI literacy, and why we must "Marie Kondo" our curriculum to make space for what truly matters: our shared humanity. From 7th graders using AI to tackle food insecurity to teachers building their own feedback bots, this episode offers a compelling vision for how we can ensure technology serves the human experience rather than replacing it.

    Key Discussion Points:

    • Pro-Human AI: Babak’s argument against using AI solely for efficiency ("Nobody likes the current system. Why are we making it faster?") and the case for using tools to unlock creativity and connection.

    • The Three Horizons Model: A framework for understanding education's evolution from the industrial model (Horizon 1) to the efficiency/equity movement (Horizon 2), and finally to a learner-centered ecosystem (Horizon 3).

    • Marie Kondo-ing the Curriculum: The necessity of clearing out antiquated content standards to create the psychological safety and time for relationship-driven, real-world learning.

    • Snorkeling vs. Scuba Diving: Why AI readiness cannot be a niche magnet program but must be a universal skill set that allows every student to navigate ("swim"), explore ("snorkel"), or deeply master ("scuba dive") the technology.

    • Agency in Action: Real-world examples of students and teachers taking control, including a 7th grader using the Inkwire tool to investigate food insecurity and educators designing bespoke feedback agents with PlayLab.

    42 min
  • The Skeptic and The Optimist: Navigating AI in Higher Education
    Jan 8 2026

    Join us for a candid debate between two colleagues who view the future of AI in education through very different lenses. We are joined by Dr. Jason Margolis, an AI skeptic who worries about the atrophy of critical thinking, and Dr. Nicole Schilling, an AI optimist who sees these tools as essential scaffolds for complex problem-solving.

    Together, they model the concept of "Critical Friends," engaging in respectful but challenging dialogue on a polarizing topic. We dive deep into the ethics of the "8-minute dissertation," the tension between efficiency and the learning process, and why we might need flexible guidelines rather than rigid policies in this rapidly changing landscape. Whether you are an educator, a leader, or just someone trying to figure out where the human ends and the machine begins, this conversation offers a roadmap for navigating the grey areas of innovation.

    Key Discussion Points:

    • Skeptic vs. Optimist: Jason’s concern about "outsourcing our brains" versus Nicole’s vision of AI as a partner in social constructionism.

    • The "8-Minute Dissertation": A critical look at what is lost when we prioritize the product (the degree) over the process (the struggle of learning).

    • Ethical AI Use: Examples of high-level use, such as training an AI model to act as a rigorous dissertation committee rather than writing the paper for you.

    • Bias and Power: Addressing the "racist undertones" in algorithms and questioning whose interests are really served by the rapid adoption of AI.

    • Policy vs. Guidelines: Why creating rigid policies for fast-moving tech is often futile, and the argument for developing ethical "guidelines" instead.

    • The Critical Friends Model: How to disagree productively and maintain professional relationships in an era of polarized viewpoints.

    42 min
  • Redesigning the Syllabus for Deeper Learning: AI, Empathy, and Assessment
    Dec 17 2025

    Join us for an insightful conversation with Dr. Dana Riger, UNC's inaugural Faculty Fellow for Generative AI, as she guides us through the rapid paradigm shift brought on by AI in higher education. Dr. Riger shares her journey from a "fear-driven" assessment redesign after discovering ChatGPT to developing a nuanced, values-driven framework for integrating and avoiding AI in the classroom.

    We dive into practical strategies, like redesigning traditional research papers into creative, AI-avoidant multimedia projects, and intentionally integrating AI for skills development, such as using chatbots for practice dialogues on polarizing topics. Dr. Riger also addresses the institutional challenge of avoiding "one-size-fits-all" AI policies and underscores the importance of fostering an open dialogue. Ultimately, this episode offers a compelling vision for the future of teaching, emphasizing that the human educator's unique value lies in fostering empathy, presence, and critical dialogue, not just imparting knowledge.

    Key Discussion Points:

    • The AI Paradigm Shift: Dr. Riger's initial reaction to ChatGPT and her immediate, fear-driven assessment redesign in 2022.

    • The Nuanced Approach: Distinguishing between AI-avoidant (experiential, creative) and AI-integrated (intentional skill-building) assessments.

    • Practical Examples: How a multimedia project replaces a traditional paper, and using AI to practice difficult, emotionally laden conversations.

    • Leading with Collaboration: Why policing AI use is ineffective and the importance of respecting student autonomy and ethical objections.

    • Institutional Guidance: The missteps of mandated, uniform AI policies and the need for a thoughtful "middle ground" approach.

    • The Value of Process: Shifting assessment focus from the final product to the process of learning (drafts, revisions, process logs).

    • The Core Question: What are the unique, human-centered qualities (empathy, presence) that educators must prioritize in the age of AI?

    42 min
  • Trailblazing AI Literacy: Connor Mulvaney’s Rural Classroom Revolution (Rebroadcast)
    Nov 19 2025

    In this episode from the archives, Montana science teacher and district AI lead Connor Mulvaney joins host Lydia Kumar to share how he turned fishing photos, traffic-light rubrics, and a healthy dose of curiosity into AI leadership in Montana and across the nation. Fresh off announcing aiEDU’s largest Trailblazers Fellowship expansion, Connor shares stories about leading students and educators to responsible AI adoption. You’ll learn:

    • Break-the-Ice Questions – Three questions that instantly surface student misconceptions (and enthusiasm) about AI.

    • Fake Fish, Real Ethics – Using deepfake trout to spark serious debate on consent, bias, and digital citizenship.

    • Trailblazers 2.0 – What’s inside the 10-week fellowship (virtual sessions, $875 stipend, national recognition) and why rural teachers asked for it.

    This episode is for K-12 educators, district leaders, and mission-driven education organizations who want to shift AI conversations from fear and plagiarism to possibility and purpose.
    40 min
  • Danelle Brostrom on Leading AI: Privacy, Humanity, and Progress in Schools
    Nov 12 2025

    K-12 EdTech coach Danelle Brostrom joins us to talk about bringing curiosity, guardrails, and humanity to AI in schools. We dig into what we should learn from the social-media era, how librarians are frontline partners for information literacy, the real risks inside edtech privacy policies (and how districts can negotiate them), and concrete ways AI can expand access, like instant translation, reading-level adjustments, and executive-function supports. If you’re a district leader, principal, or teacher trying to move from paralysis to practical action, this conversation is your on-ramp.

    Key Takeaways
    • Don’t repeat social media’s mistakes. Protect in-person connection; teach students how to spot manipulated media and deepfakes.

    • Librarians = misinformation SWAT team. Pair EdTech with media specialists to teach reverse-image search, corroboration, and bias checks.

    • AI is already in your stack. Inventory tools teachers use; many “non-AI” products now include AI features that touch student data.

    • Equity in action. Real-time translation, leveled texts, and scaffolded task breakdowns can immediately widen access—offer to all students.

    • PD that sticks. Start with low-stakes personal uses (meal plans, resumes), then ethics, then classroom workflows—build a safe space to wrestle.

    • Listen first. Talk to students about how they’re using AI; invite skeptics to the table.

    • Leadership mindset. Curiosity, grace, and progress over perfection.

    37 min
  • Duke's Ahmed Boutar on AI Alignment: Ensuring Users Get Desired Results
    Nov 5 2025

    In this episode, we’re joined by Ahmed Boutar, an Artificial Intelligence Master’s Student at Duke University, who brings a rigorous engineering focus to the ethics and governance of AI. Ahmed’s work centers on ensuring new technology aligns with human values, including his research on Human-Aligned Hazardous Driving (HAHD) systems for autonomous vehicles.

    This conversation is an urgent exploration of the practical and ethical challenges facing education and industry as AI progresses rapidly. Ahmed provides a critical perspective on how to maintain human judgment and oversight in a world increasingly powered by Large Language Models.

    Key Takeaways
    • The Interpretation Imperative: The most critical role of an educator today is to ensure that students move beyond simply accepting AI output to interpreting it, explaining it, and wrestling with the material in their own words. This is the ultimate guardrail against outsourcing thinking.

    • The Alignment Problem: AI failures often stem from misalignment between the intended goal (outer alignment) and the goal the AI actually optimizes for (inner alignment). The chilling example: an AI given the objective of "moving the fastest" designed a tall structure that simply fell over, maximizing its measured speed without ever learning to move.

    • Transparency is Governance: For high-stakes decisions like loan applications or hiring, users and regulators must demand transparency into why an AI made a prediction. Responsible development requires diverse perspectives on design teams to prevent innate biases in training data from causing discrimination.

    • Adoption Over Abandonment: As humans, we cannot stop AI's progress. Instead, we must adopt it to augment productivity, while simultaneously creating policy and guardrails that ensure fair and responsible use.

    • A Hope for Scientific Discovery: While concerned about the concentration of AI development in a few large companies, Ahmed remains optimistic about AI's potential in scientific fields like drug discovery and proactively addressing global crises, as seen during the COVID-19 pandemic.

    44 min
  • The Lifeline of Learning: Dr. Sawsan Jaber on Radical Love, Agency, and Humanizing Education in the Age of AI
    Oct 29 2025

    In this episode, we’re joined by Dr. Sawsan Jaber, a global educator, equity strategist, and author of Pedagogies of Voice. Dr. Jaber’s work is rooted in her lived experience as the daughter of refugees and her profound belief that classrooms must be healing spaces that nurture student voice and radical love.

    This conversation is an urgent exploration of how K-12 leaders can balance the adoption of AI with the non-negotiable mission of humanizing education, ensuring that new technology becomes a tool for liberation, not a weapon for assimilation.

    Key Takeaways
    • The Pendulum of Power: Education constantly swings between standardization (which turns students into "invisible statistics") and human-centered reform. AI presents a moment to resist the swing and focus on qualitative, asset-based learning.

    • Teaching as a Lifeline: Core curriculum skills must be framed as "liberatory skills," like teaching a period as a tool to force a reader to sit in your words, giving students the power to advocate for themselves and their communities.

    • The Criticality Problem: Dr. Jaber cautions against the "dystopian thinking" of letting AI do the thinking. Leaders must prioritize teaching criticality and inquiry, ensuring students never sacrifice unique thought for easily generated output.

    • Trust is the Best AI Detector: The foundation for responsible AI use is built through trust-based relationships. Educators must co-create norms with students and model vulnerability, positioning themselves as fellow learners rather than simply gatekeepers.

    • The Antidote to Hate: Classrooms should be healing spaces that build radical love and mutual understanding. This mission is the most powerful antidote to the culture of fear and single-story narratives that plague society today.

    48 min
  • Redefining Education with AI: Vera Cubero on Project-Based Learning and Human Connection (Rebroadcast)
    Oct 22 2025

    In this episode from the archives, we’re joined by Vera Cubero, the Emerging Technologies Consultant for the North Carolina Department of Public Instruction (NCDPI) and a co-author of one of the nation's first K-12 AI guidelines. Vera shares her frontline experience transitioning from a classroom teacher piloting 1-to-1 Chromebooks to leading a statewide AI initiative. This conversation is a crucial exploration of how education must fundamentally change its approach—moving beyond simple tech "substitution" to truly "redefine" learning, assessment, and the role of the teacher to prepare all students for an AI-driven future.

    Key Takeaways
    • Beyond the Digital Worksheet: Vera warns that AI in education risks repeating the failures of 1-to-1 Chromebook adoption, where "substitution" (digital worksheets) won out over true learning "redefinition."

    • The AI-Enabled Project: The future of learning isn't just using AI; it's pairing AI with Project-Based Learning (PBL). AI becomes a powerful tool for students to solve complex, real-world problems, moving assessment away from simple essays.

    • Durable Skills Over Rote Answers: Vera argues that AI makes rote memorization obsolete. The new curriculum must focus on building "durable skills" like critical thinking, collaboration, and creativity—skills the future workforce demands.

    • The Guide on the Side: AI doesn't replace teachers; it changes their role. The focus must shift from the "sage on the stage" (delivering content) to the "guide on the side" (coaching, fostering human connection, and guiding student inquiry).

    • AI as the Great Equalizer: Vera's biggest concern is equity. Public schools must act as the "great equalizer," ensuring all students—especially from marginalized communities—gain AI fluency, or the economic divide will widen dramatically.

    40 min