AI Ethics Now

By: Tom Ritchie, Jennie Mills, IATL, WIHEA, University of Warwick

About this title

AI Ethics Now is a podcast dedicated to exploring the complex issues surrounding artificial intelligence from a non-specialist perspective, including bias, ethics, privacy, and accountability. Join us as we discuss the challenges and opportunities of AI and work towards a future where technology benefits society as a whole. This podcast was first developed by Dr Tom Ritchie and Dr Jennie Mills as part of The AI Revolution: Ethics, Technology, and Society module, taught as part of IATL at the University of Warwick.
  • 11. AI and Assessments: When Students Ask "Does This Sound Like Me?"
    Jan 18 2026

    What happens when students delegate not just writing, but reasoning itself to AI?

    Dr Chahna Gonsalves, Senior Lecturer at King's Business School, reveals how generative AI is transforming critical thinking in higher education through what she calls "epistemic offloading": the process of outsourcing intellectual work to tools like ChatGPT.

    This conversation examines how students are using AI to interpret readings, generate argument structures, and pre-evaluate their own work, shifting responsibility for core intellectual tasks. Chahna explores why AI prizes polish over depth, how this affects students' evaluative judgment, and what happens when students ask "does this sound like me?"

    We discuss the equity implications of tech-savviness, why reflexive AI use matters more than bans, and how Bloom's Taxonomy reveals which cognitive processes students readily offload versus protect. Chahna argues we need transparent conversations about delegation, judgment, and what truly requires human reasoning.

    Essential listening for anyone grappling with AI's role in learning, assessment design, and the future of thinking itself.

    This episode continues our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

    AI Ethics Now

    Exploring the ethical dilemmas of AI in Higher Education and beyond.

    A University of Warwick IATL Podcast

    This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

    This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We share these discussions to offer a wider audience a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence.

    Join us each fortnight for new critical conversations on AI ethics with local, national, and international experts.

    We will discuss:

    • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
    • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
    • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.

    If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.

    32 min
  • 10. AI and Dependence: Are We Misdiagnosing the Harms?
    Jan 4 2026

    Do you use ChatGPT or Claude daily for work? Mark Carrigan, Senior Lecturer in Education at Manchester Institute of Education, joins the podcast to discuss why we might be misdiagnosing the harms of generative AI. His research suggests the problems aren't inherent to the technology itself, but arise when AI systems meet the already broken bureaucracies of higher education and other sectors.

    Mark introduces the LLM Interaction Cycle, a framework he developed with the philosopher of technology Milan Stürmer, to understand how we engage with AI over time through three phases: positioning (how we assign roles to the AI), articulation (how we put our needs into words), and attunement (the sense that the AI understands us). He explains how use that begins as purely transactional often drifts toward something more affective as models build memory and context about us, and why this drift matters for how we think about ethical AI use.

    We go on to explore teacher agency in the age of generative AI, examining why fear of appearing ignorant prevents honest conversations between educators and students. Mark discusses three key risks facing universities:

    • lock-in (dependency on specific platforms),
    • loss of reflection (increasingly habitual rather than thoughtful use), and
    • commercial capture (vendor interests shaping institutional practices).

    He argues that reflective use isn't just beneficial but ethically necessary, yet the pressures facing academics and students make reflection increasingly difficult.

    The conversation finishes by examining why universities in financial crisis are particularly vulnerable to both the promises and pitfalls of AI adoption, how institutional AI strategies risk creating new waves of disruption, and why understanding student realities (including significant paid work commitments) is essential to addressing concerns about AI in education. Mark concludes by making the case that we cannot understand the problems of generative AI without understanding the wider systemic crisis in higher education.

    This episode launches our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

    35 min
  • 9. AI and Bias: How AI Shapes What We Buy
    Dec 15 2025

    As you search for Christmas gifts this season, have you asked ChatGPT or Gemini for recommendations? Katarina Mpofu and Jasmine Rienecker from Stupid Human join the podcast to discuss their groundbreaking research examining how AI systems influence public opinion and decision-making. Conducted in collaboration with the University of Oxford, their study analysed over 8,000 AI-generated responses to uncover systematic biases in how AI systems like ChatGPT and Gemini recommend brands, institutions, and governments.

    Their findings reveal that AI assistants aren't neutral—they have structured and persistent preferences that favour specific entities regardless of how questions are asked or who's asking. ChatGPT consistently recommended Nike for running shoes in over 90% of queries, whilst both models claimed the US has the best national healthcare system. These preferences extend beyond consumer products into government policy and educational institutions, raising critical questions about fairness, neutrality, and AI's role in shaping global narratives.

    We explore how AI assistants are more persuasive than human debaters, why users trust these systems as sources of truth without questioning their recommendations, and how geographic and cultural biases develop through training data, semantic associations, and user feedback amplification. Katarina and Jasmine explain why language matters - asking in English produces US-centric biases regardless of where you're located - and discuss the implications for smaller brands, niche markets, and diverse user groups systematically disadvantaged by current AI design.

    The conversation examines whether companies understand they're building these preferences into systems, the challenge of cross-domain bias contamination, and the urgent need for frameworks to identify and benchmark AI biases beyond protected characteristics like race and gender.

    25 min