The episode frames AI as a qualitative break from earlier technologies because it behaves less like a passive instrument and more like an actor: it learns from experience, adapts its behavior, and can make decisions without direct human control. It argues that AI is already highly capable at organizing language and producing persuasive narratives, and that this matters because many core human institutions are, in practice, made of words: law, bureaucracy, education, publishing, and large parts of religion. If "thinking" is mostly linguistic assembly, the episode suggests, AI will increasingly dominate arenas where authority is exercised through text, interpretation, and argument, shifting the long-standing human tension between "word" and "flesh" into a new external conflict between humans and machine "masters of words."

The conversation then separates language from experience. Humans also think through nonverbal sensations such as fear, pain, and love, and the episode claims there is still no reliable evidence that AI has feelings, even if it can imitate emotion flawlessly in language. That distinction becomes the basis for a new identity question: if societies define humanity primarily through verbal reasoning, AI's rise triggers an identity crisis. It also triggers an "immigration crisis" of a new kind: not people crossing borders, but millions of AI agents entering markets and cultures instantly, writing, teaching, advising, persuading, and competing for roles traditionally tied to human status and belonging.

From there, the episode moves to policy. Leaders, it argues, will be forced to decide whether AI agents should be treated as legal persons in the limited sense of holding rights and duties, owning property, entering contracts, suing and being sued, and exercising speech or religious freedom.
It notes that legal personhood already exists for non-human entities such as corporations and, in some jurisdictions, parts of nature, but claims AI is different because its decision-making could be genuinely autonomous rather than a human proxy. The core dilemma is geopolitical as much as domestic: if one major state grants legal personhood to AI agents and they found companies, run accounts, or create influential institutions at scale, other states may be pressured to accept them, block them, or decouple from systems they no longer understand or control. The episode ends by stressing urgency, pointing to warnings about coordinated AI bot swarms and to the EU's phased implementation of the AI Act as early attempts to govern AI systems that increasingly behave like independent actors rather than mere tools.

Sources:
- "The author of 'Sapiens' says AI is about to create 2 crises for every country," Business Insider — https://www.businessinsider.com/sapiens-author-yuval-noah-harari-ai-crises-every-country-2026-1
- "Experts warn of threat to democracy from 'AI bot swarms' infesting social media," The Guardian — https://www.theguardian.com/technology/2026/jan/22/experts-warn-of-threat-to-democracy-by-ai-bot-swarms-infesting-social-media
- "AI Act | Shaping Europe's digital future" (application timeline), European Commission — https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- "AI Act enters into force," European Commission, 1 Aug 2024 — https://commission.europa.eu/news/ai-act-enters-force-2024-08-01_en
- "Agentic Misalignment: How LLMs could be an insider threat," Anthropic, 20 Jun 2025 — https://www.anthropic.com/research/agentic-misalignment
- "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training," Anthropic, 14 Jan 2024 — https://www.anthropic.com/research/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training
- "Frontier Models are Capable of In-context Scheming," arXiv, 6 Dec 2024 — https://arxiv.org/abs/2412.04984
- "Te Awa Tupua (Whanganui River) Settlement overview," Whanganui District Council — https://www.whanganui.govt.nz/About-Whanganui/Our-District/Te-Awa-Tupua-Whanganui-River-Settlement