The Embodied Lie: How the Speaking Agent Obscures Architectural Entropy
About this title
(00:00:24) The Illusion of Control in Voice Assistants
(00:04:26) The Two Timelines of AI Systems
(00:07:40) Microsoft's Partial Progress in AI Governance
(00:11:13) The Missing Link: Deterministic Policy Gates
(00:14:53) Case Study 1: The Wrong Site Deletion
(00:18:49) Case Study 2: Inadvertent Disclosure in Meetings
(00:23:03) Case Study 3: External Agents and Internal Data Exposure
(00:27:23) The Event-Driven System Fallacy
(00:27:26) The Misunderstanding of Protocol Standards
Modern AI agents don’t just act — they speak. And that voice changes how we perceive risk, control, and system integrity. In this episode, we unpack “the embodied lie”: how giving AI agents a conversational interface masks architectural drift, hides decision entropy, and creates a dangerous illusion of coherence. When systems talk fluently, we stop inspecting them. This episode explores why that’s a problem — and why no amount of UX polish, prompts, or DAX-like logic can compensate for decaying architectural intent.
Key Topics Covered
- What “Architectural Entropy” Really Means: How complex systems naturally drift away from their original design, especially when governed by probabilistic agents.
- The Speaking Agent Problem: Why voice, chat, and persona-driven agents create a false sense of authority, intentionality, and correctness.
- Why Observability Breaks When Systems Talk: How conversational interfaces collapse multiple execution layers into a single narrative output.
- The Illusion of Control: Why hearing reasons from an agent is not the same as having guarantees about system behavior.
- Agents vs. Architecture: The difference between systems that decide and systems that merely explain after the fact.
- Why UX Cannot Fix Structural Drift: How better prompts, better explanations, or better dashboards fail to address root architectural decay.
Key Takeaways
- A speaking agent is not transparency — it’s compression.
- Fluency increases trust while reducing scrutiny.
- Architectural intent cannot be enforced at the interaction layer.
- Systems don’t fail loudly anymore — they fail persuasively.
- If your system needs to explain itself constantly, it’s already drifting.
Who This Episode Is For
- Platform architects and system designers
- AI engineers building agent-based systems
- Security and identity professionals
- Data and analytics leaders
- Anyone skeptical of “AI copilots” as a governance strategy
Memorable Quotes
- “When the system speaks, inspection stops.”
- “Explanation is not enforcement.”
- “The agent doesn’t lie — the embodiment does.”
Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.