The Better AI Gets, the Further We Seem from AGI
Let's take a grounded look at the state of the AI industry in early 2026 and ask whether we’re actually any closer to artificial general intelligence (AGI) or superintelligence than we were a few years ago. Despite massive valuations for companies like OpenAI and bold promises from AI lab leaders, today’s systems still struggle with hallucinations, common sense, and a genuine understanding of the world.
So join me as I revisit the core assumptions behind current AI approaches, especially the ideas that the mind is computable and that scaling up large language models is enough to "solve" intelligence, and explore why many researchers are now pivoting from the "age of scaling" to an "age of research" into the nature of intelligence itself.
What happens to AI company valuations if superintelligence remains out of reach for the foreseeable future?
And how should we rethink intelligence beyond language, code, and computation?
BREAKING: Demis Hassabis of Google DeepMind now agrees that LLMs are a dead end on the road to AGI
Substack version of this episode
My 2024 deep dive into the impediments to AGI
What non-ordinary states of consciousness tell us about intelligence
Ilya Sutskever on Dwarkesh
The LLM memorization crisis
On the Tenuous Relationship between Language and Intelligence
Gary Marcus on The Real Eisman (of Big Short fame)
Fei-Fei Li’s World Labs: https://www.worldlabs.ai/blog
Support the show
Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.
Finally, you can support the podcast here.