If Anyone Builds It, Everyone Dies
Why Superhuman AI Would Kill Us All
€12.56 for the first 30 days
Limited-time offer
Activate your Audible membership at €0.99/month for 3 months to get this title at an exclusive members-only price.
Offer valid until 11:59 p.m. on December 12, 2025.
After 30 days (60 for Prime members), €9.99/month. Cancel anytime, month to month.
Save more than 90% over the first 3 months.
Unlimited listening to our ever-growing selection of thousands of audiobooks, podcasts, and Audible Originals.
No commitment. Cancel anytime and keep all the titles you have purchased.
Available on any device, even without a connection.
After signing up for a membership, you can purchase this and every other audiobook in our extended catalog at a 30% discount.
Buy now for €17.95
Read by: Rafe Beckley
About this title
The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
Critic reviews
“The most important book of the decade. This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fueled by wishful thinking.”—Max Tegmark, author of Life 3.0: Being Human in the Age of AI
“If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.”—Tim Urban, cofounder, Wait But Why
“The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster.”—Stephen Fry
“The best no-nonsense, simple explanation of the AI risk problem I've ever read.”—Yishan Wong, former CEO of Reddit
“Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.”—Emmett Shear, former interim CEO of OpenAI
“Everyone should read this book. There’s a 70% chance that you—yes, you reading this right now—will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance.”—Daniel Kokotajlo, AI Futures Project
“A compelling introduction to the world’s most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike.”—Scott Alexander, founder, Astral Codex Ten
“Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong.”—Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge