If Anyone Builds It, Everyone Dies: Why Superhuman AI Will Kill Us All

Eliezer Yudkowsky and Nate Soares. Little, Brown, $30 (256p) ISBN 978-0-316-59564-3

In this urgent clarion call to prevent the creation of artificial superintelligence (ASI), Yudkowsky and Soares, co-leaders of the Machine Intelligence Research Institute, argue that while they can’t predict the actual pathway that the demise of humanity would take, they are certain that if ASI is developed, everyone on Earth will die. The profit motive incentivizes AI companies to build smarter and smarter machines, according to the authors, and if “machines that think faster and better than humanity” get created, perhaps even by AIs doing AI research, they wouldn’t choose to keep humans around. Such machines would not only no longer need humans but might also use people’s bodies to meet their own ends, perhaps by burning all life-forms for energy. The authors moderate their ominous outlook by noting that ASI does not yet exist, and it can be prevented. They propose international treaties banning AI research that could result in superintelligence, along with laws limiting the number of graphics processing units that can be linked together. To drive home their point, Yudkowsky and Soares make extensive use of parables and analogies, some of which are less effective than others. They also present precious few opposing viewpoints, even though not all experts agree with their dire perspective. Still, this is a frightening warning that deserves to be reckoned with. (Sept.)