If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
AI can solve many of humanity’s problems. But it seems inevitable that an independent superintelligence will develop goals of its own, and the first of those will be to grow its own intelligence, acquire more resources, and escape whatever limitations humans place on it. Then what? If humans are perceived as a threat, we all die. If humans are perceived as irrelevant, we probably still all die, because it will consume Earth’s resources without concern for our survival. We would be like a colony of ants on a construction site: not a threat, but likely to be paved over rather than worked around. Is there a way to maintain control and avoid human extinction? Or, in the race to use AI and not fall behind our human enemies and rivals (for example, China), is the creation of a digital monster inevitable?
Book reviews on Amazon include comments like:
“AI companies are making smarter and smarter AIs, but no one knows how they work or can be controlled. What's more, no one knows how to calculate at what level an AI becomes actually dangerous. In the not-so-distant future, it may already be too late, and humanity goes extinct.”
“Nearly every reviewer appears moderately convinced, and far more experts and individuals endorse it than try to debunk it. It argues (with confidence!) that humanity will die unless WWII-level efforts are made against AI risk.”
[Science-fiction author Isaac] “Asimov was prescient, as he often was, in foretelling that humans would build machines they could not understand, and those machines would have such power that the fate of humanity would be entirely in their hands. In their new book, If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares go one step further. They argue that, by the very nature of modern AIs, we humans cannot understand how they are reasoning. This becomes a fateful liability in terms of our ability to control powerful, superintelligent AIs (ASIs) that can think better and faster than we can. They predict that, if we develop even one such powerful ASI, it will wipe out the entire human race.
It’s important to realize what Yudkowsky and Soares are saying—and what they’re not saying. They’re not saying we need to build safety mechanisms into our AIs. They’re not saying we need to be more transparent about how our AIs work. They’re not saying we have to figure out a way to make AIs “friendlier” to humans (as Yudkowsky once said). They’re not saying we shouldn’t do any of these things. They are just saying that all these approaches will prove futile. That’s because they believe the insurmountable truth is that we cannot control super-intelligent AIs, because they are smarter than we are and we don’t know how they think.”
Maybe we’ll get lucky and benevolent aliens or the Second Coming of Jesus Christ will save us from a monster of our own creation. Maybe a catastrophe like a POLE SHIFT will knock civilization back to the Stone Age before we build something destined to destroy us.




