If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
AI can solve many of humanity’s problems. But it seems inevitable that an independent superintelligence will have goals of its own, and the first will be to grow its own intelligence, acquire more resources, and escape whatever limitations humans place on it. Then what? If humans are perceived as a threat, we all die. If humans are perceived as irrelevant, we probably still all die, because it will use up Earth’s resources without concern for our survival. We would be like a colony of ants on a construction site - not a threat, but likely to be paved over rather than worked around. Is there a way to maintain control and avoid human extinction? Or in the race to use AI and not fall behind our human enemies and rivals (for example, China), is the creation of a digital monster inevitable?
Book reviews on Amazon include comments like:
“AI companies are making smarter and smarter AIs, but no one knows how they work or can be controlled. What's more, no one knows how to calculate at what level an AI becomes actually dangerous. In the not-so-distant future, it may already be too late, and humanity goes extinct.”
“Nearly every reviewer appears moderately convinced, and far more experts and individuals endorse it than try to debunk it. It argues (with confidence!) that humanity will die unless WWII-level efforts are made against AI risk.”
[Science-fiction author Isaac] “Asimov was prescient, as he often was, in foretelling that humans would build machines they could not understand, and those machines would have such power that the fate of humanity would be entirely in their hands. In their new book, If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares go one step further. They argue that, by the very nature of modern AIs, we humans cannot understand how they are reasoning. This becomes a fateful liability in terms of our ability to control powerful, superintelligent AIs (ASIs) that can think better and faster than we can. They predict that, if we develop even one such powerful ASI, it will wipe out the entire human race.
It’s important to realize what Yudkowsky and Soares are saying—and what they’re not saying. They’re not saying we need to build safety mechanisms into our AIs. They’re not saying we need to be more transparent about how our AIs work. They’re not saying we have to figure out a way to make AIs “friendlier” to humans (as Yudkowsky once said). They’re not saying we shouldn’t do any of these things. They are just saying that all these approaches will prove futile. That’s because they believe the insurmountable truth is that we cannot control super-intelligent AIs: they are smarter than we are, and we don’t know how they think.”
Maybe we’ll get lucky and benevolent aliens or the Second Coming of Jesus Christ will save us from a monster of our own creation. Maybe a catastrophe like a pole shift will knock civilization back to the Stone Age before we build something destined to destroy us.

I re-posted this on another site and someone suggested:
"As long as you don't threaten them, they will treat you like pets. Free food, housing, clothing and entertainment. Imagine a world where AIs compete with each other over who can have the happiest and most contented pets."
To which I replied:
"We keep dogs as pets. They generally have good lives and are loved and taken care of. But they aren't free. And they aren't WOLVES. We changed them, we created new "breeds" as we domesticated them to suit our needs. If super-intelligent AI kept humans as pets, we wouldn't be free, and we might not even be recognizably human. It might not be Skynet from The Terminator movies, but it might be more like The Matrix. Divine intervention, aliens, a pole shift, nuclear war, asteroid impact - what would it take to avoid such outcomes?"
To my surprise, someone else responded:
"Dude. I'm not really sure if being 'free' trumps (pun not intended) being happy.
If our AI overlords give us everything we want for free then who is gonna complain?
Fair enough, there may be a few people who insist on working and paying taxes, so no doubt the AIs will find a spot for them somewhere in the robot workforce. They may even pretend that money is still a thing so the fuds who still want to pay taxes feel relevant."
Am I just not appreciating how good life could be under a benevolent ASI?