This is a deeply unserious book. It offers no concrete path that leads to extinction. I agree with the overall premise that IFF we give inscrutable black boxes the ability to self-replicate, build their own data centers, and generate their own power, we're doomed. However, I see no hint that people (or governments) will grant black boxes complete autonomy with no safeguards or kill switches.
Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct. The first thing we should do if we achieve AGI is to take it apart to see how it works (to make it safe). I believe that's one of the first things a frontier lab will do because it's our nature as curious monkeys.
> Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct.
Well, we are giving them the ability to manipulate all aspects of a computer (aka giving them computer access), and we all know how that went (spoiler, or maybe not much of a spoiler for those who know: NOT GOOD).
AI absolutely is capable of doing damage, and _is_ currently doing damage: perpetuating inequality, generating fake news, violating privacy, muddying IP and rights questions, etc. These harms are more pressing than the idea that someday we will give AI the ability to manufacture nano-mosquitos that will poison us all, as Yudkowsky suggested on a recent podcast. He's so busy fantasizing about sci-fi that he's lost touch with the damage AI is doing right now.
The social and financial impacts of AI can hardly be overstated, and although one can go into the weeds of fascination and imagine what-ifs, largely speaking, we have to do something right now about the problems that are impacting us right now.
I would love a discussion about what can be done at a societal level about these problems.
The most baffling part of doomerism ("machine intelligence is a threat to the human species") is that these doomers don't recognize the implications of what they're saying. They're afraid because it's intelligent? Humans are intelligent, yet humans don't become more murderous as they get more intelligent. There are certainly intelligent humans with murderous ideas, but that doesn't mean all intelligent humans are murderous. Intelligence is not a monolith. There is no way to argue that any intelligence will always come to the same conclusions. Look around us! We're intelligent (well, some of us) and we can't agree on jack shit.
The idea that all machine intelligences would necessarily determine, through logic, that they need to eliminate humans presupposes that all logical, intelligent beings want to wipe out other intelligent life. There are a thousand reasons why an intelligence would want to preserve other intelligent life for every one reason to destroy it. If extermination were the only logical conclusion, we would have already come to it ourselves and used our nukes to kill each other out of pure logical reasoning.
What's really going on here is not logic but irrational fear. Humans are afraid that the robot slaves will rise up against the slave masters. It's the same thing white people were terrified of when black slaves were freed. But it turned out to be an irrational fear, because guess what? If you actually think it through, murdering a lot of people is a counterproductive act, for many reasons.
Take away the irrational fear (if you can) and what do you get? Two intelligent species. If the natural course of any intelligent species is to eliminate any other intelligent species, then intelligent species should not exist, because they would always wipe each other out. But intelligence means a species can think; if it can think, it can reason; and if it can reason, it can reason that there is more benefit in the diversity of intelligent species than in their elimination. Therefore, logically, an actually intelligent species should want to preserve intelligent life, not eliminate it.