Artificial Intelligence For Mass Murder or Slavery Strains the Odds Nearly to Zero


Could supercomputers driven by artificial intelligence (AI) be made that would enslave or murder most humans? Yes, but it’s unlikely. Seriously unlikely, even for a few generations ahead.

Murder can be defined as strictly here as the law defines it for murder by humans. Accidental deaths will happen; some already have, with self-driving cars, and we don’t yet know enough to claim that one model has a better safety record than the average human-driven vehicle. Military killings using drones, however technologically advanced, are not murder unless they are war crimes. Capital punishment is approved by the same government that outlaws murder and thus is not itself murder.

Slavery can likewise be narrowly defined. Working hard almost around the clock as a volunteer is not, by itself, slavery. Slavery is perhaps harder to carry out than murder, because the follow-up to murder requires only concealment (if even that), while slavery, to be economically useful, has to be maintained over time despite various challenges.

One view is that artificial intelligence is whatever lies behind behavior that, if performed by a human, would lead us to believe the human is intelligent, meaning naturally intelligent. Another is that artificial intelligence is whatever computational technology is the most advanced at the time. As additional concepts are newly developed for AI, some that become ordinary get dropped from the AI label even while still in use for computing generally. The AI of a few decades ago is not the same collection of methods as today’s AI. The two views may be reconciled by noting that we tend to raise the lower boundary of what counts as natural intelligence as a society, like a child, grows older. Either way, AI is highly capable and is increasingly deployed to face more challenges.

AI could be developed to both murder and enslave; and yet that’s one of the most unlikely possibilities on Earth.

So far, humans have built methods of ultimate human control into all of our technology, even if that control took a while to build in. The control usually exists for prosaic reasons, like satisfying customers. If any machine malfunctions, a customer would like it repaired or trashed, and one or the other can be done. That is a form of control over the machine.

The hardest case may be nanotechnology, technology at the tiniest scales, molecular and finer, including energy quanta, if it escapes from its containers while staying dangerously potent, and especially if it either survives a long time or can reproduce. The problem is recontainment, or at least exclusion from sensitive places. The closest analogue is disease-causing germs, which can be so helpful in some contexts that we want more of them, until the danger becomes too much. Many such diseases are essentially incurable and are either terminal or severely disabling. The plague in the 14th century infamously cut Europe’s population by more than half. Yet even then nearly half survived, and, overall, the vast majority of us survive vast challenges with hardly any affliction. We adjust our behaviors and we preserve our worldwide survival.

Not only surviving: humanity has been thriving. Our population has been growing. While there likely are limits to population growth on Earth, we keep expanding them. Our collective knowledge base has been growing, too, and almost no one says there’s a limit to that. Technology is also advancing, to the point where we’re a little afraid. How it gets advanced is the key to control.

Advanced technology usually needs an advanced thinker to design it. Such thinkers are unusually intelligent, which makes them hard to understand and sometimes hard to trust. A thinker like that can be a loner, but that’s rare. They became advanced as students by working with teachers, mentors, and fellow budding scientists, and most of them continue networking. Usually, an advanced scientist wants to develop their knowledge and ideas further through interaction with other advanced scientists. That interaction would expose a would-be loner’s identity and invite more interactions with intellectual peers.

Peers usually are not identically qualified. Scientists who are ahead of all others are typically only somewhat ahead. The intellectually nearest followers in the same field have most of the same knowledge the leader has, and working together provides a platform for solving more advanced problems.

If a loner wants to break through to the top, the loner may have to distract or kill all the peers, everyone with almost the same knowledge, to keep the secret. Distraction is often unstable. Killing is an extraordinary act that usually leaves a trail inviting suspicion, and it gives others a motivation to solve the problem the loner already secretly solved, especially if the loner’s solution seems dangerous. While scholars sometimes stop talking to each other, and often insult each other, they generally don’t commit mass murder against each other. The loner probably won’t either.

The loner could start by preserving control by default and then remove, and prevent, controllability for the dangerous invention. S/he could even become self-sacrificial toward a larger goal that AI apparently can meet. Fidel Castro was willing to sacrifice all of his beloved Cuba in order to bring Communism to the world. (The Soviet Union was bringing nuclear weapons into Cuba, and he thought that if one was launched from Cuba against the United States, the U.S. would destroy Cuba and go to war against the Soviet Union but would lose, and then the U.S.S.R. would bring Communism to the world. That’s more dangerous than being suicidal; that’s modeling self-sacrifice, in which one’s own destruction serves a goal beyond it.) One might wonder whether Iran is potentially self-sacrificial in its interest in nuclear weaponry as a way to export its theology globally (although some doubt any offensive intent). We should assume that, in a world of well over seven billion people, at least one individual could be self-sacrificial, perhaps out of anger at the world and a desire for revenge.

Fortunately, communities that learn such an attainment is apparently coming close would likely demand control. The Soviets, whose motivation was likely self-preservationist, kept control of their nuclear weapons in Cuba. So far, on other matters, communities have always kept control. History suggests that this trend will likely continue.

Yet personal secrecy could forestall a community’s efforts at control. Could a solitary, self-sacrificial inventor pursuing that larger goal build the machine secretly, perhaps by disguising it as benign? Yes, but without other communities’ critiques and support, s/he could do it only more slowly or more expensively, and finishing it would become more remote. Once substantial time or money has been spent, if a failure comes, and if trying again requires a lot more time or money, the pursuit may become discouraging or impossible. The loner might not raise the capital or live long enough.

A quick way to build something is to use off-the-shelf components, already in inventory. But it’s hard to make something both advanced and uncontrollable that way. Doing so with this goal would be a daunting quest. Daunting quests often fail.

As powerful as the loner’s intelligence may be, their brain is only about three pounds. That gray matter is far outclassed by the many brains elsewhere, if they focus and coordinate on what this inventor is doing, and probably even if they just accidentally block the soloist’s nefariousness. The odds against the loner are nearly insurmountable.

The likelier developmental path is incremental, with many inventors each contributing something unique, and that makes building in ultimate human control more likely, through trapdoors, errors, and other means.

It would still be possible to build this, but, I think, only under extraordinary circumstances. One example: development that is unchecked but incomplete, with control by outside communities not yet imposed, followed by a disaster so overwhelming that most humans die and the rest lose most of modern technology and much of their knowledge (e.g., through destruction of books and computer media). Suppose that among the few survivors are enough people who were developing this machinery and continue doing so, and now no one else can control what the developers do. Or suppose the development stops but the invention could still do a bad thing and the restraints no longer work. Those cases, however, are so extremely improbable that they’re not worth planning for.

Disasters should be planned for. Some planning has been done. But a society that invested against every possible disaster would be financially unable to sustain itself in ordinary nonemergency conditions. It would soon collapse and die. A society has to select a ceiling for emergency planning: it should plan up to the ceiling, because it can afford that and it’s existentially necessary, but it should not do much planning above the ceiling, even when that is existentially necessary too, because it can’t afford it and just trying to spend the sum would bankrupt and destroy the society.

No one has an unlimited budget. Not even the world’s nations and people combined have an unlimited budget: the total of all exploitable resources (including people, money, extractables, biomass, usable water, usable air, land to occupy, and usable vehicles) is finite. Anyone spending all of it would soon starve, and starving is a limit.

Various institutions, such as big businesses, have addressed where their own ceilings should be. You can do it for your home or family. To start planning against a given future disaster, you predict the monetary loss from that disaster, predict the probability of that disaster occurring within a given time period, such as a year, and multiply the loss by the probability to find the breakeven amount of money. Spend on prevention or mitigation only if the spending would be less than the breakeven.
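
As a minimal sketch of the breakeven rule just described, the short Python snippet below multiplies a predicted loss by its predicted annual probability and compares the result to a proposed prevention budget. The function names, dollar figures, and probability are hypothetical illustrations, not numbers from this essay.

    # A minimal sketch of the breakeven rule described above.
    # All figures are hypothetical placeholders, not estimates from this essay.

    def breakeven(predicted_loss: float, annual_probability: float) -> float:
        """Expected annual loss: the most it is rational to spend per year on prevention."""
        return predicted_loss * annual_probability

    def worth_spending(prevention_cost: float, predicted_loss: float,
                       annual_probability: float) -> bool:
        """Spend on prevention or mitigation only if it costs less than the breakeven."""
        return prevention_cost < breakeven(predicted_loss, annual_probability)

    # Example: a disaster predicted to cost $10 million, with a 0.1% chance per year.
    print(breakeven(10_000_000, 0.001))               # 10000.0, i.e., a $10,000 breakeven
    print(worth_spending(8_000, 10_000_000, 0.001))   # True: $8,000 is below the breakeven
    print(worth_spending(50_000, 10_000_000, 0.001))  # False: $50,000 exceeds the breakeven

This is just the standard expected-value comparison; a fuller plan would also weigh nonmonetary losses and how the probability changes over time.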

I haven’t done the math, but I think that this plan for developing a massively destructive technological monster is far too unlikely to warrant assigning protective resources against it now.

We should put this AI risk into a league with other highly improbable events. A giant meteor slamming into Earth, heavier than the one that drove the large dinosaurs into extinction, would create plenty of other problems that would immediately become top priorities. The dinosaurs died because vegetation died under the dust-filled, darkened sky, and even the giant meat-eating dinosaurs ran out of food. In a new disaster of the same kind, if most vegetation couldn’t grow, we humans couldn’t survive either. Most of us would die. Those who survived would have to find each other or die along with almost everyone else. Radios and planes probably wouldn’t work much. Life-saving medicine would mostly be gone. Firefighting would usually be hopeless; your only hope would be to run, and you’d likely be burned to ash. Probably, your screams wouldn’t matter and wouldn’t even be distressing or interesting to anyone except you and a vulture. Most people live near coastlines, and they’d likely be drowned by ocean waves of a height we’re not used to seeing. Compared to all that and more, robots would be way down the list of concerns. The bots might melt next to us, but we’d be finished, too.

Scarier yet might be a different kind of emergency: maybe somewhere in the universe is a life form already much more intelligent and powerful than we are, which comes near Earth, colonizes us, and puts us under the control of its AI, and that AI enslaves or erases us because the alien life form wants Earth empty of humans, wanting the planet for its own use.

Cosmologist Stephen Hawking and astronomer Carl Sagan agreed that, statistically, more intelligent life forms probably exist, but no one has identified a single one so far. Through SETI, the search for extraterrestrial intelligence, we listen for signals from outer space that could be evidence of substantial intelligence, but we haven’t heard one yet.

If this scenario came to pass, to our detriment, it would mean that the alien AI is under alien control rather than Earthly human control, reinforcing the point that AI stays under the control of some living form, just as we might use our own AI to reduce unwanted bugs or weeds and then shut it off pretty much at will. Presumably, any superintelligent grownups in the universe wouldn’t allow their AI to operate without control.

But while those far-away superintelligent beings have not spoken to us, at least not in words we understand, we can say at least this about them: a mass arrival of extraplanetary superintelligences doesn’t seem more likely now than it was a hundred years ago or five thousand years ago. And, as far as we know, they didn’t visit us then, or at any other time. So it’s a good guess that they’re no more likely to visit us next Tuesday.

If we flip the scenario, we might contact them first, which might awaken their interest in us, and they could drop by for something we call “coffee”. But just the invitation would take more than four years to reach the nearest exoplanet, and maybe they live farther away. Then one of them would need at least another four years to get here. And if roughly eight years is too soon for comfort, the solution is a policy by us on Earth: don’t wake them up until we’re ready to deal.
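
As a rough check on that timeline, here is a small sketch in Python. It assumes the nearest known exoplanet is Proxima Centauri b, about 4.24 light-years away; the essay itself names no particular planet, so that distance is an illustrative assumption.

    # A back-of-the-envelope check of the round-trip timeline, assuming the nearest
    # known exoplanet is Proxima Centauri b at roughly 4.24 light-years
    # (an illustrative assumption, not a figure from this essay).

    DISTANCE_LIGHT_YEARS = 4.24  # approximate distance to Proxima Centauri b

    outbound_years = DISTANCE_LIGHT_YEARS        # a radio "invitation" travels at light speed
    fastest_return_years = DISTANCE_LIGHT_YEARS  # even a light-speed reply needs this long
    round_trip_years = outbound_years + fastest_return_years

    print(f"Invitation arrives in about {outbound_years:.1f} years")
    print(f"Earliest possible reply or visit: about {round_trip_years:.1f} years")
    # Any physical visit would be slower than light, so this is a hard lower bound.

So the roughly eight years in the text is the absolute best case; any actual visit would take longer.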

In short, while there is probably a consensus that the risk exists, the risk would probably not materialize soon enough to require preventive spending now.

AI has its pluses and minuses, and much we may not care about either way. But this particular minus, this risk, has too minuscule a chance of occurring for us to do anything more than think about it, against the day when the risk becomes bigger or the cost of dealing with it becomes cheaper.

Assuredly, overall AI development can continue. We remain safe.