Will Artificial Intelligence Drive the World Into Poverty Until We Drop Dead?

Artificial intelligence won’t drive most of us into the poorhouse.

The fear, which has some basis, is that AI will replace human workers, even executives, so that fewer people do the work that keeps us fed and sheltered, able to raise our children and meet our other needs and wants. The fear is that a few of us will do fine but many of us will starve, freeze, abandon our babies to die, and, instead of enjoying a few pleasures, wander in a daze or lie comatose until we expire, while families, communities, and nations break apart into strife and war.

The reasoning is historical: we have been there. Printing presses likely put many scribes out of business, even highly skilled scribes with beautiful calligraphy. Many miners can no longer make their livings with pickaxes; they need computer skills and heavy machinery, and fewer miners are needed to dig up the same quantity of whatever is down there, so some burly guys are out of work. Often, over the centuries, we recoiled at some new thing that was hurting most people at the time. There likely were demands to get rid of it, and sometimes we did. The Luddites went around smashing machinery to save their contemporaries' jobs. But many times we began to get rid of the offensive thing, then gradually reconsidered and embraced it. We figured out how to make it work for us. We did that because the alternative to galloping technology is spearing deer outside our caves and drinking the same water hippos bathe in. So we moderate progress and send the Luddites away.

Countervailing forces will save us. A fundamental fact of human behavior is that almost no one agrees to die simply because economic resources run short. Those who do tend to have some additional reason for wanting to die, such as being old and hopeless about a meaningful future, or suffering, at any age, intense embarrassment from a financial loss that frightens their families. The rest, the people with no reason to die except scarcity, will stay alive, even when that requires committing crimes. If they are not yet desperate enough for that, but are worried and angry, they will likely exert political pressure on whoever they think is depriving them. This has happened for centuries. Sometimes the people fighting back lose, but if AI is in the hands of relatively few, the few with AI will lose to the many without it, because nothing seen as a major threat to basic survival will be allowed to grow until it is essentially invincible.

Also countervailing: the only way for a major advance to be socially, politically, and economically sustainable is for its proponents to share the benefits, voluntarily or otherwise. That is how cars displaced horses and wheels displaced walking. Someone with newfangled cars or wheels shared them (perhaps under duress, but shared nonetheless), and then other people wanted more and made more. Not only must the final benefits be shared; the means of developing them must be shared too, or no one else has a reason to invest in them. If wheels, like fire, had been hidden even from warriors, the explanation on offer would have been "magic," investment would have gone to magicians, the magicians could not have delivered, and real development would have slowed. The same is true of AI. Since AI must be shared, more people will study and understand how it works. The more people understand its inner workings, the more people can control it, and that usually means more people will control it.

Is there time? If I were writing a movie script, I'd likely say "no" and add some artificial tension so you'd buy a pair of tickets. But based on real experience, yes, there is time. AI taking away virtually all of our jobs will be gradual enough for people to anticipate what's ahead. We already have our early-warning antennae up.

But suppose we ignore every warning sign and something extreme happens. Two extremes come to mind. In both, AI produces fabulous wealth; if it didn't, AI would be scrapped, so wealth should be assumed to be on its way. What varies is where the wealth goes.

One case is that wealth is limited to the very few people close to AI, with almost everyone else reduced to abject poverty. That would be the result if the people far from AI have nothing left to do and can't make ends meet. Our ancestors were threatened with that before. Sometimes it happened, but relatively briefly or narrowly. Where it wiped out an entire regional population, it did not wipe out all of humanity; some survived and reproduced. We survived when there were fewer of us and survival was harder. We were cuisine for lions. Few caves offered hospital care. You would just die. And if wipeouts didn't happen then, they're even harder now, when we're more numerous, more spread out around the planet, and more skillful. We recovered often enough that we're here today, more than seven billion of us, a record.

We recovered because it is impossible to keep starving people idle for long. If someone doesn’t hire them and instead leaves them to die in peace, the hungry out-of-work crowds start asking, pestering, throwing rascals out of office, and leading revolutions. They have no intention of dying without a fight and, in this scenario, they outnumber the affluent few. A wealthy minority can spend to repress, but if the numbers are too unequal they can’t repress enough. Even if the people with growling stomachs have inarticulate arguments and lousy weapons, they’ll make do.

And they may not need the wealthy ones. They can build their own economy. They’re alive and plan to stay that way, therefore they’ll consume. Therefore, anyone producing anything useful at any affordable price will have a market, even if they get paid only in unskilled labor. They’ll find a use for the unskilled labor, the unskilled laborers will eat, and most of the unskilled laborers won’t stay unskilled for long.

The poverty-stricken will likely consolidate their power, generally leaving the AI elite out in the cold, until both sides discover where cooperation would help, hopefully without first waging a war to persuade each other to start cooperating.

The worst version would be poverty so extreme that fighting back is impossible. But that state generally arrives only in the last day or so of starvation, hypothermia, terminal thirst, or deadly disease, and there is almost always awareness of its approach in time to resist while merely hungry, cold, thirsty, and sick, with some remaining ability to turn things around.

The other extreme, less likely but possible, is that the wealth is almost universal but leaves almost everyone idle, as when a thousand people can do all the production needed to sustain the world's seven-billion-plus population for their natural lives. In essence, free money would be handed to everyone outside AI. The concern then is not poverty but that people with no way to advance the AI would become the idle rich, and thus idle.

But they won't be idle for long. People seek personal advantage, as we have for millions of years. Sometimes that's competitive advantage; sometimes it's personal comfort; it can be any kind of advantage. That pursuit means all those billions of people with sixteen empty leisure hours a day will find something to do.

Then they'll try to spread the benefit beyond the personal. First, they'll offer their production as gifts. Then they'll exchange it for other people's production, through barter or a medium of exchange. It won't matter what's being produced as long as someone else likes it enough; it could be trainloads of oxtail ice cream exchanged for a rocket to Neptune. It won't matter whether those could be produced with AI. People often prefer brains and hands and no computers for personal projects, just as some people today build their own bicycles and bake their own pies. If AI figures out how to paint a landscape people prefer, artists at home will paint even nicer landscapes, AI will try again, and artists will try again. These will be bought and sold, and sellers will be encouraged to make more. If AI wins the landscape battle, artists will make portraits, abstractions, political art, CD sleeve art, and graffiti, or write poetry. The economy will grow. We'll be okay.

Society could be destroyed, but almost certainly not by any of our hands, not even with AI. A more likely cause is a natural disaster too large and too fast for us to protect ourselves. The sun will eventually die, but that's probably billions of years away, and by then we may be able to transport most of us to another solar system.

We're relatively safe. But safety usually follows a threat and a warning, and the warning will likely inspire bringing resources to bear so that we stay safe. So the present concerned discussions about whether AI is going too far are reasonable. We respond by developing yardsticks to measure the threats and our capacities, and then strategizing for our safety. The strategy will not end AI; it will preserve our thrival while AI keeps contributing, firmly under the human thumb.