Why Banning Killer AI Is Easier Said Than Done

As we head deeper into the 21st century, the prospect of getting robots to do the dirty business of killing gets closer with each passing day. In Max Tegmark’s new book, Life 3.0: Being Human in the Age of Artificial Intelligence, the MIT physicist and founder of the Future of Life Institute contemplates this seemingly sci-fi possibility, weighing the potential benefits of autonomous machines in warfare against the tremendous risks. The ultimate challenge, he says, will be convincing world powers to pass on this game-changing technology.

An unmanned US Predator drone flies over Kandahar Air Field, southern Afghanistan, on a moonlit night. (Image: AP)

AI has the potential to transform virtually every aspect of our existence, but it’s not immediately clear whether we’ll be able to fully control this awesome power. Radical advances in AI could conceivably result in a utopian paradise, or a techno-hell worthy of a James Cameron movie script. Among Tegmark’s many concerns is the prospect of autonomous killing machines, where humans are kept “out of the loop” when the time comes for a robot to kill an enemy combatant. As with so many things, the devil is in the details, and such a technology could introduce a host of unanticipated complications and risks — some of them of an existential nature.

Gizmodo is excited to share an exclusive excerpt from Life 3.0, in which Tegmark discusses the pros and cons of outsourcing life-and-death decision making to a machine, a recent initiative to institute an international ban on autonomous killing machines, and why it will be so difficult for the United States to relinquish this prospective technology.


From Chapter 3: The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs

Since time immemorial, humanity has suffered from famine, disease and war. In the future, AI may help reduce famine and disease, but how about war?

Some argue that nuclear weapons deter war between the countries that own them because they’re so horrifying, so how about letting all nations build even more horrifying AI-based weapons in the hope of ending all war forever? If you’re unpersuaded by that argument and believe that future wars are inevitable, how about using AI to make these wars more humane? If wars consist merely of machines fighting machines, then no human soldiers or civilians need get killed. Moreover, future AI-powered drones and other autonomous weapon systems (AWS; also known by their opponents as “killer robots”) can hopefully be made more fair and rational than human soldiers: equipped with superhuman sensors and unafraid of getting killed, they might remain cool, calculating and level-headed even in the heat of battle, and be less likely to accidentally kill civilians.

A Human in the Loop

But what if automated systems are buggy, confusing or don’t behave as expected? The U.S. Phalanx system for Aegis-class cruisers automatically detects, tracks and attacks threats such as anti-ship missiles and aircraft. The USS Vincennes was a guided missile cruiser nicknamed Robocruiser in reference to its Aegis system, and on July 3, 1988, in the midst of a skirmish with Iranian gunboats during the Iran-Iraq war, its radar system warned of an incoming aircraft. Captain William C. Rogers III inferred that they were being attacked by a diving Iranian F-14 fighter jet and gave the Aegis system approval to fire.

What he didn’t realise at the time was that they shot down Iran Air Flight 655, a civilian Iranian passenger jet, killing all 290 people on board and causing international outrage. Subsequent investigation implicated a confusing user interface that didn’t automatically show which dots on the radar screen were civilian planes (Flight 655 followed its regular daily flight path and had its civilian aircraft transponder on) or which dots were descending (as for an attack) vs. ascending (as Flight 655 was doing after takeoff from Tehran). Instead, when the automated system was queried for information about the mysterious aircraft, it reported “descending” because that was the status of a different aircraft to which it had confusingly reassigned a number used by the navy to track planes: what was descending was instead a U.S. surface combat air patrol plane operating far away in the Gulf of Oman.

In this example, there was a human in the loop making the final decision, who under time pressure placed too much trust in what the automated system told him. So far, according to defence officials around the world, all deployed weapons systems have a human in the loop, with the exception of low-tech booby traps such as land mines. But development is now under way of truly autonomous weapons that select and attack targets entirely on their own. It’s militarily tempting to take all humans out of the loop to gain speed: in a dogfight between a fully autonomous drone that can respond instantly and a drone reacting more sluggishly because it’s remote-controlled by a human halfway around the world, which one do you think would win?

The Phalanx CIWS close-in weapons system. (Image: US Navy)

However, there have been close calls where we were extremely lucky that there was a human in the loop. On October 27, 1962, during the Cuban Missile Crisis, eleven U.S. Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the U.S. “quarantine” area. What they didn’t know was that the temperature onboard had risen past 45°C (113°F) because the submarine’s batteries were running out and the air-conditioning had stopped. On the verge of carbon dioxide poisoning, many crew members had fainted. The crew had had no contact with Moscow for days and didn’t know whether World War III had already begun. Then the Americans started dropping small depth charges, which they had, unbeknownst to the crew, told Moscow were merely meant to force the sub to surface and leave.

“We thought — that’s it — the end,” crew member V. P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.”

What the Americans also didn’t know was that the B-59 crew had a nuclear torpedo that they were authorised to launch without clearing it with Moscow. Indeed, Captain Savitski decided to launch the nuclear torpedo. Valentin Grigorievich, the torpedo officer, exclaimed: “We will die, but we will sink them all — we will not disgrace our navy!” Fortunately, the decision to launch had to be authorised by three officers on board, and one of them, Vasili Arkhipov, said no. It’s sobering that very few have heard of Arkhipov, although his decision may have averted World War III and been the single most valuable contribution to humanity in modern history. It’s also sobering to contemplate what might have happened had B-59 been an autonomous AI-controlled submarine with no humans in the loop.

Two decades later, on September 9, 1983, tensions were again high between the superpowers: the Soviet Union had recently been called an “evil empire” by U.S. president Ronald Reagan, and just the previous week, it had shot down a Korean Air Lines passenger plane that strayed into its airspace, killing 269 people — including a U.S. congressman. Now an automated Soviet early-warning system reported that the United States had launched five land-based nuclear missiles at the Soviet Union, leaving Officer Stanislav Petrov merely minutes to decide whether this was a false alarm. The satellite was found to be operating properly, so following protocol would have led him to report an incoming nuclear attack. Instead, he trusted his gut instinct, figuring that the United States was unlikely to attack with only five missiles, and reported to his commanders that it was a false alarm without knowing this to be true. It later became clear that a satellite had mistaken the Sun’s reflections off cloud tops for flames from rocket engines. I wonder what would have happened if Petrov had been replaced by an AI system that properly followed protocol.

Once mass-produced, small AI-powered killer drones are likely to cost little more than a smartphone. Whether it’s a terrorist wanting to assassinate a politician or a jilted lover seeking revenge on his ex-girlfriend, all they need to do is upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible. Alternatively, for those bent on ethnic cleansing, it can easily be programmed to kill only people with a certain skin colour or ethnicity. The Berkeley AI professor Stuart Russell envisions that the smarter such weapons get, the less material, firepower and money will be needed per kill.

For example, he fears bumblebee-sized drones that kill cheaply using minimal explosive power by shooting people in the eye, which is soft enough to allow even a small projectile to continue into the brain. Or they might latch on to the head with metal claws and then penetrate the skull with a tiny shaped charge. If a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.

A common counterargument is that we can eliminate such concerns by making killer robots ethical — for example, so that they will only kill enemy soldiers. But if we worry about enforcing a ban, then how would it be easier to enforce a requirement that enemy autonomous weapons be 100% ethical than to enforce that they aren’t produced in the first place? And can one consistently claim that the well-trained soldiers of civilised nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators and terrorist groups are so good at following the rules of war that they will never choose to deploy robots in ways that violate these rules? For this and other reasons, thousands of AI researchers have come out in support of an international treaty restricting the development and use of lethal autonomous weapons.

Cyberwar

Another interesting military aspect of AI is that it may let you attack your enemy even without building any weapons of your own, through cyberwarfare. As a small prelude to what the future may bring, the Stuxnet worm, widely attributed to the U.S. and Israeli governments, infected fast-spinning centrifuges in Iran’s nuclear-enrichment program and caused them to tear themselves apart. The more automated society gets and the more powerful the attacking AI becomes, the more devastating cyberwarfare can be. If you can hack and crash your enemy’s self-driving cars, auto-piloted planes, nuclear reactors, industrial robots, communication systems, financial systems and power grids, then you can effectively crash his economy and cripple his defences. If you can hack some of his weapons systems as well, even better.

Without a doubt, there are some spectacular near-term opportunities for AI to benefit humanity — if we manage to make it robust and unhackable. Although AI itself can be used to make AI systems more robust, thereby aiding the cyberwar defence, AI can clearly aid the offence as well. Ensuring that the defence prevails must be one of the most crucial short-term goals for AI development — otherwise all the awesome technology we build can be turned against us!

From the book LIFE 3.0 by Max Tegmark, © 2017 by Max Tegmark. Published by arrangement with Alfred A. Knopf, an imprint of The Knopf Doubleday Publishing Group, a division of Penguin Random House LLC.

Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence was published on August 29, 2017.

