Whether to preemptively ban killer robots will be a debate point at a United Nations meeting next week. And yes, “killer robots” sounds like a sci-fi cliché. But this is the reality of the future of warfare, and the debate is unlikely to do much other than highlight how sinister military tech may become.
The UN will meet in Geneva on April 13 to discuss the pros and cons of creating robots capable of autonomously annihilating human life. The United States and Israel already use defence systems that fire weapons automatically in response to attacks, and given the rapid rise of weaponised drones in recent years, the prospect of countries deploying lethal automated machines is very real.
Real and terrifying. Which is why Human Rights Watch and Harvard Law School recently released a report focusing heavily on the cons of such a development, titled Mind the Gap: The Lack of Accountability for Killer Robots.
The report emphasises that developing intelligent robots capable of attacking humans autonomously would essentially nullify accountability for murder and create war zones devoid of consequences. After all, if a soldier screws up and kills the wrong person, they have to deal with the ramifications. If a robot malfunctions, fails to distinguish civilians and children, or fires on friendly forces, the blame game gets murky.
As the report puts it: “They would thus challenge longstanding notions of the role of arms in armed conflict, and for some legal analyses, they would be more akin to a human soldier than to an inanimate weapon. On the other hand, fully autonomous weapons would fall far short of being human.”
HRW is asking the UN to encourage nations to sign treaties prohibiting the development of killer robots, arguing that accountability is crucial to stopping inhumane acts of war.
No matter what kind of lip service goes down at the UN meeting, the world’s military superpowers are unlikely to accept a ban like this easily. The UN held a similar debate last year, and guess what? No ban resulted.
Even if the UN does issue a ban, that’s no guarantee countries will actually play along as promised. Unmanned weapons have been hugely popular under the Obama Administration. Taking things a step further and removing humans from the “loop” — building weaponised robots that don’t need a human to first give them the OK to shoot — is an appealing prospect for countries more concerned with minimising risk to their troops than, you know, not sending weaponised robots incapable of recognising war crimes into the world.
Saying killer robots have the potential for enormous harm is a ridiculous understatement, but there are many reasons militaries around the globe want them. They offer a vision of a future where soldiers are not traumatised by committing acts of violence. Never mind that acts of violence will still occur, perhaps with more collateral damage. This will not be a quickly resolved debate.
Picture: From the cover of the Department of Defence’s Unmanned Systems Roadmap FY 2011-2036, published in 2011