What Experts Say About The Call To Ban Killer Robots

An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries – including Elon Musk and Mustafa Suleyman – has urged the United Nations to ban lethal autonomous weapons (often called “killer robots”) internationally.

Both those who signed the letter and leading Australian experts have spoken out about the move.

First up though – here’s a bit of background.

A key organiser of the letter, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, released it at the opening of the International Joint Conference on Artificial Intelligence in Melbourne, a gathering of top experts in artificial intelligence and robotics.

The open letter is the first time that AI and robotics companies have taken a joint stance on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Certain Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” it states, concluding with an urgent plea for the UN “to find a way to protect us all from these dangers.”

Signatories of the 2017 letter include Elon Musk, founder of Tesla, SpaceX and OpenAI; Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind; Esben Østergaard, founder and CTO of Denmark’s Universal Robots; Jerome Monceaux, founder of France’s Aldebaran Robotics, makers of the Nao and Pepper robots; Jürgen Schmidhuber, leading deep learning expert and founder of Switzerland’s Nnaisense; and Yoshua Bengio, leading deep learning expert and founder of Canada’s Element AI.

Their companies employ tens of thousands of researchers, roboticists and engineers, are worth billions of dollars and cover the globe from North to South, East to West: Australia, Canada, China, Czech Republic, Denmark, Estonia, Finland, France, Germany, Iceland, India, Ireland, Italy, Japan, Mexico, Netherlands, Norway, Poland, Russia, Singapore, South Africa, Spain, Switzerland, UK, United Arab Emirates and USA.

Walsh is one of the organisers of both the 2017 letter, and an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons. The 2015 letter was signed by thousands of researchers in AI and robotics working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple Co-founder Steve Wozniak and cognitive scientist Noam Chomsky, among others.

Here’s what some of the signatories of the letter had to say

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales

Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

We need to make decisions today, choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons.

Two years ago at this same conference, we released an open letter signed by thousands of researchers working in AI and robotics calling for such a ban. This helped push this issue up the agenda at the United Nations and begin formal talks. I am hopeful that this new letter, adding the support of the AI and robotics industry, will add urgency to the discussions at the UN that should have started today.

Ryan Gariepy, founder and CTO of Clearpath Robotics

The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action.

We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people, along with global instability. The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.

Yoshua Bengio, founder of Element AI and a leading “deep learning” expert

I signed the open letter because the use of AI in autonomous weapons hurts my sense of ethics and would be likely to lead to a very dangerous escalation; because it would hurt the further development of AI’s good applications; and because it is a matter that needs to be handled by the international community, as has been done in the past for other morally wrong weapons (biological, chemical, nuclear).

Stuart Russell, founder and Vice-President of Bayesian Logic

Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.

Now here’s what leading Australian experts have to say

Distinguished Professor Mary-Anne Williams, Director of Disruptive Innovation at the Office of the Provost at the University of Technology Sydney

From its earliest beginnings, human history is a tale of an arms race littered with conflicts aimed at achieving more power and control over resources.

In the near future, weaponised robots could be like the velociraptors in Jurassic Park, with agile mobility and lightning-fast reactions, able to hunt humans with high-precision sensors augmented with information from computer networks. Imagine a robot rigged as a suicide bomber, able to detect body heat or a heartbeat, that might be remotely controlled or able to make its own decisions about who and what to seek and destroy.

If we built a killer robot today, it would be dangerous in different ways – more like an unhappy, unstable toddler wielding an AK-47, wanting to kill “bad” people. Robots today have limited perception and mobility capabilities in real-world applications, but they are rapidly being enhanced with intelligence and autonomy.

There is no question robots can be developed at scale to efficiently seek and kill humans, possibly any human. The risk to human life is real, and so too is robots’ vulnerability to hacking and to being used as sophisticated tools for espionage and terrorism.

I signed the killer robot ban in 2015 because state-sponsored killer robots are a terrifying prospect.

However, enforcing such a ban is highly problematic, and it might create other problems: stopping countries such as Australia from developing defensive killer robots would leave us vulnerable to other countries and groups that ignore the ban.

Furthermore, today the potential loss of human life is a deterrent for conflict initiation and escalation, but when the main casualties are robots, the disincentives change dramatically and the likelihood of conflict increases.

So a ban on killer robots cannot be the only strategy. The nature of destructive weapons is changing; they are increasingly DIY. One can 3D print a gun, launch a bomb from an off-the-shelf drone, and turn ordinary cars into weapons.

Society and nations need much more than a killer robot ban.

Professor Anthony Finn, Director of the Defence and Systems Institute and Associate Head (Research) of the School of Engineering at the University of South Australia

Lethal autonomous robots differ from existing ‘fire-and-forget’ weapons because, although both prosecute targets without human involvement once their programming criteria are satisfied, for existing weapons it is humans who weigh military advantage against estimated collateral damage; and those humans are accountable under international humanitarian law.

Decisions regarding the legitimacy of lethal autonomous robots thus hinge on whether they comply with international humanitarian law. If lethal autonomous robots are to comply with international humanitarian law, they cannot be indiscriminate: they must be constrained by paradigms that classify targets by signature or region.

Ideally, lethal autonomous robots would balance key principles of international humanitarian law – discrimination and proportionality – using sophisticated algorithms yet to be developed, although it remains an open question whether they could ever achieve this all of the time: it challenges even humans.

However, the standard of international humanitarian law is one of reasonableness, not perfection; and the declaration of a weapon as unlawful centres on its inability to be directed discriminately at lawful targets under any circumstances, combined with the suffering caused by the effect of the weapon.

This is not changed by the autonomy of an engagement.

Finally, lethal autonomous robots might well reduce collateral damage – just as the current arsenal of fire-and-forget weapons has. This negates the notion that lethal autonomous robots should be declared unlawful per se.

The key is to establish circumstances under which their use might be permitted and to develop practical legal frameworks that allocate responsibility for infringements.

James Harland, Associate Professor in Computational Logic in the School of Computer Science and IT at RMIT University in Melbourne

In the past, technology has often advanced much faster than legal and cultural frameworks, leading to technology-driven situations such as mutually assured destruction during the Cold War, and the proliferation of land mines.

I think we have a chance here to establish this kind of legal framework in advance of the technology for a change, and thus allow society to control technology rather than the other way around.

I have seen first-hand the appalling legacy of land mines in countries such as Vietnam (where RMIT has two campuses), where hundreds of people are killed or maimed each year from mines planted over 40 years ago.

Dr Michael Harre, Lecturer in the Complex Systems Group and PM Program in the Faculty of Engineering & Information Technologies at the University of Sydney

It is an excellent idea to consider the positives and the negatives of autonomous systems research and to ban research that is unethical.

An equally important question is the potential for non-military autonomous systems to be dangerous, such as trading bots in financial markets that put billions of dollars at risk.

Soon we will also have autonomous AIs with a basic psychology, an awareness of the world similar to that of animals. These AIs may not be physically dangerous, but they may learn to be dangerous in other ways, just as Tay, Microsoft’s chatbot, learned to be anti-social on Twitter.

So what are our ethical responsibilities as researchers in these cases? These issues deserve a closer examination of what constitutes ‘ethical’ research.

