Robots Can’t Kill You — And Claiming They Can Is Dangerous

If we start holding robots responsible for their actions — and accidents — we let their human designers and operators off the hook.

Robots’ involvement in human deaths is nothing new. The recent death of a man who was grabbed by a robot and crushed against a metal plate at a Volkswagen factory in Baunatal, Germany, attracted extensive media attention. But it is strikingly similar to one of the first recorded cases of a death involving an industrial robot, 34 years ago.

These incidents have happened before and will happen again. Even if safety standards continue to rise and the chance of an accident happening in any given human/robotic interaction goes down, such events will become more frequent simply because of the ever-increasing number of robots.

This means it is important to understand this kind of incident properly, and a key part of doing so is using accurate and appropriate language to describe them. Although there is a sense in which it is legitimate to refer to the Baunatal incident as a case of “robot kills worker”, as many reports have done, it is misleading, verging on the irresponsible, to do so. It would be much better to express it as a case of “worker killed in robot accident”.

Admittedly, putting it that way isn’t as eye-grabbing, but that’s precisely the point. The fact is, robots, despite what one might be encouraged to believe from sci-fi, and despite what may happen in the far future, currently lack what we consider real intentions, emotions and purposes. And, contrary to recent alarmist claims, they are not going to acquire those capacities in the near future.

They can only “kill” in the sense that a hurricane (or a car, or a gun) can kill. They can’t kill in the sense that some animals can, let alone in the human sense of murder. Yet murder is likely to be what springs to most people’s minds when they read “robot kills worker”.

High stakes

Insisting on getting this language right isn’t an academic exercise in pedantry. The stakes are high. For one thing, an unwarranted fear of robots could lead to another unnecessary “artificial intelligence winter”, a period where the technology ceases to receive research funding. This would delay or deny the considerable benefits robots can bring not just to industry but society in general.

But even if you’re not optimistic about the benefits of robots, you should still want to get this issue right. Since robots don’t have responsibility, humans are the ones responsible for what robots do. However, as robots become more prevalent, it will increasingly appear as if they actually have their own autonomy and intentions, for which it will seem they can and should be held responsible.

Although there may eventually come a day when that appearance is matched by reality, there will be a long period of time (which has already begun) when this appearance will be false. Even now we are already tempted to categorise our interactions with robots into what we are responsible for and what they are responsible for. This raises the danger of scapegoating the robot, and failing to hold the human designers, deployers and users involved fully responsible.

Moral robots or morally made robots?

It’s not just those reporting on robots who need to get the language right. Policymakers, salespeople, and those in research and development who are designing the robots of today and tomorrow need to keep a clear head. Instead of asking “what’s the best way to make moral robots?”, we should ask “what’s the best way to morally make robots?”.

This subtle change in the language, if adopted, would result in big changes in design. For example, trying to give robots moral laws to follow would require us to provide them with a human-like level of common sense to apply those laws, a far harder problem. Instead of following such a design dead end, we could aim for machines that are the result of their designers’ own morals, just as we try to ethically design non-robotic technology.

In the Volkswagen accident, a company spokesperson reportedly said “initial conclusions indicate that human error was to blame, rather than a problem with the robot”. Other reports spoke of it being human error rather than the robot “being at fault” or “accountable”. This implies that, in other circumstances, the robot could have been considered to blame for the accident.

If there was a “problem with the robot”, be it faulty materials, a misperforming circuit board, bad programming, poor design of installation or operational protocols, that problem — or not anticipating it — would still have been due to human error. Yes, there are industrial accidents where no human or group of humans is to blame. But we mustn’t be tempted by the appearance of agency in robots to absolve their human creators of responsibility. Not yet anyway.

Ron Chrisley is Director of the Centre for Cognitive Science at University of Sussex.

This article was originally published on The Conversation. Read the original article.

Picture: AP Images

