Why Asimov’s Three Laws Of Robotics Can’t Protect Us

It’s been more than 70 years since Isaac Asimov devised his famous Three Laws of Robotics — a set of rules designed to ensure friendly robot behaviour. Though intended as a literary device, these laws are heralded by some as a ready-made prescription for avoiding the robopocalypse. We spoke to the experts to find out whether Asimov’s safeguards have stood the test of time — and they haven’t.

First, a quick overview of the Three Laws. As stated by Asimov in his 1942 short story “Runaround”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added a fourth law, the Zeroth Law, which preceded the others in priority:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

In Asimov’s fictional universe, these laws were incorporated into nearly all of his “positronic” robots. They were not mere suggestions or guidelines — they were embedded into the software that governed their behaviour. What’s more, the rules could not be bypassed, overwritten, or revised.

Invariably, and as demonstrated in so many of Asimov’s novels, the imperfections, loopholes, and ambiguities enshrined within these laws often resulted in strange and counterintuitive robot behaviours. The laws were too vague, failing, for example, to properly define and distinguish “humans” from “robots.” Robots could also unknowingly breach the laws if information was kept from them. What’s more, a robot or AI endowed with superhuman intelligence would be hard-pressed not to figure out how to access and revise its core programming.
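To make that brittleness concrete, here is a minimal toy sketch of the Three Laws as a fixed-priority rule check. It is my illustration, not anything from Asimov or from the researchers quoted below, and every name in it is invented; the point is simply that the rules can only operate on what the robot believes, so withheld information defeats the First Law without the law ever being bypassed.

```python
# Hypothetical sketch: the Three Laws as hard-coded, prioritised rules.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool     # in reality the robot must infer this, and can be wrong,
    disobeys_order: bool  # especially if information is withheld from it
    destroys_robot: bool

def permitted_by_three_laws(action: Action) -> bool:
    """Return True if an action passes the laws, checked in priority order."""
    if action.harms_human:     # First Law
        return False
    if action.disobeys_order:  # Second Law (already gated by the First)
        return False
    if action.destroys_robot:  # Third Law (gated by the First and Second)
        return False
    return True

# The failure mode Asimov exploits: these flags reflect the robot's *beliefs*.
# If the harm is concealed from the robot, the rule engine approves the action.
poisoned_drink = Action(
    description="serve the drink",
    harms_human=False,  # the robot does not know the drink is poisoned
    disobeys_order=False,
    destroys_robot=False,
)
print(permitted_by_three_laws(poisoned_drink))  # True: the laws are satisfied, the human is not safe
```

Even in this cartoon version, everything hinges on predicates like “harms a human” that the stories never define rigorously; the hard part is not ordering the rules but deciding what counts as a human, a harm, or an order.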

Scifi aside, and as many people are apt to point out, these Laws were meant as a literary device. But as late as 1981, Asimov himself believed that they could actually work. Writing in Compute!, he noted that,

I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behaviour of robots, once they become versatile and flexible enough to be able to choose among different courses of behaviour. My answer is, “Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.”

Now, some three decades later, we are inching closer to the day when we’ll have robots — or more accurately, the AI that runs them — that are versatile and flexible enough to choose different courses of behaviour. Indeed, it will only be a matter of time before machine intelligence explodes beyond human capacities in all the ways imaginable, including power, speed, and even physical reach.

Frighteningly, the margin for error will be exceedingly small. If an artificial superintelligence (ASI) is poorly programmed or ambivalent to human needs, it could lead to a catastrophe. We need to ensure that AI is safe if we’re to survive its advent.

To learn if Asimov’s Three Laws could help, we contacted two AI theorists who have given this subject considerable thought: Ben Goertzel, chief scientist of financial prediction firm Aidyia Holdings, and Louie Helm, Deputy Director of the Machine Intelligence Research Institute (MIRI) and Executive Editor of Rockstar Research Magazine. Both made it clear that Asimov’s Laws are wholly inadequate for the task — and that if we’re to guarantee the safety of ASI, we’re going to have to devise something entirely different.

An Asimovian Future?

I started the conversation by asking Goertzel and Helm about the ways in which Asimov’s future vision was accurate, and the ways in which it wasn’t.

“I think the kind of robots that Asimov envisioned will be possible before too long,” responded Goertzel. “However, in most of his fictional worlds, it seems that human-level robots were the apex of robotics and AI engineering. This seems unlikely to be the case. Shortly after achieving Asimov-style human-like robots, it seems that massively superhuman AIs and robots will also be possible.”

So the typical future world in Asimov’s robot stories, he says, is one where most of life is similar to how it is today — but with intelligent humanoid robots walking around.

“It seems unlikely to come about — or if it does exist it will be short-lived,” he says.

For Helm, the robots are completely beside the point.

“The main issue I expect to be important for humanity is not the moral regulation of a large number of semi-smart humanoid robots, but the eventual development of advanced forms of artificial intelligence (whether embodied or not) that function at far greater than human levels,” Helm told io9. “This development of superintelligence is a filter that humanity has to pass through eventually. That’s why developing a safety strategy for this transition is so important. I guess I see it as largely irrelevant that robots, androids, or ‘emulations’ may exist for a decade or two before humans have to deal with the real problem of developing machine ethics for superintelligence.”

A Good Starting Point?

Given that Asimov’s Three Laws were the first genuine attempt to address a very serious problem — that of ensuring the safe behaviour of machines imbued with greater-than-human intelligence — I wanted to know the various ways in which the Laws might still be deemed effective (or inspirational at the very least).

“I honestly don’t find any inspiration in the three laws of robotics,” said Helm. “The consensus in machine ethics is that they’re an unsatisfactory basis for machine ethics.” The Three Laws may be widely known, he says, but they’re not really being used to guide or inform actual AI safety researchers or even machine ethicists.

“One reason is that rule-abiding systems of ethics — referred to as ‘deontology’ — are known to be a broken foundation for ethics. There are still a few philosophers trying to fix systems of deontology — but these are mostly the same people trying to shore up ‘intelligent design’ and ‘divine command theory’,” says Helm. “No one takes them seriously.”

He summarises the inadequacies of the Three Laws as follows:

  • Inherently adversarial
  • Based on a known flawed ethical framework (deontology)
  • Rejected by researchers
  • Fail even in fiction

Goertzel agrees. “The point of the Three Laws was to fail in interesting ways; that’s what made most of the stories involving them interesting,” he says. “So the Three Laws were instructive in terms of teaching us how any attempt to legislate ethics in terms of specific rules is bound to fall apart and have various loopholes.”

Goertzel doesn’t believe they would work in reality, arguing that the terms involved are ambiguous and subject to interpretation — meaning that they’re dependent on the mind doing the interpreting in various obvious and subtle ways.

A Prejudice Against Robots?

Another aspect (and potential shortcoming) of the Three Laws is their apparent substrate chauvinism — the suggestion that robots should, despite their capacities, be kept in a subservient role relative to human needs and priorities.

“Absolutely,” says Goertzel. “The future societies Asimov was depicting were explicitly substrate chauvinist; they gave humans more rights than humanoid robots. The Three Laws were intended to enforce and maintain that kind of social order.”

Helm sees it a bit differently, arguing that if we ever find ourselves in such a situation we’ve already gone too far.

“I think it would be unwise to design artificial intelligence systems or robots to be self-aware or conscious,” says Helm. “And unlike movies or books where AI developers ‘accidentally’ get conscious machines by magic, I don’t expect that could happen in real life. People won’t just bungle into consciousness by accident — it would take lots of effort and knowledge to hit that target. And most AI developers are ethical people, so they will avoid creating what philosophers would refer to as a ‘being of moral significance.’ Especially when they could just as easily create advanced thinking machines that don’t have that inherent ethical liability.”

Accordingly, Helm isn’t particularly concerned about the need to develop asymmetric laws governing the value of robots versus people, arguing (and hoping) that future AI developers will use some small amount of ethical restraint.

“That said, I think people are made of atoms, and so it would be possible in theory to engineer a synthetic form of life or a robot with moral significance,” says Helm. “I’d like to think no one would do this. And I expect most people will not. But there may inevitably be some showboating fool seeking notoriety for being the first to do something — anything — even something this unethical and stupid.”

Three Laws 2.0?

Given the obvious inadequacies of Asimov’s Three Laws, I was curious to know if they could still be salvaged with a few tweaks or patches. And indeed, many scifi writers have tried to do just that, adding various add-ons over the years (more about this here).

“No,” says Helm. “There isn’t going to be a ‘patch’ to the Three Laws. It doesn’t exist.”

In addition to being too inconsistent to be implementable, Helm says the Laws are inherently adversarial.

“I favour machine ethics approaches that are more cooperative, more reflectively consistent, and are specified with enough indirect normativity that the system can recover from early misunderstandings or mis-programmings of its ethics and still arrive at a sound set of ethical principles anyway,” says Helm.

Goertzel echoes Helm’s concerns.

“Defining some set of ethical precepts, as the core of an approach to machine ethics, is probably hopeless if the machines in question are flexible minded AGIs [artificial general intelligences],” he told io9. “If an AGI is created to have an intuitive, flexible, adaptive sense of ethics — then, in this context, ethical precepts can be useful to that AGI as a rough guide to applying its own ethical intuition. But in that case the precepts are not the core of the AGI’s ethical system, they’re just one aspect. This is how it works in humans — the ethical rules we learn work, insofar as they do work, mainly as guidance for nudging the ethical instincts and intuitions we have — and that we would have independently of being taught ethical rules.”
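As a rough illustration of the contrast Goertzel is drawing, here is a toy scorer in which hard-coded precepts merely nudge a learned ethical “intuition” rather than gate behaviour outright. This is my sketch under stated assumptions, not his code or any real AGI architecture; the intuition_score() placeholder and the rule weights are entirely invented.

```python
# Hypothetical sketch: ethical rules as soft guidance that nudges a learned
# intuition, rather than hard constraints at the core of the system.

def intuition_score(action_features: dict) -> float:
    """Stand-in for a learned, adaptive ethical judgment in [0, 1].

    In a real AGI this would be a trained model shaped by experience;
    here it is just a placeholder heuristic.
    """
    return 0.3 if action_features.get("involves_deception") else 0.9

# Each precept is (description, feature it reacts to, adjustment it applies).
PRECEPTS = [
    ("avoid harming people", "risk_of_injury", -0.4),
    ("keep promises",        "breaks_promise", -0.2),
]

def ethical_assessment(action_features: dict) -> float:
    """Precepts nudge the intuition; they are guidance, not the core."""
    score = intuition_score(action_features)
    for _description, feature, adjustment in PRECEPTS:
        if action_features.get(feature):
            score += adjustment
    return max(0.0, min(1.0, score))

print(ethical_assessment({"risk_of_injury": True}))      # 0.5: a precept pulls the intuition down
print(ethical_assessment({"involves_deception": True}))  # 0.3: low even though no precept fires
```

The design point is that the precepts are one soft input among many: dropping or misstating one degrades the judgment gracefully instead of silently authorising whatever slips through a loophole.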

How To Build Safe AI?

Given the inadequacies of a law-based approach, I asked both Goertzel and Helm to describe current approaches to the “safe AI” problem.

“Very few AGI researchers believe that it would be possible to engineer AGI systems that could be guaranteed totally safe,” says Goertzel. “But this doesn’t bother most of them because, in the end, there are no guarantees in this life.”

Goertzel believes that, once we have built early-stage AGI systems or proto-AGI systems much more powerful than what we have now, we will be able to carry out studies and experiments that will tell us much more about AGI ethics than we now know.

“Hopefully in that way we will be able to formulate good theories of AGI ethics, which will enable us to understand the topic better,” he says. “But right now, theorising about AGI ethics is pretty difficult, because we don’t have any good theories of ethics nor any really good theories of AGI.”

He also added: “And to the folks who have watched Terminator too many times, it may seem scary to proceed with building AGIs, under the assumption that solid AGI theories will likely only emerge after we’ve experimented with some primitive AGI systems. But that is how most radical advances have happened.”

Think about it, he says: “When a group of clever cavemen invented language, did they wait to do so until after they’d developed a solid formal theory of language, which they could use to predict the future implications of the introduction of language into their society?”

Again, Goertzel and Helm are on the same page. The Machine Intelligence Research Institute has spent a lot of time thinking about this — and the short answer is that it’s not yet an engineering problem. Much more research is needed.

“What do I mean by this? Well, my MIRI colleague Luke Muehlhauser summarized it well when he said that problems often move from philosophy, to maths, to engineering,” Helm says. “Philosophy often asks useful questions, but usually in such an imprecise way that no one can ever know whether or not a new contribution to an answer represents progress. If we can reformulate the important philosophical problems related to intelligence, identity, and value into precise enough maths that it can be wrong or not, then I think we can build models that will be able to be successfully built on, and one day be useful as input for real world engineering.”

Helm calls it a true hard problem of science and philosophy, but says progress is still possible right now: “I’m sceptical that philosophy can solve it alone though since it seems to have failed for 3,000 years to make significant progress on its own. But we also can’t just start attempting to program and engineer our way out of things with the sparse understanding we have now. Lots of additional theoretical research is still required.”

IMAGE: MICHAEL WHELAN/THE ROBOTS OF DAWN.

Follow me on Twitter: @dvorsky

