These 23 Principles Could Help Us Avoid An AI Apocalypse

Science fiction author Isaac Asimov famously predicted that we’ll one day have to program robots with a set of laws that protect us from our mechanical creations. But before we get there, we need rules to ensure that, at the most fundamental level, we’re developing AI responsibly and safely. At a recent gathering, a group of experts did just that, coming up with 23 principles to steer the development of AI in a positive direction — and to ensure it doesn’t destroy us.

Image: Anthony Hopkins in Westworld.

The new guidelines, dubbed the 23 Asilomar AI Principles, touch upon issues pertaining to research, ethics and foresight — from research strategies and data rights to transparency issues and the risks of artificial superintelligence. Previous attempts to establish AI guidelines, including efforts by the IEEE Standards Association, Stanford University’s AI100 Standing Committee and even the White House, were either too narrow in scope or far too generalised. The Asilomar principles, on the other hand, pooled much of the current thinking on the matter into a kind of best-practices rulebook for AI development. The principles aren’t yet enforceable, but are meant to influence the way research is done moving forward.

Artificial intelligence is at the dawn of a golden era, as witnessed by the emergence of digital personal assistants like Siri, Alexa and Cortana; self-driving vehicles; and algorithms that exceed human capacities in meaningful ways (in the latest development, an AI defeated the world’s best poker players). But unlike many other tech sectors, this area of research isn’t bound by formal safety regulations or standards, leading to concerns that AI could eventually go off the rails and become a burden instead of a benefit. Common fears include AI replacing human workers, disempowering us and becoming a threat to our very existence.

To address these and other issues, the Future of Life Institute recently brought together dozens of experts to come up with a set of core principles to nudge AI development in positive directions and steer us clear of apocalypse scenarios. Attendees of the conference came from diverse backgrounds, including engineers, programmers, roboticists, physicists, economists, philosophers, ethicists and legal scholars.

The 23 Asilomar AI Principles have since been endorsed by nearly 2300 people, including 880 robotics and AI researchers. Notable supporters include physicist Stephen Hawking, SpaceX CEO Elon Musk, futurist Ray Kurzweil and Skype co-founder Jaan Tallinn.

Experts converse over a meal at the Asilomar conference. (Image: FLI)

“Our hope was — and will be going forward — to involve a very diverse set of stakeholders, but also to ground the process in the expert knowledge of the AI developers who really understand what systems are being developed, what they can do, and where they may be going,” noted participant Anthony Aguirre, a physicist at the University of California, Santa Cruz.

Discussions were often contentious during the conference, but a high level of consensus eventually emerged. To set a high bar, the FLI organisers only accepted a principle if at least 90 per cent of the attendees agreed with it.

The principles were organised into three sections — research issues, ethics and values, and longer-term issues. Under research, principles included the need to create “beneficial intelligence” as opposed to “undirected intelligence” (more on this in just a bit), and an admonition for AI developers to maintain a healthy dialogue with policy-makers. Ethics and values included the need to ensure safety at all stages of AI development, the instilling of human values into a machine mind, the avoidance of an AI arms race and the need to maintain human control. Long-term considerations included risk assessments, control measures and adherence to the so-called “capability caution” — a warning that we should never underestimate the potential power of advanced AI.


“To think AI merely automates human decisions is like thinking electricity is just a replacement for candles.”


Patrick Lin, a conference attendee and the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, says the Asilomar AI principles sprung from “a perfect storm of influences” he hadn’t encountered before. “This was a standard-setting exercise in a field that has no cohesive identity, making the exercise much more difficult,” he told Gizmodo.

While perfect consensus was not expected, everyone at the conference could agree that AI is going to be impactful in ways we’ve never seen before. “To think AI merely automates human decisions is like thinking electricity is just a replacement for candles,” Lin said. “Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that’s fixated mostly on efficiency and profit to shape AI.”

Asilomar group photo. List of attendees can be found here. (Image: FLI)

Some of the items on the list are no-brainers, such as the need to avoid an AI arms race and to ensure safety. But other items, though simply stated, are considerably more complex. Take the injunction to develop “beneficial intelligence”. It’s vague, but Aguirre says that vagueness is a strength at this early juncture.

“We hope people will interpret it in many ways,” he told Gizmodo. “The crucial aspect of this principle, as I see it, is that benefit is part of the equation. That is, AI developers are actively thinking about what the ramifications of the tools they are developing might be, rather than just making the tools better, stronger, and faster.”

Another complicated issue is the one about maintaining human control. According to this principle,

Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

The trouble is, AI is becoming better and more efficient at many tasks, making us more inclined to offload those responsibilities. Today, for example, algorithms have taken over as our stock traders. There’s concern that humans will eventually be taken “out of the loop” on the battlefield, with robots given the autonomy to kill enemy combatants of their own accord. Aguirre says we will cede some decisions to AI systems, but we should do so with care.

“I already listen to Google Maps, for example, which helps me decide what route to take through traffic,” said Aguirre. “But recently I was caught in a mudslide-caused traffic jam for an hour with zero movement, while Google doggedly informed me it was a 48 minute trip. I eventually ignored the instructions and drove home a very roundabout way that saved me hours.” He says it’s important that we keep our eye on what decisions we are delegating and which ones we really want to keep for ourselves.
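
To picture the “human control” principle in practice, imagine a simple gate that decides which calls an automated system may make on its own and which it must hand back to a person. The sketch below is purely illustrative; the thresholds, labels and function names are invented rather than drawn from any real system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str        # what the AI proposes to do
    confidence: float  # the system's own confidence estimate, 0.0-1.0
    impact: str        # "low", "medium" or "high" -- hypothetical labels

def delegate_or_escalate(decision: Decision,
                         ask_human: Callable[[Decision], bool]) -> bool:
    """Return True if the proposed action should proceed.

    Routine, high-confidence decisions are delegated to the AI;
    anything high-impact or uncertain is escalated to a person,
    who chooses whether to delegate (the Asilomar wording).
    """
    if decision.impact == "high" or decision.confidence < 0.9:
        return ask_human(decision)  # the human decides
    return True                     # routine decision, let the AI proceed

# Usage: a route suggestion sails through; a lethal action never would.
route = Decision(action="reroute around traffic", confidence=0.97, impact="low")
strike = Decision(action="engage target", confidence=0.99, impact="high")
always_refuse = lambda d: False
print(delegate_or_escalate(route, always_refuse))   # True  (delegated to the AI)
print(delegate_or_escalate(strike, always_refuse))  # False (the human said no)
```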

Many of the principles on the new list fall into the “easier said than done” category. For instance, there’s an injunction for AI developers to be open and transparent about their research. Given the high degree of competition in the tech sector, that seems like a pipe dream. Aguirre agrees it’s an important and difficult issue.

“One positive sign is the Partnership on AI, envisaged in part as a group that can create collaborative dynamics where the benefit of participating counters the competitive ‘cost’ of sharing information,” he said. “While big advances may come from scrappy little startups with small resources, thus far much of the real advance has been in powerhouses like Google/DeepMind, IBM, Facebook, etc.”

In terms of getting developers to comply with these principles, Lin says people tend to have a hard time understanding the value of norms, leading to calls for the “fangs of law”. Failure to disclose security faults in an AI, for example, could lead to stiff fines and penalties.

“Even without the teeth of legal punishment, norms are still powerful in shaping behaviour, whether it’s at home, or in the classroom, or in international relations,” Lin said. “For instance, the US is not a signatory to the Ottawa Treaty that bans anti-personnel landmines. But we still abide by it anyway, because breaking such a strong norm has severe political and therefore economic consequences. The global condemnation that would follow is enough of a deterrent (so far).”

But this might not always be the case, requiring strict — and even international-level — oversight. It isn’t impossible to imagine a company like Facebook or Google developing a superintelligent AI that can simultaneously boost profits to unprecedented levels and pose a catastrophic threat to humanity. With profits soaring, a big company might not care about violating “norms” and having to face global condemnation. Alarmingly, we have no provision for this sort of thing.

Another principle that will be hard, if not impossible, to implement is the one stating that:

AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

This concept, known as “value alignment”, is an often-cited strategy to keep AI safe, but there’s no universally agreed-upon way to define a “human value”.

Lin agrees that value alignment is a particularly thorny problem. “At Asilomar, there was a lot of talk about aligning AI with human values, but little discussion of what these human values are,” he said. “Technologists understandably are uncomfortable confronting this prior question, but it’s an essential one. If you have a perfectly aligned AI [that is, an AI perfectly aligned with human values] but calibrated to the wrong human values, then you just unleashed a terrible technology or even weapon.” As a classic (but extreme) example, an AI could be told that making lots of paperclips is good — and it proceeds to turn the entire planet into paperclips. As Lin fears, a seemingly innocuous value could pose a serious threat.
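
The paperclip story is, at bottom, about optimising a proxy objective. The toy sketch below (all names and numbers are made up) shows the failure mode: the optimiser is only ever scored on paperclips, so nothing stops it from trading away everything else.

```python
# Toy illustration of value misalignment: the objective only counts paperclips,
# so the optimiser happily trades away everything else. All values are invented.

def reward(state: dict) -> float:
    return state["paperclips"]   # the proxy value we told the system to maximise

def candidate_actions(state: dict) -> list[dict]:
    return [
        # buy some wire: a modest, harmless gain
        {"paperclips": state["paperclips"] + 1,   "forests": state["forests"]},
        # strip-mine a forest: a huge gain by the only metric that counts
        {"paperclips": state["paperclips"] + 100, "forests": state["forests"] - 1},
    ]

state = {"paperclips": 0, "forests": 10}
for _ in range(10):
    # A greedy optimiser picks whatever maximises the stated reward;
    # nothing in the objective says forests matter, so they don't.
    state = max(candidate_actions(state), key=reward)

print(state)  # {'paperclips': 1000, 'forests': 0} -- the wrong value, faithfully optimised
```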


“If we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change.”


Finally, there’s the issue of artificial intelligence as an existential threat. On that topic, the Asilomar guidelines speak to the importance of overseeing any machine capable of recursive self-improvement — that is, an AI that continually improves its own source code, leading to a potential “runaway” effect, or that reprograms itself with features that run contrary to human values. The challenge there will be in implementing the required safeguards, and ensuring that the machine stays “in bounds” when it comes to certain modifications. This is not going to be easy.
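
One way to make the “in bounds” idea a little more concrete is to imagine every proposed self-modification passing through an independent check before it is applied, with out-of-bounds changes refused and logged for human review. The sketch below is a loose illustration under invented names; it is not a description of any real safeguard.

```python
# Hypothetical sketch: self-improvements are applied only if an independent
# check confirms they stay within fixed bounds. Everything here is invented.

ALLOWED_MODULES = {"planner", "scheduler"}     # parts the system may rewrite
PROTECTED_MODULES = {"oversight", "shutdown"}  # safeguards it must never touch

def violates_constraints(patch: dict) -> bool:
    """A deliberately simple stand-in for a real verification step."""
    return (patch["module"] in PROTECTED_MODULES
            or patch["module"] not in ALLOWED_MODULES
            or patch.get("removes_logging", False))

def apply_if_safe(system: dict, patch: dict) -> dict:
    if violates_constraints(patch):
        system["audit_log"].append(("rejected", patch))  # flag for human review
        return system
    system["modules"][patch["module"]] = patch["new_code"]
    system["audit_log"].append(("applied", patch))
    return system

system = {"modules": {"planner": "v1", "oversight": "v1"}, "audit_log": []}
system = apply_if_safe(system, {"module": "planner", "new_code": "v2"})
system = apply_if_safe(system, {"module": "oversight", "new_code": "disabled"})
print(system["audit_log"])  # one applied patch, one rejected
```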

A final important item on the list is the “capability caution”. Given that there’s no consensus on how powerful AI might become, it would be wise to refrain from making grand proclamations about its maximum potential. In other words, we should never, ever, underestimate the power of artificial superintelligence.

“No current AI system is going to ‘go rogue’ and be dangerous, and it’s important that people know that,” Aguirre said. “At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change. So how seriously we take AI’s opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts — without the press, industry or research hype that often accompanies advances — would be a good starting point.”

For now, this list of principles is just that — a list of principles. There’s no provision to have these guidelines enforced or updated, nor is there any call for institutional or governmental oversight. As it stands, developers can violate these best-of-breed recommendations, and face no consequences.

So will these 23 principles keep us safe and protect us from an AI apocalypse? Probably not, but the purpose of this exercise was to pool together the best ideas on the matter in hopes of inspiring something more concrete. As imperfect and idealistic as these principles may be, we’re better off having them.

[Future of Life Institute]

