New Report On Emerging AI Risks Paints A Grim Future

A new report authored by more than two dozen experts on the implications of emerging technologies sounds the alarm about the ways artificial intelligence could enable new forms of cybercrime, physical attacks, and political disruption over the next five to ten years.

The 100-page report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” was written by 26 experts from 14 different institutions and organisations, including Oxford University’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk, Elon Musk’s OpenAI, and the Electronic Frontier Foundation. The report builds upon a two-day workshop held at Oxford University in February of last year. In it, the authors detail some of the ways AI could make things generally unpleasant in the next few years, focusing on three security domains of note – the digital, the physical, and the political – and how the malicious use of AI could upset each of them.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it,” said Miles Brundage, a Research Fellow at Oxford University’s Future of Humanity Institute and a co-author of the report, in a statement. “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

Indeed, the big takeaway of the report is that AI is now on the cusp of becoming a tremendously disruptive and negative force, as rival states, criminals, and terrorists use its scale and efficiency to launch finely targeted and highly effective attacks.

“As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the expansion of existing threats, the introduction of new threats, and a change to the typical character of threats,” write the authors in the new report.

They warn that the cost of attacks will be lowered owing to the scalable use of AI and the offloading of tasks typically performed by humans. Similarly, new threats may emerge through the use of systems that will complete tasks normally too impractical or onerous for humans.

“We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems,” they write.

In terms of specifics, the authors warn of cyber attacks involving automated hacking, spear phishing, speech synthesis to impersonate targets, and “data poisoning.” The advent of drones and semi- and fully autonomous systems introduces an entirely new class of risks; the nightmarish scenarios include the deliberate crashing of multiple self-driving vehicles, coordinated attacks using thousands of micro-drones, the conversion of commercial drones into face-recognising assassins, and the holding of critical infrastructure to ransom. Politically, AI could be used to sway popular opinion, create highly targeted propaganda, and spread fake – but perhaps highly believable – posts and videos. AI will also enable better surveillance technologies, in both public and private spaces.

“We also expect novel attacks that take advantage of an improved capacity to analyse human behaviours, moods, and beliefs on the basis of available data,” add the authors. “These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.”

Sadly, the era of “fake news” is already upon us. It’s becoming increasingly difficult to tell fact from fiction. Russia’s apparent misuse of social media during the last US presidential election showed the potential for state actors to use social networks in nefarious ways. In some respects, the new report has a “tell us something we didn’t already know” aspect to it.

Seán Ó hÉigeartaigh, Executive Director of Cambridge University’s Centre for the Study of Existential Risk and a co-author of the new study, says hype used to outstrip fact when it came to our appreciation of AI and machine learning – but those days are now long gone. “This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this,” he explained.

To mitigate many of these emerging threats, Ó hÉigeartaigh and his colleagues presented five high-level recommendations:

  • AI and ML researchers and engineers should acknowledge the dual-use nature of their research

  • Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI

  • Best practices should be identified from other high-stakes technical domains, including computer security and other dual-use technologies, and imported where applicable to the case of AI

  • The development of normative and ethical frameworks should be prioritised in each of these domains

  • The range of stakeholders and experts involved in discussions of these challenges should be expanded

In addition to these strategies, the authors say a “rethinking” of cyber security is needed, along with investments in institutional and technological solutions. Less plausibly, they say developers should adopt a “culture of responsibility” and consider the powers of data sharing and openness (good luck with that).

Ilia Kolochenko, CEO of web security company High-Tech Bridge, believes the authors of the new report are overstating the risks, and that it will be business as usual over the next decade. “First of all, we should clearly distinguish between Strong AI – artificial intelligence, which is capable of replacing the human brain – and the generally misused ‘AI’ term that has become amorphous and ambiguous,” explained Kolochenko in a statement emailed to Gizmodo.

He says criminals have already been using simple machine-learning algorithms to increase the efficiency of their attacks, but these efforts have been successful because of basic cyber security deficiencies and omissions in organisations. To Kolochenko, machine learning is merely an “auxiliary accelerator.”

“One should also bear in mind that [artificial intelligence and machine learning are] being used by the good guys to fight cybercrime more efficiently too,” he added. “Moreover, development of AI technologies usually requires expensive long term investments that Black Hats [malicious hackers] typically cannot afford. Therefore, I don’t see substantial risks or revolutions that may happen in the digital space because of AI in the next five years at least.”

Kolochenko is not wrong when he says that AI will be used to mitigate many of the threats made possible by AI, but to claim that no “substantial” risks will emerge in the coming years seems a bit pie-in-the-sky. Sadly, the warnings presented in the new report will likely fall on deaf ears until people start to feel the ill effects of AI at a personal level. For now, it’s all a bit too abstract for citizens to care about, and politicians aren’t yet prepared to deal with something so intangible and seemingly futuristic. In the meantime, we should remain wary of the risks, work to apply the recommendations proposed by these authors, and keep hammering the message home to the best of our abilities.

[The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation]

