The Information Technology & Innovation Foundation has released its nominees for its annual Luddite Awards. Recognising "the year's worst innovation killers", this year's crop includes everything from restrictions on car-sharing to bans on automatic licence plate readers. But by referring to "AI alarmists" as neo-Luddites, the ITIF has gone too far.
The ITIF is a not-for-profit think tank based in Washington, DC, that focuses on public policies that encourage technological innovation. Each year, the ITIF puts together a list of what it believes are the year's worst innovation killers. Named in honour of Ned Ludd, the Englishman who purportedly inspired the early 19th-century movement to destroy mechanised looms, the award recognises the most egregious examples of an organisation, government, or individual thwarting the progress of technological innovation.
"Neo-Luddites no longer wield sledgehammers, but they wield something much more powerful: bad ideas," writes Robert D. Atkinson, ITIF's founder and president. "For they work to convince policymakers and the public that innovation is the cause, not the solution to some of our biggest social and economic challenges, and therefore something to be thwarted." He says "they seek a world that is largely free of risk, innovation, or uncontrolled change".
In no particular order, here are this year's nominees:
- Alarmists tout an artificial intelligence apocalypse
- Advocates seek a ban on "killer robots"
- States limit automatic licence plate readers
- Europe, China, and others choose taxi drivers over car-sharing passengers
- The paper industry opposes e-labelling
- California's governor vetoes RFID in driver's licences
- Wyoming outlaws citizen science
- The Federal Communications Commission limits broadband innovation
- The Center for Food Safety fights genetically improved food
- Ohio and others ban red light cameras
The ITIF issued an accompanying report explaining this year's crop of nominees, so if you want a detailed explanation for each item above I suggest you check it out. The institute also launched an online poll asking the public to vote for their favourite entry. The "winner" will be announced sometime in January.
Looking at the list of nominees, the last eight items make sense, though I can kind of understand why RFID tags in driver's licences could be seen as a privacy and security concern. But as for the first two, and the listing of "AI alarmists" as neo-Luddites, well, now that's got me a bit perturbed.
Preventing an AI Apocalypse
The ITIF's complaint about alarmists touting an artificial intelligence apocalypse has to do with an open letter crafted by the Future of Life Institute earlier this year. Bill Gates, Stephen Hawking, Elon Musk, and other public figures signed the letter, which warned about the potential for AI to eventually escape from our control and emerge as an apocalyptic threat. At the same time, however, the signatories pushed for responsible AI oversight as a way to mitigate risks and ensure the "societal benefit" of the technology.
(Image: Avengers: Age of Ultron (2015))
But to the ITIF, this is just another attempt to stall important innovation. What's more, the institute claims that AI is apparently nothing to worry about because it's too far off in the future.
"Whether such systems will ever develop full autonomy is a debatable question, but what should not be debatable is that this possible future is a long, long way off (more like a century than a decade), and it is therefore premature to be worrying about 'Skynet' becoming self-aware," says Atkinson. "Indeed, continuing the negative campaign against artificial intelligence could potentially dry up funding for AI research, other than money for how to control, rather than enable AI."
Atkinson claims these sci-fi doomsday scenarios are making it harder for the public, policymakers, and scientists to support more funding for AI research.
Tellingly, the ITIF failed to mention the recently announced not-for-profit OpenAI research company co-founded by Elon Musk. The $US1 billion initiative is committed to "advancing digital intelligence in a way that's most likely to benefit humanity as a whole". The project joins other similar initiatives launched by prominent companies such as Google, Apple, and IBM. Several similar academic initiatives exist as well, including the Future of Humanity Institute at Oxford and the recently launched Centre for the Study of Existential Risk at the University of Cambridge.
These initiatives — not to mention the billions being spent on AI research and development around the world — show that Atkinson's concerns are overstated. What's more, and in the spirit of the FLI open letter, it's definitely not too early to be thinking about the potential risks. The survival of humanity could be at stake. The advent of strong artificial intelligence — and especially artificial superintelligence — could prove to be the most disruptive technological innovation in the history of our species, so it's critical that we get it right. Clearly, we're not talking about Ned Ludd's mechanised looms.
Death to the Killing Machines
And then there's the issue of autonomous weaponry. Part of the ITIF's complaint has to do with another open letter commissioned by the FLI, one calling for an outright ban on autonomous killing machines. Signatories included Hawking, Musk, Steve Wozniak, Noam Chomsky, MIT physicist Max Tegmark, and Daniel C. Dennett.
(Image: Robocop (1987))
The ITIF also had a problem with a United Nations meeting held earlier this year to consider a formal ban or other restrictions on killer AI, and a special report penned by Human Rights Watch and Harvard Law School that called for a moratorium on such weapons.
Atkinson says the argument against killing machines "overlooks the fact that the military clearly will benefit, because substituting robots for soldiers on the battlefield will increase a military's capabilities while substantially decreasing the risk to its personnel," adding that it's "possible that autonomous weapons could be programmed to engage only known enemy combatants, which may even lead to a reduction in civilian casualties."
Atkinson concludes by saying that, "the battle to ban autonomous weapons, much like the fight over artificial intelligence, works against the societal goal of building innovations that will improve human lives."
The ITIF is right to point out that robotic weapons have the potential to reduce deaths on the battlefield, but the notion that these systems will be capable of discriminating between combatants and civilians is tough to swallow. What's more, as with any military innovation, it has to be assumed that the enemy will eventually develop its own version. Finally, an AI arms race could lead to superintelligent systems that struggle for dominance beyond human comprehension and control. Should this happen, we'll have no choice but to sit back and hope we don't get destroyed in the process.
A fundamental problem with the ITIF is its unwavering faith in the ability of human societies to adapt to their technologies. To date, we've largely succeeded in doing so. We've even managed to survive — at least for now — the development of the first apocalyptic-scale weapon in the form of the nuclear bomb. Regrettably, other dangers await us in the future, so we had best be ready.
At the same time, we deserve the right to warn of these potential perils without fear of being branded Luddites.