For the past 24 hours, scientists have been lining up to sign this open letter. Put simply, the proposal urges that humanity dedicate a portion of its AI research to “aligning with human interests”. In other words, let’s try to avoid creating our own, mechanised Horsemen of the Apocalypse.
While some scientists might roll their eyes at any mention of a Singularity, plenty of experts and technologists — like, say, Stephen Hawking and Elon Musk — have warned of the dangers AI could pose to our future. But while they might urge us to pursue our AI-related studies with caution, they're a bit less clear on what exactly it is we're supposed to be cautious about. Thankfully, others have happily filled in those gaps. Here are five of the more menacing destruction-by-singularity prophecies our brightest minds have warned against.
According to Stuart Armstrong, a philosopher and Research Fellow at the Future of Humanity Institute at Oxford:
The first impact of [Artificial Intelligence] technology is near total unemployment. You could take an AI if it was of human-level intelligence, copy it a hundred times, train it in a hundred different professions, copy those a hundred times and you have ten thousand high-level employees in a hundred professions, trained out maybe in the course of a week. Or you could copy it more and have millions of employees… And if they were truly superhuman you’d get performance beyond what I’ve just described.
Daniel Dewey, a research fellow at the Future of Humanity Institute, builds on Armstrong’s train of thought in Aeon Magazine. After all, when and if humans do become obsolete, we’ll become little more than pebbles in a robot’s metaphorical shoes.
“The difference in intelligence between humans and chimpanzees is tiny,” [Armstrong] said. “But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.”
… “The basic problem is that the strong realisation of most motivations is incompatible with human existence,” Dewey told me. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”
You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness.
AI doesn’t need the explicit intent of exterminating us to be scary. As Mark Bishop, professor of cognitive computing at Goldsmiths, University of London, told The Independent:
“I am particularly concerned by the potential military deployment of robotic weapons systems — systems that can take a decision to militarily engage without human intervention — precisely because current AI is not very good and can all too easily force situations to escalate with potentially terrifying consequences,” Professor Bishop said.
“So it is easy to concur that AI may pose a very real ‘existential threat’ to humanity without having to imagine that it will ever reach the level of superhuman intelligence,” he said. We should be worried about AI, he explained, but for the opposite reasons given by Professor Hawking.
Or maybe we’ll see the end coming long before it arrives. By then, though, we’ll be too dependent on the machines to even attempt shutting them down. Bill Joy, cofounder and Chief Scientist of Sun Microsystems, highlights this danger in Wired, quoting a passage that appears in his essay:
What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
There’s something called the “grey goo” scenario, which essentially postulates that if robots start perpetually reproducing, we’ll simply get squeezed out amidst the massive mecha expansion. And if they need humans to power their out-of-control masses, as Discovery points out, we’re screwed.
If nanotechnology machines — which can be a hundred thousand times smaller than the diameter of a human hair — figure out how to spontaneously replicate themselves, the consequences for humanity would naturally be dire [source: Levin]. Especially if the research funded by the US Defense Department gets out of control: researchers there are attempting to create an Energetically Autonomous Tactical Robot (EATR) that would fuel itself by consuming battlefield debris, which could include human corpses [source: Lewinski].
If nanotechnology did develop an appetite for human flesh — or for some of the other things we rely on for survival, like forests or machinery — it could devour everything on the planet in a matter of days. These hungry mini-robots would reduce our blue-and-green home to “grey goo,” a term for the unidentifiable particles left behind after the nanocritters eat buildings, landscapes and, well, everything else.