Are We Overthinking The Dangers Of Artificial Intelligence?

Futurists and science fiction authors often give us overly grim visions of the future, especially when it comes to the Singularity and the risks of artificial superintelligence. Scifi novelist David Brin talked to us about why these dire predictions are often simplistic and unreasonable.

Illustration: Jim Cooke

Our civilisation faces no shortage of risks in the foreseeable future, from the devastating effects of climate change to a myriad of technological hazards set to appear in the coming decades. Among these existential threats, perhaps none is more frightening than the prospect of artificial superintelligence. Its advent could knock us from our perch, forever relegating us to a secondary role, or worse, complete irrelevance.

Or, as David Brin argues, we just might be able to prevent such a disaster from occurring in the first place. The key, he says, is to not get bogged down in the pessimism and nihilism that's suddenly become fashionable. What's more, he worries that we've bought into overly simplified visions of the future; we like to talk about the destination, but rarely do we talk about the journey.

Brin, a futurist and author of such novels as Startide Rising and Existence, recently joined me in an email exchange where we discussed these topics.

George: Before we get to your views on building safe artificial superintelligence, let's talk about the late Michael Crichton, a fellow scifi author. You've been an outspoken critic of his work for as long as I can remember. What is it that you find so problematic about his cautionary tales? And how would you compare his pessimism to popular conceptions of technology and the future?

David: First off, I liked Michael and wish he had lived much longer, to keep infuriating me while pushing the sales and box office for sci fi to stratospheric heights. Indeed, I started modelling one of my characters in Existence after him, long before I knew he was ill. (One sign of respect: his character is the one who "gets the girl.")

On the other hand, I did find the basic pattern of his craft to be simplistic and repetitive. It always - without break - consisted of "here's another area where scientists want to take us, picking up tools that ought to be reserved to God. Such hubris will be punished!" Oh, and after the climax, as the book or movie ends, everything gets restored to status quo (except the dead). Nothing - especially society - is ever allowed to change.

I attended the infamous speech Crichton gave at a meeting of the AAAS (American Association for the Advancement of Science) in which he spent an hour essentially repeating: "I don't hate science at all! I love science! Hey, they're just stories!"

Well, no, they aren't just stories. If you count the number of pages in which Crichtonian characters rail against technological overreach, it soon becomes clear that tales like Jurassic Park and Prey and Westworld are propaganda. Nothing wrong with that! Dire warnings are important. It is possible to make awful mistakes, as we charge into the future. Another dyspeptic grouch — Dr. Jared Diamond — scared sense into millions with Collapse. George Orwell and others showed — the highest form of science fiction is the self-preventing prophecy.

But there is a difference between a useful cautionary tale and cookie-cutter dystopias. The latter do not offer useful warnings, only cliches.

Alas, Crichton's plots were never really about scientific hubris. Every calamity he portrays happens because some pridefully arrogant techno feat is rushed in secrecy, thus evading the criticism and reciprocal accountability that is the heart and soul of real science. His scenarios depend on scientists evading this process (and they do, sometimes!). Secrecy is the villain. I only wish more readers came away understanding that. Indeed, I doubt that Michael ever did.

George: Overly simplistic stories about dinosaurs and nanotechnology run amok are one thing, but artificial superintelligence (ASI) is poised to be a horse of a different colour. I recently posted an article on io9 about how ASI might eventually give birth to itself, the result of a recursively self-improving AI. My fear is that this burgeoning intelligence (or intelligences) could go on to destroy or seriously degrade human civilisation owing to our inability to understand or control it. After reading my article, you complained that I omitted the process issue — that I was invoking a Crichtonian science-goes-wrong scenario and that my analysis assumed a certain level of secrecy. You went on to add that, "Efforts to develop AI that are subject to the enlightenment process of reciprocal scrutiny and criticism might see their failure modes revealed and corrected in time." I find this to be a very intriguing and encouraging idea, so I'm hoping you can elaborate on that.

David: The whole idea behind dire warning tales is that such 'self-preventing prophecies' might help us evade terrible errors, poking sticks ahead of us as we rush into the future, finding and discussing and avoiding the minefields and quicksand pools and snake pits as we hurry toward a possible better era. Dire warnings that only repeat hoary cliches, or that portray good as pretty while evil has red, glowing eyes... these aren't helpful. Nor are demigod Chosen One saviours. We're not going to avoid disasters that way.

Indeed, when the most common lessons are "ALL your neighbours are sheep, no democratic institution can ever be trusted, and science is always wrong," such despairing tales do far more harm than good.

But sure, let's go after failure modes! Some democratic institutions can go bad, or veer toward Big Brother, so let's have movies about that! Even a decent society might fail to adapt well to, say, instant genetic testing (Gattaca), or video game addiction (Existenz), or uneven access to surveillance technology (Enemy of the State), and so on. Such films and novels provoke discussion and course corrections.

That's what happened in the 1970s when concerns about genetic research led to a moratorium under the "Asilomar Process," when the biology community soberly appraised the risks and delivered a suite of procedures and best practices... so they could aim for the win-win, both increased safety and care... and rapid scientific progress. And yes, science fictional warnings helped to make that happen!

We need that win-win process to work! I am involved in similar discussions right now, concerning SETI. Such open and reciprocal criticism and negotiation is what adults do, and it is the only way we will get both grownup concern about dangers plus the rapid progress we need, in order to save humanity and the world. The problem with Hollywood, and cable news, and yes, much written sci fi as well, is that the very notion of adult process is anathema! It is seen as a killer of what Hollywood needs most... drama! Fast-paced peril and pure heroes opposing pure evil!

Hence, every time you see an alien or an AI, it is either out to get us, or else in danger from our own government. Well, there have been exceptions. Lucy and Her were flicks that tried to evade such cliches. But it is rare.

Hence the irony. We will watch AI very carefully, having been shown the potential downside repeatedly in films. Mayhaps that will help us to avoid the worst (or at least most cliched) failure modes? Ah, but then there is the rub... those nascent AIs will have watched all our dire warning films! And what might that suggest to them?

George: Just to be clear, I tend to not base my predictions on scifi, though there are times when the genre can be extremely illuminating. Rather, when I do my foresight work — like trying to figure out how and why an ASI might destroy us — I employ an analysis that assumes a kind of low-regulation baseline scenario. I take the pessimistic, and admittedly unrealistic, view that nothing (or very little) will be done to address current technological trends and their ultimate manifestations. The resulting analysis, which may sound doom-and-gloom, has the same intention as so-called scare-mongering scifi — it's intended to prompt discussion and facilitate action such that the prediction will not come true.

David: I think your overall approach, which is to ponder 'what if no one acts to deal with looming problems?', is of course one of the important thought experiments. We have certainly seen, already, that humanity can stare an onrushing dilemma right in the face and, like a deer in the headlights, do nothing till it's too late. So it was with the rise of Hitler. So it was with the 300-year tobacco addiction. So it was with 6,000 years of the filthy habit of inherited hierarchy and feudalism. So it appears now to be, with the cult of climate change denialism.

At the opposite extreme are examples of human societies acting with alacrity and determination. No one, in 1980, would have imagined that every species of whale would still be around — their numbers still increasing — in 2014. The ozone hole problem demanded less sacrifice by vested interests than dealing with the greenhouse effect will, so we simply went ahead and fixed it! When genetic engineering started scaring everyone, thirty years ago, biologists called a moratorium and met at Asilomar to thrash out a set of Best Practices that has worked astonishingly well, allowing us to both have rapid science and much more confidence in laboratory safety.

So which will happen with the rise of AI? Isaac Asimov, in his robot novels, foresaw a worried public demanding fierce safeguards, so that the famous Three Laws were embedded into the basic architecture of positronic brains, so deeply and thoroughly that they could never be torn out. Barring some traumatic event, I don't see that kind of relentless attention to safeguards arising in today's pell-mell infotech industry. But some good minds are exploring how it might be done.

George: You say that secrecy is anathema to the development of safe technologies, and I wholeheartedly agree. As you've pointed out for years, open societies are conducive to criticism and error correction, and they diminish the tendencies for societies and institutions to become inefficient, corrupt, or self-serving. But this isn't necessarily where I see our society headed. Sure, surveillance technologies are increasingly stripping us of our privacy, but corporations and military organisations are becoming more secretive than ever.

David: I am accused of being too moderate and pragmatic. But I am fiercely and militantly moderate! Dogmas of both left and right seem lobotomizing to me and we should be way more multidimensional, by now, than a single silly "axis" metaphor.

I refuse to go into a froth or panic because corporations and governments know about me — especially since nothing on Earth will prevent elites from seeing. But we have to become ferociously determined that we will be empowered to look back! If the public has the means — and habits — of sousveillance, protecting whistleblowers, for example, then all future conspiracies will have to remain small, because they will be able to trust only a few shadows and a few henchmen at a time.

That is what transparency means. Not an end to all shadows… or an end to privacy… but a growing sense that we can catch voyeurs and peeping toms with our own cameras and make them back off. That elites will tread carefully, because any abused person might cry out for attention from the world. It won't be equal — life never was — but we may be able to preserve the gains of the Enlightenment Experiment, and maybe advance them further.

George: I know you're familiar with the work that DARPA is doing, along with the extremely well-funded efforts of companies to develop amoral and predatory Wall Street AI trading programs. What do you have to say about this — and the frightening prospect of having to live alongside ASI in perpetuity — a highly malleable, dynamic, and diverse existential threat?

David: Our prospects depend on which of the six general categories of AI methodology will actually bring artificial intelligence into being. In Existence I describe number six, which gets the least attention, even though it is the only approach we know of that has ever made intelligent beings: us. And that method is lengthy childhood, interacting physically with the real world.

If that turns out to be the one that works (and after all, it has worked ten billion times during the last million years), then there is a real chance for a "soft landing." That AI beings will have to spend years in small robot bodies, fostered into human homes. And by the time they achieve autonomy, they will think of themselves as human beings — who happen to be built of silicon and steel. And who, despite adolescent rebellion, still wind up loving (and not stomping) mum and dad.

We can do that. Foster (and love) new intelligences. We know how to do that.

Follow me on Twitter: @dvorsky


Comments

    How Djarapa made Wulgaru

    'First time everybody in our tribe were happy; happy until an old fool called Djarapa tried to make magic songs
    over wood, stone and red ochre paint.'
    Tula went on to explain how old Djarapa cut a piece of wood from a green tree and this he trimmed to look like
    the body of a human being. Next he made the legs and arms from pieces of wood and for knee and arm joints
    he used rounded stones that he had gathered up in a riverbed. After putting them together with red-ochred
    string he painted ears, nose and eyes in the thing and as he painted he chanted a very magic song that had
    been taught to him by a now dead tribal medicine man.
    'Good song-man,' said Tula, and when I asked did he know the chant he looked horrified and explained that it
    was, 'proper danger song...suppose wrong man get that song then straight-away him kill everybody, one
    time...all-a-same lightning.'
    All day and night Djarapa chanted, and beat his tap-sticks over the lifeless bits of wood and stone. He chanted
    till his throat became dry and hoarse, and at last, in despair, he gathered up his hunting weapons and went his
    way.
    And as he walked along Djarapa heard a loud clanking sound with the crashing of many trees behind him, and
    looking around he beheld the terrible monster of wood and stone shambling along on his trail. Its arms twisted
    and beat the air and he noticed that these flailing arms were the things that beat down the trees as it moved
    along. The creaking noises he heard came from the creature's knees and arm joints, and every now and then the
    monster opened its mouth and snapped its jaws together so that the white cockatoos that followed overhead
    screeched a warning to the other animals and birds of the bush. When this happened the newly created thing
    opened its eyes so that they all blazed, 'all-a-same stars'.
    'Djarapa dead-fright now when that devil-devil big-eye been come close up alonga his track,' Tula explained,
    'but when Djarapa stop then that Wulgaru thing stop too and when him run that devil-devil run too. Djarapa
    can't lose it.'
    Trembling with fear Djarapa noticed that the thing of his creation was only following him by sight, so he leapt
    behind a dark-green bush, then doubling back on his trail, he stood behind a large salmon-gum tree as the
    shuffling monster went on, finally to enter a big lagoon. Watching that spot in amazement the terrified creator
    of monsters beheld it emerge from the opposite bank and move off into the jungle beyond.
    'Proper fool that Djarapa. Man make Devil-Devil...now he can’t kill it...make trouble for everybody,' bitterly
    commented Tula.

    Source: 'Tales from the Aborigines', by Bill Harney

    Somehow... I don't think you can be trusted with a spanner.

    A true AI isn't just a matter of code and data; a true AI will need the ability to self-evolve, to become more than its original design. When machines can solve problems and overcome obstacles creatively, then we are in trouble.

    Totally agree. Toasters are no threat to humanity. Not if we are vigilant and use common sense. Posted to the Elon Musk facebook group http://www.facebook.com/groups/ElonMusk

      Thanks for sharing, now please go back to watching your toaster vigilantly...

    I don't think we are overthinking the dangers; if anything, we are underthinking them.
    The classic example is the movie trope of a robot apocalypse... which imagines the end of the world will come from an evil AI that thinks humans have no value, finds us illogical, or attacks in self-defence.

    The greatest threat to survival is not intelligence, it's stupidity: the stupidity of a developer who introduces a glitch, a hacker who breaks the system, or an artificial intelligence that simply does something stupid. Artificial Stupidity will ruin the world.

    Telephone and banking networks go down because of simple failures; do you honestly think an AI will do better? The AI itself becomes another point of failure that may (and probably will) operate with minimal supervision. Google doesn't want steering wheels in its cars because it doesn't trust people; that's messed up if they can't see a single reason to have a steering wheel in case of failure.

    This article about AI is very interesting too http://ai.business/2016/05/29/how-artificial-intelligence-can-change-education/

    I think the best way would be to not program in the "killhuman.exe" subroutine.

    I wish the interviewer had raised the Manhattan Project as an example of super-secret massive development projects. Brin seems to avoid this obvious reality - that there will most likely be only one superintelligence emerging - there can really only be one - and it had better be the US that gets there first... Just like in WW2, the idea that the Nazis would have the A-bomb first is just unthinkable...
