Are We Overthinking The Dangers Of Artificial Intelligence?

Futurists and science fiction authors often give us overly grim visions of the future, especially when it comes to the Singularity and the risks of artificial superintelligence. Scifi novelist David Brin talked to us about why these dire predictions are often simplistic and unreasonable.

Illustration: Jim Cooke

Our civilisation faces no shortage of risks in the foreseeable future, from the devastating effects of climate change to the myriad technological hazards set to appear in the near future. Among these existential threats, perhaps none is more frightening than the prospect of artificial superintelligence. Its advent could knock us from our perch, forever relegating us to a secondary role, or worse, complete irrelevance.

Or, as David Brin argues, we just might be able to prevent such a disaster from occurring in the first place. The key, he says, is to not get bogged down in the pessimism and nihilism that’s suddenly become fashionable. What’s more, he worries that we’ve bought into overly simplified visions of the future; we like to talk about the destination, but rarely do we talk about the journey.

Brin, a futurist and author of such novels as Startide Rising and Existence, recently joined me in an email exchange where we discussed these topics.

George: Before we get to your views on building safe artificial superintelligence, let’s talk about the late Michael Crichton, a fellow scifi author. You’ve been an outspoken critic of his work for as long as I can remember. What is it that you find so problematic about his cautionary tales? And how would you compare his pessimism to popular conceptions of technology and the future?

David: First off, I liked Michael and wish he had lived much longer, to keep infuriating me while pushing the sales and box office for sci fi to stratospheric heights. Indeed, I started modelling one of my characters, in Existence, after him, long before I knew he was ill. (One sign of respect: his character is the one who “gets the girl.”)

On the other hand, I did find the basic pattern of his craft to be simplistic and repetitive. It always – without break – consisted of “here’s another area where scientists want to take us, picking up tools that ought to be reserved for God. Such hubris will be punished!” Oh, and after the climax, as the book or movie ends, everything gets restored to the status quo (except the dead). Nothing – especially society – is ever allowed to change.

I attended the infamous speech Crichton gave at a meeting of the AAAS (American Association for the Advancement of Science) in which he spent an hour essentially repeating: “I don’t hate science at all! I love science! Hey, they’re just stories!”

Well, no, they aren’t just stories. If you count the number of pages in which Crichtonian characters rail against technological overreach, it soon becomes clear that tales like Jurassic Park and Prey and Westworld are propaganda. Nothing wrong with that! Dire warnings are important. It is possible to make awful mistakes as we charge into the future. Another dyspeptic grouch — Dr. Jared Diamond — scared sense into millions with Collapse. George Orwell and others showed that the highest form of science fiction is the self-preventing prophecy.

But there is a difference between a useful cautionary tale and cookie-cutter dystopias. The latter do not offer useful warnings, only cliches.

Alas, Crichton’s plots were never really about scientific hubris. Every calamity he portrays happens because some pridefully arrogant techno feat is rushed in secrecy! Thus evading the criticism and reciprocal accountability that are the heart and soul of real science. His scenarios depend on scientists evading this process (and they do, sometimes!). Secrecy is the villain. I only wish more readers came away understanding that. Indeed, I doubt that Michael ever did.

George: Overly simplistic stories about dinosaurs and nanotechnology run amok are one thing, but artificial superintelligence (ASI) is poised to be a horse of a different colour. I recently posted an article on io9 about how ASI might eventually give birth to itself, the result of a recursively self-improving AI. My fear is that this burgeoning intelligence (or intelligences) could go on to destroy or seriously degrade human civilisation owing to our inability to understand or control it. After reading my article, you complained that I omitted the process issue — that I was invoking a Crichtonian science-goes-wrong scenario and that my analysis assumed a certain level of secrecy. You went on to add that, “Efforts to develop AI that are subject to the enlightenment process of reciprocal scrutiny and criticism might see their failure modes revealed and corrected in time.” I find this to be a very intriguing and encouraging idea, so I’m hoping you can elaborate on that.

David: The whole idea behind dire warning tales is that such ‘self-preventing prophecies’ might help us evade terrible errors, poking sticks ahead of us as we rush into the future, finding and discussing and avoiding the minefields and quicksand pools and snake pits as we hurry toward a possible better era. Dire warnings that only repeat hoary cliches, or that portray good as pretty and evil as having red, glowing eyes… these aren’t helpful. Nor are demigod Chosen One saviors. We’re not going to avoid disasters that way.

Indeed, when the most common lessons are “ALL your neighbours are sheep, no democratic institution can ever be trusted, and science is always wrong,” such despairing tales do far more harm than good.

But sure, let’s go after failure modes! Some democratic institutions can go bad, or veer toward Big Brother, so let’s have movies about that! Even a decent society might fail to adapt well to, say, instant genetic testing (Gattaca), or video game addiction (Existenz), or uneven access to surveillance technology (Enemy of the State), and so on. Such films and novels provoke discussion and course corrections.

That’s what happened in the 1970s, when sci fi concerns about genetic research led to a moratorium under the “Asilomar Process,” and the biology community soberly appraised the risks and delivered a suite of procedures and best practices… so they could aim for the win-win, both increased safety and care… and rapid scientific progress. And yes, science fictional warnings helped to make that happen!

We need that win-win process to work! I am involved in similar discussions right now, concerning SETI. Such open and reciprocal criticism and negotiation is what adults do, and it is the only way we will get both grownup concern about dangers plus the rapid progress we need, in order to save humanity and the world. The problem with Hollywood, and cable news, and yes, much written sci fi as well, is that the very notion of adult process is anathema! It is seen as a killer of what Hollywood needs most… drama! Fast-paced peril and pure heroes opposing pure evil!

Hence, every time you see an alien or an AI, it is either out to get us, or else in danger from our own government. Well, there have been exceptions. Lucy and Her were flicks that tried to evade such cliches. But such exceptions are rare.

Hence the irony. We will watch AI very carefully, having been shown the potential downside repeatedly in films. Mayhaps that will help us to avoid the worst (or at least most cliched) failure modes? Ah, but then there is the rub… those nascent AIs will have watched all our dire warning films! And what might that suggest to them?

George: Just to be clear, I tend to not base my predictions on scifi, though there are times when the genre can be extremely illuminating. Rather, when I do my foresight work — like trying to figure out how and why an ASI might destroy us — I employ an analysis that assumes a kind of low-regulation baseline scenario. I take the pessimistic, and admittedly unrealistic, view that nothing (or very little) will be done to address current technological trends and their ultimate manifestations. The resulting analysis, which may sound doom-and-gloom, has the same intention as so-called scare-mongering scifi — it’s intended to prompt discussion and facilitate action such that the prediction will not come true.

David: I think your overall approach, which is to ponder “what if no one acts to deal with looming problems,” is of course one of the important thought experiments. We have certainly seen, already, that humanity can stare an onrushing dilemma right in the face and, like a deer in the headlights, do nothing till it’s too late. So it was with the rise of Hitler. So it was with the 300-year tobacco addiction. So it was with 6,000 years of the filthy habit of inherited hierarchy and feudalism. So it appears now to be with the cult of climate change denialism.

At the opposite extreme are examples of human societies acting with alacrity and determination. No one, in 1980, would have imagined that every species of whale would still be around — their numbers still increasing — in 2014. The ozone hole problem demanded less sacrifice by vested interests than dealing with the greenhouse effect will, so we simply went ahead and fixed it! When genetic engineering started scaring everyone, forty years ago, biologists called a moratorium and met at Asilomar to thrash out a set of Best Practices that has worked astonishingly well, allowing us to have both rapid science and much more confidence in laboratory safety.

So which will happen with the rise of AI? Isaac Asimov, in his robot novels, foresaw a worried public demanding fierce safeguards, so that the famous Three Laws were embedded into the basic architecture of positronic brains, so deeply and thoroughly that they could never be torn out. Barring some traumatic event, I don’t see that kind of relentless attention to safeguards arising in today’s pell-mell infotech industry. But some good minds are exploring how it might be done.

George: You say that secrecy is anathema to the development of safe technologies, and I wholeheartedly agree. As you’ve pointed out for years, open societies are conducive to criticism and error correction, and they diminish the tendencies for societies and institutions to become inefficient, corrupt, or self-serving. But this isn’t necessarily where I see our society headed. Sure, surveillance technologies are increasingly stripping us of our privacy, but corporations and military organisations are becoming more secretive than ever.

David: I am accused of being too moderate and pragmatic. But I am fiercely and militantly moderate! Dogmas of both left and right seem lobotomizing to me, and we should be way more multidimensional, by now, than a single silly “axis” metaphor.

I refuse to go into a froth or panic because corporations and governments know about me — especially since nothing on Earth will prevent elites from seeing. But we have to become ferociously determined that we will be empowered to look back! If the public has the means — and habits — of sousveillance, protecting whistleblowers, for example, then all future conspiracies will have to remain small, because they will be able to trust only a few shadows and a few henchmen at a time.

That is what transparency means. Not an end to all shadows… or an end to privacy… but a growing sense that we can catch voyeurs and peeping toms with our own cameras and make them back off. That elites will tread carefully, because any abused person might cry out for attention from the world. It won’t be equal — life never was — but we may be able to preserve the gains of the Enlightenment Experiment, and maybe advance them further.

George: I know you’re familiar with the work that DARPA is doing, along with the extremely well-funded efforts of companies to develop amoral and predatory Wall Street AI trading programs. What do you have to say about this — and about the frightening prospect of having to live alongside ASI, a highly malleable, dynamic, and diverse existential threat, in perpetuity?

David: Our prospects depend on which of the six general categories of AI methodology will actually bring artificial intelligence into being. In Existence I describe number six, which gets the least attention, even though it is the one approach that we know has ever made intelligent beings. Us. And that method is lengthy childhood, interacting physically with the real world.

If that turns out to be the one that works (and after all, it has worked ten billion times during the last million years), then there is a real chance for a “soft landing.” That AI beings will have to spend years in small robot bodies, fostered into human homes. And by the time they achieve autonomy, they will think of themselves as human beings — who happen to be built of silicon and steel. And who, despite adolescent rebellion, still wind up loving (and not stomping) mum and dad.

We can do that. Foster (and love) new intelligences. We know how to do that.

Follow me on Twitter: @dvorsky

