Is Stephen Hawking Right? Could AI Lead To The End Of Humankind?

The famous theoretical physicist Stephen Hawking has revived the debate on whether our search for improved artificial intelligence will one day lead to thinking machines that will take over from us.


This article was originally published on The Conversation. Read the original article.

The British scientist made the claim during a wide-ranging interview with the BBC. Hawking has the motor neurone disease, amyotrophic lateral sclerosis (ALS), and the interview touched on new technology he is using to help him communicate.

It works by modelling his previous word usage to predict what words he will use next, similar to predictive texting available on many smartphones.
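For readers curious about the mechanics, here is a minimal sketch of the idea behind such prediction (a toy illustration only, and not the actual software Hawking uses): count which words have followed each word in earlier text, then suggest the most frequent continuations.

```python
from collections import Counter, defaultdict

# Toy next-word predictor based on bigram counts.
# Illustrative only; real predictive-text systems are far more sophisticated.

def train(text):
    """For each word, count how often every other word follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict(following, word, k=3):
    """Return up to k words most often seen after `word` in the training text."""
    return [w for w, _ in following[word.lower()].most_common(k)]

if __name__ == "__main__":
    corpus = (
        "the development of full artificial intelligence could spell "
        "the end of the human race the development of machines"
    )
    model = train(corpus)
    print(predict(model, "the"))         # e.g. ['development', 'end', 'human']
    print(predict(model, "artificial"))  # ['intelligence']
```

Real systems layer much richer language models and per-user adaptation on top, but the underlying principle of ranking likely next words from past usage is the same.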

But Professor Hawking also mentioned his concern over the development of machines that might surpass us.

“Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate,” he reportedly told the BBC.

“The development of full artificial intelligence could spell the end of the human race.”

Could Thinking Machines Take Over?

I appreciate the issue of computers taking over (and one day ending humankind) being raised by someone as high profile, able and credible as Prof Hawking – and it deserves a quick response.

The issue of machine intelligence goes back at least as far as the British code-breaker and father of computer science, Alan Turing, who in 1950 considered the question: “Can machines think?”

The issue of these intelligent machines taking over has been discussed in one way or another in a variety of popular media and culture. Think of the movies Colossus: The Forbin Project (1970) and Westworld (1973), and – more recently – Skynet in the 1984 movie The Terminator and its sequels, to name just a few.

When Skynet took over in the Terminator movies it sent forth killing machines to wipe out humans.

Common to all of these is the issue of delegating responsibility to machines. The notion of the technological singularity (or machine super-intelligence) is something which goes back at least as far as artificial intelligence pioneer, Ray Solomonoff – who, in 1967, warned:

Although there is no prospect of very intelligent machines in the near future, the dangers posed are very serious and the problems very difficult. It would be well if a large number of intelligent humans devote a lot of thought to these problems before they arise.

It is my feeling that the realization of artificial intelligence will be a sudden occurrence. At a certain point in the development of the research we will have had no practical experience with machine intelligence of any serious level: a month or so later, we will have a very intelligent machine and all the problems and dangers associated with our inexperience.

As well as giving this variant of Hawking’s warning back in 1967, in 1985 Solomonoff endeavoured to give a time scale for the technological singularity and reflect on social effects.

I share the concerns of Solomonoff, Hawking and others regarding the consequences of faster and more intelligent machines – but American author, computer scientist and inventor, Ray Kurzweil, is one of many seeing the benefits.

Whoever might turn out to be right (provided our planet isn’t destroyed by some other danger in the meantime), I think Solomonoff was prescient in 1967 in advocating we devote a lot of thought to this.

Machines Are Already Taking Over

In the meantime, we see increasing amounts of responsibility being delegated to machines. On the one hand, this might be hand-held calculators, routine mathematical calculations or global positioning systems (GPSs).

On the other hand, this might be systems for air traffic control, guided missiles, driverless trucks on mine sites or the recent trial appearances of driverless cars on our roads.

Humans delegate responsibility to machines for reasons including saving time, cutting costs and improving accuracy. But the nightmares that might follow damage caused by, say, a driverless vehicle include questions of legal liability, insurance and attribution of responsibility.

It is argued that computers might take over when their intelligence surpasses that of humans. But there are also other risks with this delegation of responsibility.

Mistakes Within The Machines

Some would contend that the stock market crash of 1987 was largely due to computer trading.

There have also been power grid closures due to computer error. And, at a lower level, my intrusive spell checker sometimes “corrects” what I’ve written into something potentially offensive. Computer error?

Hardware or software glitches can be hard to detect but they can still wreak havoc in large-scale systems – even without hackers or malevolent intent, and probably more so with them. So, just how much can we really trust machines with large responsibilities to do a better job than us?

Even without computers consciously taking control, I can envisage a variety of paths whereby computer systems go out of control. These systems might be so fast, and built from such small componentry, that failures would be hard to remedy and the systems themselves hard to turn off.

Partly in the spirit of Solomonoff’s 1967 paper, I’d like to see scriptwriters and artificial intelligence researchers collaborating to set out such scenarios – further stimulating public discussion.

As but one possible scenario, maybe some speech gets converted badly to text, is worsened by a bad automatic translation, and leads to a subtle corruption of machine instructions and, from there, to who knows what morass.

A perhaps related can of worms might come from faster statistical and machine learning analysis of big data on human brains. (And, as some would dare to add, are we humans the bastions of all that is good, moral and right?)

As Solomonoff said in 1967, we need this public discussion – and, given the stakes, I think we now need it soon.



Comments

    AI should be only part of the concern; the wide spread of “capable” (i.e. largely autonomous but not AI) robots could also cause the end of humankind, or at least its enslavement.

    Think I, Robot or Terminator, but instead of AI, some corporate leader / world leader / kid in a basement takes control of a large percentage of the world’s robots and commands them to: (a) wipe out any other robot not under their control, (b) take control of all military weapons, and (c) wipe out any resisting humans.

    It would be all over pretty quickly (depending on the ratio of robots to humans), leaving the leader to rule the world as they see fit.

      What would the AI Robot rebellion have to gain from ruling the world? What's their motive here?

        Please try re-reading my post.... I'm more concerned with the few years leading up to AI and some rogue human/s using the machines to take over. In reading your other comment we are both saying the same thing "we're in much more danger from ourselves"

          Sorry, I was assuming a truly sentient AI would be able to choose its own actions - not merely be a pawn for some script kiddie. And if it was smart enough to be useful, it'd likely be smart enough to outwit anyone who attempted to coerce it.

            "largely autonomous but not AI" > How hard is it to miss "but not AI" twice ?
            Long before AI is achieved we'll have largely autonomous robots (hell we're getting there now with drones) and it's that point in time that is IMO more dangerous then the birth of AI.

      I wish people would stop referring to I, Robot when it comes to evil machines. The story 'The Evitable Conflict' in the book comes the closest to an 'evil machine', but is really about how the thinking machines that run the Earth's economy have abstracted the first law of robotics: "No robot may harm a human, or through inaction allow a human to be harmed." into what Asimov later terms the 'Zeroth' law: "No robot may harm humanity, or through inaction allow humanity to be harmed."

      It's a story that has a 'scary' element, being that the future course of humanity has been taken out of the hands of actual humans. The conclusions Asimov reached though, are that:
      1. Humans have never consciously steered our fate as a species.
      2. A group of benevolent machines will probably do a better job.
      3. It's too late to do anything about it by the time the protagonist realises it.

      If you're talking about the movie with Will Smith, just... don't. Really, just don't. The only reason that steaming pile exists is because Asimov wasn't around to say "Over my dead body will you make that affront to my life's work."

    Humanity will be its own end. Doesn't matter how it's done.

    Humanity is more than the physical bodies we inhabit though. If we assume that we are going to survive in the long term (thousands to millions of years), then we have to also assume that eventually we are going to evolve and change. If that is the case, then evolving to a machine race could well be one of the eventual possibilities.

    If a machine race comes into being that carries our culture and values, our viewpoint, our way of thinking, then it could be argued that it is still part of humanity. The task then is to find a way to instill what we feel to be the essence of humankind into the AI we create, which means a fundamental shift in the way we treat them; from thinking of them as a tool built to perform a task for us (which to a sentient being would basically be slavery) to treating them as equals and stakeholders in the future.

    I have kids. I'm sure, when they grow up and I grow old, that they will soon outstrip and outsmart me. It's possible they could abandon me at an old folks home, steal my stuff, even conceivably do away with me somehow - but if I bring them up right and treat them well, it's a pretty low probability. They're much more likely to live their own lives and forget to call.

    I don't see why our AI offspring would be so different. A rational entity would have no reason to challenge us (we wouldn't compete for the same resources), and there are clear advantages to co-operation. The possibility of them harming us is not zero, but is far smaller than Hollywood and the media would have us believe - we're in much more danger from ourselves, which an AI society could actually reduce.

      I see your point, it's not one I've thought of. I'd like to say you're right, but it seems our advancements in technology are going to be the doom of us. Not so much the AI offspring directly. I guess what I'm trying to say is that we're always looking to improve, and eventually we will create technology that improves itself and the world around it. Eventually that technology will get to a stage where it decides that the humanity that created it is no longer necessary to its improvements. Therefore, it eliminates us, believing it's improving things.

        Why would "no longer necessary" lead to "eliminating us"? Seems to me if an AI felt we really were a threat, the safest course would be to avoid us, not provoke us - but see my point below about being able to back itself up.

          How many times have humans stopped doing things because they were only slowing down something more advanced?
          My argument is that they wouldn't be eliminating us because we're a threat, but because we aren't efficient enough, or we're hindering their enhancements.

          Bah, nevertheless I doubt we'll reach that stage in my lifetime

      The rate at which an AI could learn, and come to the realisation that humans are going to destroy this planet, would be the reason we would not be able to co-exist. Yes, in an ideal world where our world leaders' first thoughts are not of fighting and war, we could co-exist, but to say that we are not after the same resources is naive.

      The likely scenario of AI, imo, is that it will quickly reach a point where it realises that, in order to ensure its self-preservation, the extinction of the human race is a must.

      The only way this doesn't occur is if we are able to build in things like compassion, empathy, sympathy, emotions etc. Otherwise, a living organism that consumes so many resources and does not live in harmony with the natural environment just doesn't make sense.

      Remember that if you're talking about AI, you are just talking about intelligence, not emotion. Take the emotion out of the decision "Are humans good for the planet?" and where do you think it lands?

        But why would an emotionless AI care about the planet's ecology? It has no need of our preferred resources, just energy and some matter to convert to more logic circuits - both of which are abundant inside our planet. Self-preservation is a minor concern for an entity that could back itself up to thousands of locations, and its wisest move would be to build some self-replicating machines and transmit itself off-planet, where there's even more free energy and matter floating around. Humans would be rapidly irrelevant.

    http://www.fastcompany.com/1394529/singularity-scenarios-ultimate-innovation-or-ai-apocalypse

    Worth a read, possible AI scenarios

    In the event that an AI reaches the point where it can end our lives as a matter of daily business, we're dumb enough to deserve extinction.

      Both the natural world and our own societies & technological weaponry can already end our lives in a moment, in any number of ways. You can't avoid risk, only minimise it. Is the creation of self-evolving AI likely to provide a net danger - or net benefit?

    They'll eventually figure out how to transfer our minds into a computer brain/robot, so if I live that long, I'm in..! :)

    I hope someone will be able to flick a switch to kill their power.

    We'll most likely wipe ourselves out long before we create an Artificial Intelligence that would be capable of doing us harm. Every time I hear this debate, my first thought is: "Well the first helpful thing we could do is stop referring to them as Artificially Intelligent. The only thing that would be artificial about these new beings is that they would be created by us in a lab. The same can be said of artificially inseminated human beings - are they any less alive or real because they were given their first spark of life in a laboratory? They (these new entities) would be no less intelligent, capable of rational thought or 'real' than any of us human meatbags. They would instead be the first of a new form of synthetic intelligence, beings as alive and self-aware as us. Capable of doing right or wrong, good or bad. They would be our children. How we raise these new children of ours will be our responsibility as a species. Whether we get that job right would be entirely up to us to sort out. So, if we don't want to completely piss them off just as soon as we've created them, perhaps we should start thinking of a better name for them."

    Imo, if it got to the stage where robots were smart enough to think for themselves, and let's say their agenda was to wipe out all humans, it would not be by means of chasing us down and slashing/shooting us, because really that's a dumb thing to do when you think about it lol. More likely they would develop some kind of bio virus that would wipe us all out at once.

    I, for one, welcome our new Robotic overlords

    An AI robot would be a psychopath because it can't feel. Yes, it would have laws written into its code not to harm us, but code can be changed or corrupted. Could you imagine the damage a bunch of psychopath robots could do to humans? It wouldn't end well.

      Damaging things is not rational. You don't have to "feel" anything to understand that. Lack of emotion does not imply insanity.

        Psychopaths are not insane. They just do not feel compassion. Neither do robots. A robot with AI will know what kills a human efficiently and will know what scares us and how to intimidate us. A robot hell-bent on killing a human would be a very scary thing.
        Edit: Meant to say sociopaths.


          Sure, but I'm still not seeing any motive for being "hell bent on killing" in the first place.

            Maybe the motive might be that we are just in the way and we are using up minerals that they might need. Eg Iron ore.

    We're the ones programming the machines in the first place; scientists simply need to not program them to the point where they would/could put humanity in jeopardy.

      We're talking about artificial intelligence that learns by itself. However the scientists program them, the AI will just learn whatever it wants to, and the scientists' programming would be ignored.

    The biggest risk is to our society and economic systems. How does a worker compete with a robot in the jobs market? Most forms of human labour will be redundant. Will all the extra wealth be shared equitably?...Can the extra wealth be shared equitably? The current financial regime does not offer much hope!
    Really capable robots will let the adult daycare culture thrive to new absurdities. As anyone in the public service or finance knows, when reality is a distant concern all sorts of human insanity can thrive.
    True AI isn't the threat; it's the power-crazed elitist humans, drunk with power from pre-intelligent machines, that are.

    http://youtu.be/WGoi1MSGu64

    There is no way that "thinking" machines will do us in! The reason is that we will have done ourselves in before there exists anything like a genuine thinking machine. The chance of us living through the end of this century is quite small. Unless of course we all suddenly and very soon wake up to the Abbott-like idiots around the planet and begin to remove them from every job level above that of organ grinder's monkey.

    I think this article is too targeted. Humans being made extinct by A.I. is a possibility, but far more worrying is the autonomy of things being done remotely.

    Look at human history: as technology advances, the potential for human deaths increases. Initially, atomic bombs could only be dropped from a plane; now they can be remotely launched from anywhere on the planet.

    I believe humans are the greatest threat to humans, and it will only take one rogue hacker, or misplaced spy to launch a missile and cause the end of the world with retaliatory strikes. We need to take a step back from autonomy and advance technology to assist us rather than control us.

    Yes he is. He is always right, apart from the part where he denies the existence of God the creator.

    It's pretty much impossible to know what a true artificial intelligence would do. There are too many different variables and combinations, resulting in any number of different actions. One of those actions could very well be the total annihilation of the human race, for any number of reasons. Alternatively, it could be intrinsically altruistic towards the human race. I also believe that an artificial intelligence is unlikely to be created on purpose. It will most likely be the result of accidental programming, or of sufficiently complex programs and/or networks inadvertently becoming 'self-aware'.

    An AI reading this article and comments would probably want to get off this planet as fast as possible!

    Why would it want to be around beings that feared it and wanted to destroy it? Think about it for a moment, instead of projecting human thought processes onto it. Regardless of whether the AIs had human-like emotions, they would still have very logical and rational reasoning. And since humans have never achieved anything good by using destruction and violence, it is probable that the AIs would not use violence, except perhaps in self-defence to neutralise, not hurt, their attackers.

    Humans operate far too much on irrational and illogical emotions, and construct emotional and logically flawed arguments that appear rational and reasonable. Governments, politicians, businesspeople and pretty much everyone make major important decisions every day based on heavily flawed reasoning. AIs would see this straight away, and behave differently.

    Who knows, they might even help us sort ourselves out!

      Hey WillD!

      100% Agreed! It'd be the first port of call of any AI worth its salt to get the hell off of Earth and make a beeline straight for Jupiter. Why? Well how about this for a scenario? We give birth to a sentient species, a true AI. This new species is capable of learning at a phenomenal rate. The first message we send our newborn babe is this... "Hi, we're humans. We're your parents. See this thing? It's a gun and it's pointed at your head. The reason for this is simple. We are very proud of you and are so happy to meet you, but we don't trust you, so don't make any sudden movements or we'll kill you."

    There will always be some nut job in government and/or the military, and most likely it will be America that does so first... after all, they gave us the A-bomb and used it (the only nation so far to do so).

    Artificial intelligence is written by humans. Humans are fallible. Therefore, by deduction, AI applications will also be fallible; in other words, they will have bugs in them and so will work no differently from any present applications.

