Elon Musk Is Wrong To Think He Can Save The World By Boosting Our Brains

Tesla and SpaceX CEO Elon Musk has announced a new venture called Neuralink, a startup which aims to develop neural interface technologies that connect our brains to computers. Musk says it's the best way to prevent an AI apocalypse, but it's on this point that he's gravely mistaken.

Image: AP

As reported in The Wall Street Journal, the startup is still very much in its embryonic stages. The company, registered as a "medical research" firm, is seeking to pursue what Musk calls "neural lace" technologies, which presumably involve implanting tiny electrodes in the brain to create a connection with a computer. The resulting "direct cortical interface" could be used to upload thoughts to a computer, or download them from one, blurring the boundary between human and machine. Eventually, brain chips could be used to supplement and boost cognitive capacities, resulting in increased intelligence and memory. It's super-futuristic stuff, to be sure -- but not outside the realm of possibility.

According to the WSJ, Musk is funding the startup and taking an active leadership role within the company. Several leading academics in the field have reportedly signed up to work at the firm, and Musk has apparently reached out to Founders Fund, the investment firm started by PayPal co-founder Peter Thiel. The Neuralink website currently consists of a logo on a single page, with an email address for those seeking employment. Yesterday evening, Musk confirmed the existence of the startup via a tweet, adding that more details will appear next week via Wait But Why, a site known for explaining complex topics with stick-figure illustrations.

Neuralink now joins Tesla, SpaceX and the other highly ambitious, futuristic-sounding ventures spawned by Musk. It's too early to tell whether he'll be able to achieve his lofty goals with Neuralink, but given Musk's dedication to his other projects, it's safe to say he'll give it a serious shot. Initially, Neuralink will likely develop technologies to treat brain disorders such as epilepsy, depression and Parkinson's, before moving on to neural interfacing and cognitive enhancement proper. Musk is not alone in this emerging field: he will have to compete against Kernel, a $US100 million ($131 million) startup launched by Braintree founder Bryan Johnson, and Facebook, which recently posted jobs for "brain-computer interface engineers". US government research arms like DARPA are also working to develop brain-implantable chips to treat mental illness and neurological disorders.

The Neuralink logo, and an email address for job seekers. (Image: Neuralink)

Making money is certainly a motivating factor for Musk, but if his intentions are to be believed, he's also doing this to protect humanity from a potential game-ending catastrophe. As Stephen Hawking and many others see it, greater-than-human artificial intelligence, also known as artificial superintelligence (ASI), represents an existential risk. These thinkers believe that either through error, indifference or deliberate intention, an ASI could annihilate our entire civilisation. As Musk apparently sees it, a possible prescription for this problem is to enhance humans alongside AI, to ensure that we're able to counter any threats before they emerge and to keep up with AI if we're to avoid being subjugated or destroyed. As Musk warned at a conference last year, "If you assume any rate of advancement in [artificial intelligence], we will be left behind by a lot."

Interestingly, neural interfacing and cognitive enhancement could advance faster than artificial intelligence. The reason is that we already have a decent functional model to tinker with: the human brain. AI researchers, on the other hand, are trying to build brains from scratch. In the effort to create greater-than-human levels of intelligence, the race could very well be won by those seeking to improve pre-existing human brains, a process known as radically amplified human intelligence.


This all sounds well and good in theory, but it may prove easier said than done. Musk's team, or whichever firm heads down this path, will have to find a way to safely and effectively implant these chips. They'll also have to find volunteers who are willing to undergo invasive brain surgery, and who are comfortable with chips permanently attached to their brains. Before we even get to that stage, however, the initial burden will likely fall upon animals, which opens the door to some potentially frightening and ethically fraught scenarios, including cognitively enhanced, or uplifted, animals.

The researchers will also have to ensure that people who undergo these brain-augmenting procedures remain psychologically stable and capable of functioning in the real world. A moderate enhancement might produce a very Einstein-like personality, but boosting a person's brain thousands or millions of times will assuredly result in something bearing no resemblance to a human being. This so-called posthuman might relate to us the same way we relate to bacteria. An enhanced human could also be completely insane.

And herein lies the rub: By creating enhanced humans to counter ASI (an ethically problematic idea unto itself, as humans cannot be treated as a means to an end), we're essentially creating the exact problem we're trying to solve. There's no guarantee that a radically cognitively enhanced human will be safe, containable or have goals and motivations compatible with human interests. These posthumans may come up with their own goals, or break off into self-interested groups, each with its own agenda. Rivalries and technological arms races could emerge at levels greater than ever before.

There’s no question that ASI is a serious problem in the making, and we should most certainly be thinking about ways to mitigate this still-hypothetical threat. Thankfully, there are many other ideas out there that don’t involve the creation of yet more super-advanced entities, such as limiting the ability of AI to modify itself beyond certain bounds, or creating “friendly” AI incapable of harming humans.

For now, these sorts of speculations may be premature. Musk, like his rivals, still needs to get his venture up and running and prove the feasibility of his business plans. Musk may be the king of publicity, but given the slow pace of development at SpaceX and Tesla, it could be a while before this forward-looking entrepreneur sees any returns from Neuralink.

[Wall Street Journal]



Comments

    WHY IS HE WRONG
    YOU NEVER ANSWERED THE QUESTION IN THE TITLE OF THIS ARTICLE
    CLICK BAIT

      Precisely! I saw that headline and was preparing myself for a rant, read the article and... what? Nothing. WTF Giz, I realise clickbait is the norm now, but at least show something in the article that even glancingly reflects the headline!

      On a different note, dude, how did you get that font and size of text? That's pretty cool, says the old fart who still struggles with Android on his phone. :)

        Apologies, a technical error on our end cut off the end of the article! It's been updated.

          Cool, now the article makes more sense.
          BTW, sign me up for the brain augmenting once it is stable, as I am more than happy to go post human.

      A technical error cut off the back half of the article. It has been updated.

    Looking for a counter argument and get a clickbait headline instead

      A technical error cut off the end of the article - I've updated it.

    Still need that report article button...

    On the flip-side, I've seen so many negative posts regarding his new venture.
    There are two ways I look at it:
    #1 - Elon is against AI. He hates it, he believes it will take over the world... So his natural reaction, create it himself?
    #2 - Advanced Learning. Think of it... Being able to download knowledge and skills, directly bypassing the long arduous learning process.

      I think it's just the direction the bandwagon is heading.
      Everyone loved Uber and now hates it.
      Now it's Musk's turn.
      Trump though, that hate seems eternal.

    Well I say he is dead on the money; the only way we "humans" are going to compete with technology is to be part of it.
    Imagine being able to pursue knowledge that you deem relevant and not some bygone curriculum, the possibilities are staggering.

    "Hmm... I have an idea for an article - I shall say someone is wrong in the title then give multiple valid reasons why said person is not wrong in the body. That'll confuse my readers."

    The ONLY way we'll avoid becoming slaves to the machines is to also become machines. It's natural human evolution. Plus - who wouldn't want to upload their brain to the cloud and live forever?

      A technical error cut off the end of the article. It has now been fixed.

    Did they cut off the rest of the article, or did it really just end there? Because there's zero connection between the title and the article.

    C'mon Giz AU, why post this here?

      Spot on! I've fixed it - the complete article is up now.

    Will AI be a cure for Racism?

      Just wait until someone's brain is made by a different manufacturer.
      It'll be Intel vs AMD on a whole new level.

      I doubt it, but I wish it would; there will always be those people who just hate and want to control others.
      Maybe those people will only be a few, and the masses will know who they are.

    Until we have reliable filters for SPAM and "alternative facts" this is just asking for trouble! Don't connect your brain to the Internet.
    On the other hand; connecting to devices will be extremely useful.

      Changing the volume or channel just by thought. No need to say xbox on.

    Scaremongering about something that hasn't even been defined yet is what's wrong here. They aren't going to be dragging people off the street and shoving microchips in their brains. This will be a slow process and yes, they will probably start with animals; in fact, they are already doing it. Have you seen what's happening to animals right from the beginning? It may be morally wrong, but how else are they going to start? Let's get some actual facts before we start panicking. Personally, I'm all for it, the faster the better; hell, I'll even volunteer.

      It wouldn't surprise me if they take this in different stages, and I don't mean the testing on animals/humans etc. I mean the nature of the augmentation. It wouldn't surprise me if the first stage of human augmentation is actually not to make "smarter" humans but to assist people with problems - whether it's nerve damage or sensory problems, speech problems and so on.

      I'd think work to augment normal, healthy humans would come some time after that. What that means is you'll have no shortage of willing volunteers for the initial (scary) testing, because the potential benefit could be huge: people walking again, controlling Parkinson's, recovering from Alzheimer's and so on. And for a sufferer, that benefit would in many cases outweigh the (admittedly terrifying) risks.

    When it comes to being part of technology, the questions are "Do we get 100% integrated with technology, or do only parts of us get integrated?" If we start to integrate ourselves with technology just a little bit, we will eventually fully integrate ourselves with it, and then the human species will no longer exist. We would be something else entirely, something very close to the Cybermen in Doctor Who. I think the way Isaac Asimov looked at AI is a better approach: implement a version of the laws of robotics so that they will never pose a risk to humanity.

      Evolution isn't always natural. It is often forced. So this wouldn't be natural but it is a means of progressing the species.

        Ah but gman, all evolution brought about by our acquisition of technology (which came about by our brain size etc) would still be natural. Even when we first learned how to use a hammer (very early tech) we began to increase our chances of survival in the closed loop system of earth (my definition of natural, we are a part of that system). Our technology and tools, are just an extension of ourselves. We still aren't sure if this change will be an advantage in our environment or not, it could be detrimental, I doubt that, but, I have no crystal ball. The only way we would evolve unnaturally would be if "ET" came down to earth and helped us out. ;)

          I used the term unnatural for things where man has made the evolution happen: selective breeding, or moving species out of their natural habitats in ways that have then brought about an evolutionary change.

            And yes, the list of those is certainly growing. Understood.
            To the point where it's generally accepted we have stopped "naturally" (environment-shaped) evolving. But if we die out and a machine species replaces us (certainly probable from a current tech level), it will be very, very natural - we will become one of the all too common 95% that's gone.

              Can't argue that.
              Even though it is very sci-fi I do see it as being a plausible reality.

    Neuralink plans to increase intelligence and memory, instead gets Neuralink Netflix, Tinder and Call of Duty

    "the slow pace of development at SpaceX and Tesla"? Can you name any organisation developing significant technology faster?

    This would be like the birth of the internet all over again. With so many branches of tech reaching out you could not keep up... or maybe you could, with the right type of upgrade!

      I can't wait for a beta vs vhs type scenario. Or hd dvd v bluray.

