Tesla and SpaceX CEO Elon Musk has announced a new venture called Neuralink, a startup which aims to develop neural interface technologies that connect our brains to computers. Musk says it's the best way to prevent an AI apocalypse, but it's on this point that he's gravely mistaken.
As reported in The Wall Street Journal, the startup is still very much in its embryonic stages. The company, registered as a "medical research" firm, is seeking to pursue what Musk calls "neural lace" technologies, which presumably involve the implanting of tiny electrodes in the brain to create a connection with a computer. The resulting "direct cortical interface" could be used to upload or download thoughts to a computer, blurring the boundary between human and machine. Eventually, brain chips could be used to supplement and boost cognitive capacities, resulting in increased intelligence and memory. It's super-futuristic stuff, to be sure -- but not outside the realm of possibility.
According to the WSJ, Musk is funding the startup and taking an active leadership role within the company. Several leading academics in the field have reportedly signed on to work at the firm, and Musk has apparently reached out to Founders Fund, the investment firm started by PayPal co-founder Peter Thiel. The Neuralink website currently consists of a logo on a single page, with an email address for those seeking employment. Yesterday evening, Musk confirmed the existence of the startup via a tweet, adding that more details will appear next week on Wait But Why, a site that explains complex topics with simplistic stick figures.
Long Neuralink piece coming out on @waitbutwhy in about a week. Difficult to dedicate the time, but existential risk is too high not to.
— Elon Musk (@elonmusk) March 28, 2017
Neuralink now joins Tesla, SpaceX and other highly ambitious, futuristic-sounding ventures spawned by Musk. It's too early to tell if he'll be able to achieve his lofty goals through Neuralink, but given Musk's dedication to his other projects, it's safe to say he'll give it his best shot. Initially, Neuralink will likely develop technologies to treat brain disorders such as epilepsy, depression and Parkinson's, but it could then move on to technologies specific to neural interfacing and cognitive enhancement. Musk is not alone in this emerging field, and he will have to compete against Kernel, a $US100 million ($131 million) startup founded by Braintree founder Bryan Johnson, and Facebook, which recently posted jobs for "brain-computer interface engineers". US government research arms like DARPA are also working to develop brain-implantable chips to treat mental illness and neurological disorders.
Making money is certainly a motivating factor for Musk, but if his intentions are to be believed, he's also doing this to protect humanity from a potential game-ending catastrophe. As Stephen Hawking and many others see it, greater-than-human artificial intelligence, also known as artificial superintelligence (ASI), represents an existential risk. These thinkers believe that through error, indifference or deliberate intention, an ASI could annihilate our entire civilisation. As Musk apparently sees it, a possible prescription for this problem is to enhance humans alongside AI, ensuring that we're able to counter any threats before they emerge and keep up with AI if we're to avoid being subjugated or destroyed. As Musk warned at a conference last year, "If you assume any rate of advancement in [artificial intelligence], we will be left behind by a lot."
Interestingly, advances in neural interfacing and cognitive enhancement could progress faster than artificial intelligence. The reason is that we already have a decent functional model to tinker with: the human brain. AI researchers, on the other hand, are trying to build brains from scratch. In the effort to create greater-than-human levels of intelligence, the race could very well be won by those seeking to improve pre-existing human brains, a process known as radically amplified human intelligence.
With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence, also known as intelligence amplification (IA). It's an open question as to which will come first, but a technologically boosted brain could be just as powerful -- and just as dangerous -- as AI.
This all sounds well and good in theory, but it may prove easier said than done. Musk’s team, or whichever firm heads down this path, will have to find a way to safely and effectively implant these chips. They’ll also have to find volunteers willing to undergo invasive brain surgery, and who are comfortable with chips permanently attached to their brains. Before we even get to that stage, however, the initial burden will likely fall upon animals, which opens the door to some potentially frightening and ethically fraught scenarios, including cognitively enhanced, or uplifted, animals.
The researchers will also have to ensure that people who undergo these brain-augmenting procedures remain psychologically stable and capable of functioning in the real world. A moderate enhancement could result in an Einstein-like personality, but boosting a person’s brain thousands or millions of times will assuredly result in something bearing no resemblance to a human being. This so-called posthuman might relate to us the same way we relate to bacteria. An enhanced human could also be completely insane.
And herein lies the rub: By creating enhanced humans to counter ASI (an ethically problematic idea unto itself, as humans cannot be treated as a means to an end), we’re essentially creating the exact problem we’re trying to solve. There’s no guarantee that a radically cognitively enhanced human will be safe, containable or motivated by goals compatible with human interests. These posthumans may come up with their own goals, or break off into self-interested groups, each with its own agenda. Rivalries and technological arms races could emerge at levels greater than ever before.
There’s no question that ASI is a serious problem in the making, and we should most certainly be thinking about ways to mitigate this still-hypothetical threat. Thankfully, there are many other ideas out there that don’t involve the creation of yet more super-advanced entities, such as limiting the ability of AI to modify itself beyond certain bounds, or creating “friendly” AI incapable of harming humans.
For now, these sorts of speculations may be premature. Musk, like his rivals, still needs to get his venture up and running and prove the feasibility of his business plans. Musk may be the king of publicity, but given the slow pace of development at SpaceX and Tesla, it could be a while before this forward-looking entrepreneur sees any returns with Neuralink.