Given how much time Elon Musk spends trying to save the world, it's almost surprising that he thinks we're all just living in someone's video game anyway. Or it would be, if he weren't well-known for dramatic predictions, and if he hadn't also added that there's a chance we'll become pets to a superintelligent AI and so need to start figuring out how to physically merge with technology to save ourselves.
Musk, the founder of Tesla and SpaceX (and maybe the inspiration for Robert Downey Jr.'s Iron Man) told attendees of the Code Conference he thinks there's a "one in billions" chance we're actually living in reality. Here's what he said, courtesy of Vox's Ezra Klein:
The strongest argument for us being in a simulation probably is the following. Forty years ago we had Pong. Like two rectangles and a dot. That was what games were.
Now, forty years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it's getting better every year. Soon we'll have virtual reality, augmented reality.
If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, OK, let's imagine it's 10,000 years in the future, which is nothing on the evolutionary scale.
So given that we're clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we're in base reality is one in billions.
Tell me what's wrong with that argument. Is there a flaw in that argument?
Musk certainly isn't the first person to make the argument. As Klein notes, it's spelled out in a paper by the Oxford philosopher Nick Bostrom, who happens to be one of Silicon Valley's favourite philosophers and sources of thought experiments. Bostrom is famous for his research into so-called existential risk, and his latest book, Superintelligence: Paths, Dangers, Strategies, catalogues the ways a powerful AI could take us all out.
Musk seems to have been inspired by Bostrom to fear the robot uprising, which he once tweeted was "potentially more dangerous than nukes". Late last year, he founded OpenAI, an AI research non-profit that wants to develop friendly, rather than harmful, AI. This isn't to say that he thinks all AI is out to get us. At the conference, he hinted that there was really only one AI company -- he wouldn't name names, but it's probably Google -- that worried him.
But back to the simulation theory, which, if true, might mean little of this matters anyway. As per Musk, future generations are likely to run simulations of people who lived in the past. (Probably true, especially given that we're already doing this at a very rudimentary level: Stanford scientists, for example, have used simulations to track early human migration.) Because it is the future, the computers will be more powerful and the simulations better. (Probably true.) If the simulations were good enough, the simulated people would probably be conscious. (This is partly a philosophy question, but it's possible in theory.) And if all of that is possible, what's to say it hasn't already happened?
Computer simulations aside, there are still dangers in the possibly fake world we live in. Musk was optimistic at points, saying that we're already cyborgs with superpowers, since parts of our lives play out online and our access to information through social media is beyond our ancestors' wildest dreams. More depressingly, he predicted that if we don't keep progressing quickly, we might become "like a pet or like the house cat" for AI. The best defence against this, he suggested, is to physically merge with technology via a "neural lace" that would help us control it with our brains. Odd as it may sound, it's not out of the realm of possibility: basic brain-controlled interfaces already exist, and researchers have injected a flexible electronic circuit into a mouse's brain.