Will We Ever Be Able To Upload A Mind To A New Body?
In Altered Carbon, the body no longer matters. As one character quipped: “You shed it like a snake sheds its skin.” That’s because the human consciousness has been digitised, and can be moved between bodies – both real and synthetic.
The Netflix series takes place hundreds of years in the future, but references versions of technology that have been in development for years, like brain mapping, human–AI neural links, and mind uploading to computers. Millions of dollars have been pumped into technological ideas that promise that, one day, our brains will be turned digital. That said, there are those who believe the human mind is too complex, and our consciousness too nuanced, to be recreated in a digital product. And none of that even touches on what would happen if someone’s digitised mind were placed into real human flesh.
Will we ever be able to upload our minds into other bodies? Furthermore, should we? And honestly, if we ever achieved such a feat, could we even call ourselves human anymore?
Is this truly feasible, at least at some point in the future? At this point, we do not have a remotely complete picture of which features of the brain give rise to thinking, personality, sensations, and so on. If those features involve microscopic, quantum phenomena, then a precise upload of you cannot be created, as there is a fundamental limit on what we can know about a quantum system (see Heisenberg’s Uncertainty Principle). This would mean you can’t really upload your mind. Sorry.
2. But suppose a computational duplicate of your brain can be created, and suppose uploading technology were perfected. Should you go to Mindsculpt? No.
Suppose that, while at Mindsculpt, the uploading process does not involve destroying your biological brain. Wouldn’t you still be there, on the table, after your brain was scanned and “transferred” to a program? Why would your mind “shift” from your brain to the computer, leaving your still-working biological brain behind? This seems magical to me. A more reasonable hypothesis is that you are still on the table, and a program has been created that specifies the workings of your brain. (I discuss this in more detail here.)
If this seems at least plausible to you, you definitely shouldn’t sign away your legal rights to an upload, or sign up for the kind of uploading that is likely to be developed (“destructive uploading”)! Destructive uploading destroys the biological brain in an effort to measure its computational features. And nondestructive uploading may simply be a total waste of money, or worse. If the program were downloaded, maybe it would create a duplicate of you that lives in a computer simulation, or in a body like yours, trying to take your job or date your partner. After all, it will be convinced it is you. And you might have legal obligations to take care of it!
3. Finally, we have little sense of whether AI can be conscious. The jury is out. So if you aim to transfer your mind, it may be that your upload isn’t conscious — it doesn’t feel like anything to be them. This again suggests that the upload isn’t really you. (See David Chalmers.) (See: “Can a Machine Feel?”)
And we haven’t even delved into the question: what is a mind? To know whether you survive uploading, it would be important to have a sense of what a mind is. If the mind is just the brain, then, you do not survive. Some say the mind is a program. But a program, like an equation, is an abstract entity. An equation doesn’t exist anywhere, although inscriptions of it do. Presumably, your mind is a concrete thing, having a location. Perhaps you are a program instantiation — some thing, running a program (akin to a computer, in some sense). But what is that thing? This just brings us back to my original question: what is a mind?
Research Fellow at the Future of Humanity Institute at Oxford University
There are two problems with uploading our minds into another body, one philosophical and one technical.
The philosophical problem is whether this is a transfer of personal identity, some kind of cloning/copying making a new person with the same or different identity, or something else entirely. Many people think the answer is intuitively obvious and get very annoyed when others strongly disagree. Myself, I agree with the philosopher Derek Parfit who famously analysed similar cases (often involving Star Trek-like teleporters) in his book Reasons and Persons (1984): there is no true fact of the matter about who is the “real” continuation of the original person, what matters is at most psychological connectedness.
The technical problem is of course how to actually do it. Currently our minds emerge from or are our brain activity. We need some way of creating a brain that does the same. I have written a fair bit on “whole brain emulation”, the hypothetical future simulation of entire brains in software. That would involve scanning a brain (possibly destructively), reconstructing the neural network from the scan, and running the simulation on a suitable computer. In Altered Carbon this is achieved by having a cortical stack implanted, presumably constantly scanning the brain neural network using some form of nanotechnological fibre network.
There is a lot of information in a brain: about 100 billion neurons, each with about 8,000 synaptic connections to other neurons we need to keep track of, and quite likely several pieces of information for each synapse. To scan that you would need a 3D resolution of a few nanometres: actually doable with current microscope technology, albeit only for small (a few micrometres) pieces of frozen/plastinated brain tissue. The connectivity and synapse information may run to maybe 10 petabytes; the actual 3D scan is far bigger. This, and running all the relevant electrochemical processes, may sound like an extremely tall order. Today, it is impossible. But it is relevant to remember that Moore’s law (in various forms) and science march on – if things continue for a few decades this may not be too hard.
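Those storage numbers can be sanity-checked with a quick back-of-envelope calculation, using the figures from the paragraph above; the bytes-per-synapse value is an assumption chosen for illustration (a connection target plus a few state variables):

```python
# Back-of-envelope estimate of connectome storage, from the figures above.
neurons = 100e9           # ~100 billion neurons
synapses_per_neuron = 8000
bytes_per_synapse = 12    # assumption: target neuron ID plus a few state values

total_synapses = neurons * synapses_per_neuron          # 8e14 synapses
connectome_bytes = total_synapses * bytes_per_synapse   # ~9.6e15 bytes

print(f"{total_synapses:.1e} synapses")
print(f"{connectome_bytes / 1e15:.1f} petabytes")       # ~10 PB, matching the text
```

Even modest assumptions about per-synapse state land in the ~10 petabyte range, which is why the raw 3D scan (at nanometre voxels) dwarfs the extracted network description.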
Scanning a living brain is likely much harder than scanning a neatly frozen brain since everything is moving about, there is an active immune system that tries to interfere, and the scanning method had better not interfere with function. I think it is physically possible but likely much harder. We need not just great nanotechnology but also a fine understanding of how to interface brains with electronics on a truly vast scale: it is going to take much longer than getting the first uploads to work from frozen scans.
There is an extra issue in Altered Carbon, and that is the recipient bodies. These are either grown clone bodies or donor bodies, nearly totally organic. I can easily imagine (given the above assumptions of technology) how a computer running the brain software could control a biological body, but I have a far harder time imagining how to download a brain network into a recipient brain. Somehow we need to rearrange all the connections to correspond to the downloaded person. That is an extremely tricky thing even with mature nanotechnology, since many neurons stretch across much of the entire brain and would now need to be re-routed. This is the part I definitely don’t believe is realistic.
There is an obvious ethical issue when using donor bodies — what do you do with “homeless” minds? And many other issues easily come to mind: can you lose your right to have a body? Can you sell it? Rent it? Is it a bad thing that you can treat it as disposable? (The roleplaying game Eclipse Phase plays with many of these issues, from refugees who had to flee a disaster by uploading and now are software, over “the clanking masses” who cannot afford organic bodies and have to make do with shoddy robot bodies, to fancy designer bodies for those who can afford them.) But this does not really say anything about whether it is moral to move between bodies, just that there is a lot of social context that matters. It is like discussing healthcare: how it is provided, to whom, which practices are allowed, mandatory, or banned – all these things have huge ethical implications but don’t really tell us whether medicine itself is moral.
Some people would say the whole idea is wrong because it is against nature: humans are not meant to be immortal body-hoppers. But that something is natural does not mean it is moral or acceptable: we do fight cancer and cruelty, despite both being parts of natural life. A slightly more sophisticated version argues that human life is shaped by its mortality and other features, so a change would make us something not-human, and hence it is not good for humans to aspire to it. But by this argument, monkeys should not seek to become humans enjoying art, science, religion, sport and so on, since such higher pleasures are not monkey pleasures. This seems backwards to me: we can enjoy monkey pleasures too, and we have removed many of the limitations of being a monkey. Similarly, being a potentially immortal body-hopper removes some pretty big limitations in life yet still allows us to limit ourselves if we so choose. It is possible to turn off one’s stack.
Many like to say that it is the human limitations that make us human. But the world of Altered Carbon is full of limitations – just because people are potentially immortal doesn’t mean heartbreak, cruelty, oppression, faulty technologies and all the other bad things worth fighting against have disappeared. I suspect that no matter how advanced we become we will always bump into limitations that we will struggle with.
Some thinkers worry that if we enhance ourselves we will try to control everything in our lives. Everything about ourselves will be a potential object of design and engineering, and this will both make it less authentic and make us frustrated as we constantly tinker with it. There is some truth to this: we are suffering from a fair bit of “first world problems” today with our free and flexible lives (compared to our ancestors). But that just seems to mean we should cultivate the virtue of enhancing ourselves wisely and responsibly, rather than being unable to enhance ourselves at all.
What should we call ourselves?
Would it make sense to call oneself human if one is actually moving from cortical stack to cortical stack? I think so. Being human is about a particular perspective on the world, a human-style mind with its peculiar biases, motivation system, ways of thinking and feeling, and so on. A working mind transfer will transfer our human minds to whatever substrate can run them – pure software, a robot, a biological body – and that means that it will now at the very least house a human mind.
We can hope that this allows us to extend and improve our minds so we can properly call ourselves transhuman and maybe one day even posthuman, but I have a suspicion that even far-future superintelligences may still use the word “human” to denote what they are.
Neuroscientist and founder of Carboncopies Foundation
Probably, yes. For most scientists the default hypothesis is that everything about our mind and conscious awareness is an emergent consequence of the operations carried out by the biological machinery of the brain. That hypothesis has withstood every test so far. In principle, if we can understand those operations and implement them, then that new implementation will again produce the mind and conscious awareness.
The principal operators in the brain are called neurons. Those tiny processors know nothing except that incoming excitation or inhibition changes their membrane potential. At some threshold they respond with an electric discharge of their own. Together, the orchestration of billions of neurons is the information processor that plays the symphony that is our experience of being.
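The threshold-and-discharge behaviour described above is often modelled with a leaky integrate-and-fire neuron — a standard textbook abstraction, not anything specific to this article; the parameter values here are purely illustrative:

```python
# Minimal leaky integrate-and-fire neuron: incoming excitation nudges the
# membrane potential, which decays ("leaks") toward rest; crossing a
# threshold triggers a discharge (spike) and a reset.
def simulate(inputs, threshold=1.0, leak=0.9, v_rest=0.0):
    v = v_rest
    spikes = []
    for step, current in enumerate(inputs):
        v = leak * (v - v_rest) + v_rest + current  # decay toward rest, add input
        if v >= threshold:
            spikes.append(step)  # the neuron fires...
            v = v_rest           # ...and resets
    return spikes

# Steady sub-threshold excitation accumulates until the neuron discharges.
print(simulate([0.3] * 10))  # → [3, 7]
```

Real neurons are vastly more complicated electrochemically, but this captures the core idea in the paragraph: each unit only integrates excitation and inhibition until a threshold tips it into firing.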
Uploading a mind involves recording enough data about a person’s working brain to replicate its cognitive functions mathematically, then to implement those mathematical functions in another device that will produce the same mind when it is active. Because you can then move a mind from brain to brain (device), we say you have achieved substrate-independence. The neural engineering used to do that is called whole brain emulation.
The biggest challenge is to access the brain’s relevant data. In neural engineering today, the first steps towards whole brain emulation are efforts to build neural prostheses – replacement parts for small parts of the brain. Examples are retinal prostheses and the ambitious hippocampal neural prosthesis project at the Berger Lab of the University of Southern California, which should enable patients with a malfunctioning hippocampus to regain the ability to create new memories. If you can replace each part of the brain with an equivalent neural prosthetic device, that is in essence the same as whole brain emulation. At a later stage, when we know how to recover dynamic function from 3D structure scans as well, there may be wholesale methods for whole brain emulation from such scans – yet another path to mind uploading.
Yes, at Carboncopies we think it’s very important that we do. It’s already pretty easy to see why medical neural prostheses are useful and desirable to cure a patient’s brain dysfunction. Beyond that, neural prostheses hold the promise of enhanced abilities. Imagine, for example, that you can explicitly choose which things to remember and which ones to forget when you have a hippocampal neural prosthesis. It’s also pretty easy to see why mapping and modelling brain functions is important to science, to medicine, and to learning what could be implemented in artificial intelligence.
When our skills at building neural prostheses reach the point where whole brain emulation is possible, we reach a very special milestone. Up to that point, the need to interact with the remaining biological parts of a brain means that there are hard limits on the sort of cognitive functions that are possible. For example, biological neurons will never be able to react fast enough to be aware of or to respond to events that happen at the microsecond scale, a dynamic part of our universe that only our machines can presently experience. Overcoming these and other limitations is the “human thriving” argument for mind uploading. It means that we gain the choice to expand our range of possible experience and capabilities, to participate in more, instead of ceding the bigger picture to our machines as we remain constrained to a narrow subset of what the universe has to offer.
There is also an important “survival argument” for mind uploading. If we cannot modify our mental abilities then we are constrained to an evolutionary niche. If the history of evolution has shown anything, it has shown that such niches tend to disappear. Present developments, for example in artificial intelligence, suggest that human thought might soon play an ever-decreasing and minor role in the future society of intelligences. Adapting to change may well be a survival requirement.
What should we call ourselves?
I can’t say that I’ve ever thought of a human who uploads as anything other than human. When a person has prosthetic limbs or a cochlear implant we don’t call them anything other than human. So, I imagine that we can still call ourselves human, even if we had prosthetic bodies. If anything, augmenting our abilities through technology has always been a uniquely human characteristic.
Miguel A. L. Nicolelis
Professor of Neurobiology, Biomedical Engineering and Psychology, and Neuroscience, Duke University
No, because our minds are not digital at all. The mind depends on information embedded in the brain tissue that cannot be extracted by digital means.
It will never happen. This is just an urban sci-fi myth that has no scientific merit or backing. It only diminishes the unique nature of our human condition — by comparing it to digital machines — and instils fear in people who do not know better.