Rapid progress in research involving miniature human brains grown in a dish has led to a host of ethical concerns, particularly when these human brain cells are transplanted into nonhuman animals. A new paper evaluates the potential risks of creating “humanised” animals, while providing a pathway for scientists to move forward in this important area.
Neuroscientist Isaac Chen from the Perelman School of Medicine at the University of Pennsylvania, along with his colleagues, has written a timely Perspective paper published today in the science journal Cell Stem Cell.
The paper was prompted by recent breakthroughs involving the transplantation of human brain organoids into rodents — a practice that’s led to concerns about the “humanisation” of lab animals.
In their paper, the authors evaluate the current limits of this biotechnology and the potential risks involved, while also looking ahead to the future. Chen and his colleagues don’t believe anything needs to be done right now to limit these sorts of experiments, but that could change once scientists start to enhance certain types of brain functions in chimeric animals, that is, animals containing cells from another species, in this case human brain cells.
In the future, the authors said, scientists will need to be wary of inducing robust levels of consciousness in chimeric animals and even stand-alone brain organoids, similar to the sci-fi image of a conscious brain in a vat.
Human brain organoids are proving to be remarkably useful. Made from human stem cells, brain organoids are tiny clumps of neural cells which scientists can use in their research.
To be clear, pea-sized organoids are far too basic to induce traits like consciousness, feelings, or any semblance of awareness, but because they consist of living human brain cells, scientists can use them to study brain development, cognitive disorders, and the way certain diseases affect the brain, among other things. And in fact, during the opening stages of the Zika outbreak, brain organoids were used to study how the virus infiltrates brain cells.
The use of brain organoids in this way is largely uncontroversial, but recent research involving the transplantation of human brain cells into rodent brains is leading to some serious ethical concerns, specifically the claim that scientists are creating part-human animals.
Anders Sandberg, a researcher at the University of Oxford’s Future of Humanity Institute, said scientists are not yet able to generate full-sized brains due to the lack of blood vessels, supporting structure, and other elements required to build a fully functioning brain. But that’s where lab animals can come in handy.
“Making organoids of human brain cells is obviously interesting both for regenerating brain damage and for research,” explained Sandberg, who’s not affiliated with the new paper. “They do gain some structure, even though it is not like a full brain or even part of a brain. One way of getting around the problem of the lack of blood vessels in a petri dish is to implant them in an animal,” he said. “But it’s at this point when people start to get a bit nervous.”
The concern, of course, is that the human neural cells, when transplanted into a nonhuman animal, say a mouse or rat, will somehow endow the creature with human-like traits, such as greater intelligence, more complex emotions, and so on. The question emerges as to whether such an animal deserves higher moral consideration, and by consequence, a different set of rules in terms of what’s permissible in an experimental setting.
Research done in 2013, for example, highlighted the potential for human brain cells to affect the behaviour and capacities of nonhuman animals. In experiments, neuroscientists replaced around half of the cells in the mouse brain with human cells, mostly glia, during development. As my coverage noted at the time, this intervention enhanced the rodents’ cognitive capacities, including “augmentations to memory, learning, and adaptive conditioning.”
To be clear, this tweak made the mice smarter, but smarter by mouse standards. In terms of their overall intelligence, the mice were still light-years removed from human levels of intelligence. By the same token, and as the authors wrote in their paper, the “likelihood that current iterations of organoids or animals transplanted with these organoids can develop more complex cognitive abilities is minute.”
Furthermore, the authors argue against using terms like “humanised” or “humanisation” to describe these animals or the processes involved, as these labels carry no real meaning from a biological point of view. In this excellently argued excerpt from the new Perspective, the authors make their case:
What does it mean for an animal transplanted with a brain organoid to become more “human”? A common reply is that the chimera has assumed more “human-like” characteristics, such as self-awareness, advanced cognitive capacities, and complex emotions. However, these traits may not be unique to human beings, a notion that has been discussed with respect to a range of non-human species and artificial intelligence. An alternative approach often taken in the ethical literature on chimeras is to consider whether these entities have attained moral equivalence to human beings.
Several theories of moral status have been debated. One argument is that individuals should be accorded respect simply because of their membership in the human race. Extending this reasoning to human tissue, cells, and genes leads to the conclusion that a chimera harbouring any human components would automatically have its moral status elevated. Most scholars have rejected this line of reasoning because it does not hold up to biological scrutiny and is based on species-centric bias with no other logical basis.
A more logically consistent argument for moral status is based on the premise that entities that are capable of making rational, conscious choices possess intrinsic moral value. Achieving this ability in a chimeric animal is a very high bar. Making rational, conscious choices may require the use of language to enable meta-cognition (i.e., thinking about thinking) and awareness of one’s own mental states. Moreover, these abilities require years of social and educational nurturing to develop, even in humans.
Therefore, discussions of the moral equivalency of “extreme chimeras”, self-aware animals with rational thought, may be less germane to the immediate issue of brain organoid transplantation.
Personally, I believe Chen and his co-authors are spot-on with this assessment (I also love that they brought up AI in this context, as I’ve argued along similar lines).
Indeed, the claim of “humanisation” smacks of human exceptionalism and speciesism. What would be more helpful, as the authors argue, is the identification of actual biological traits and functions that could potentially raise the moral value of a chimeric nonhuman animal (assuming, of course, that one doesn’t already grant animals the highest moral worth and oppose animal experimentation altogether).
To that end, Chen and his co-authors say scientists should be on the lookout for “augmentations of discrete brain functions” in chimeric animals that would raise their moral status. When asked which specific traits would warrant concern, Chen said there’s no answer to this question at the moment.
“It is something that requires additional thought and discussion in the scientific community working closely with bioethicists and other stakeholders,” Chen wrote to Gizmodo. “As we discuss in the paper, it is likely the case that cognitive function is in a higher tier requiring more discussion than basic neurological functions like sensation and vision.”
Those “higher tiers” encompass many types of cognitive function, including attention and memory. As the authors explain, however, the enhancement of “cerebral function in brain organoid chimeras is currently not feasible, and the degree to which an animal’s brain can be enhanced, even if it were to be completely replaced with human neurons, has limits.”
Indeed, it’s important to point out that scientists are very far from being able to evoke these higher-tier characteristics in, for example, a rodent brain.
Orly Reiner, a molecular geneticist at the Weizmann Institute of Science in Israel, explained it to Gizmodo this way: “these technologies cannot offer human-cell driven brain activity over-riding host-based activity.” In other words, the presence of human cells in a nonhuman animal’s brain doesn’t just suddenly make it a human brain — it’s not like scientists have suddenly created a human mind trapped inside an animal’s body.
At the same time, however, Reiner, who wasn’t involved with the new paper, said a “crucial time point” will occur once “these barriers are crossed.” It’s “important to embrace new technologies,” she said, but only when done in parallel with the promotion of “public and ethical discussions to help in defining risks and adding precautions,” Reiner wrote to Gizmodo.
A concern expressed in the Perspective paper was the potential for an elevated level of consciousness to arise in chimeric animals. Most scientists agree that virtually all animals exhibit signs of being conscious, and the 2012 Cambridge Declaration on Consciousness basically says as much, but Chen said there’s an important clinical distinction to be made between an animal’s level of arousal and the content of its awareness.
“Animals and humans that are healthy usually have normal levels of arousal, [meaning] how awake they are,” he explained to Gizmodo.
“The content of consciousness describes an organism’s perception of their internal and external environments and often what most people refer to as ‘consciousness.’ However, the content of consciousness is not binary [i.e. it’s either present or absent] and [it] covers a range of such perceptions, which are different among different animal species. In our paper, we discuss specifically qualities such as self-awareness and rational decision-making, which are at the high end of the range of the content of consciousness,” wrote Chen.
Accordingly, Chen and his co-authors are concerned that in the future, these biotechnologies might confer a more profound or robust level of self-awareness and sentience in a chimeric animal. The “prospect of enhanced brain chimeras is likely still off in the future,” said Chen, but “a chimeric animal that developed evidence of self-awareness and rational decision-making — again, very unlikely at this time — would warrant a pause in the research and a broader discussion across society about the direction of this research.”
To which he added: “I think brain organoid generation and transplantation have enormous potential as models of brain development and brain diseases and as possible substrates for repairing damaged brain circuitry. I think there are hazards in carrying out any form of this research without thinking about the potential outcomes, developing methods for monitoring for these outcomes, and reviewing the broader ramifications of these outcomes.”
Biologist Alysson Muotri from UC San Diego School of Medicine said he doesn’t have a problem with transplanting human brain organoids inside the brains of mice. For Muotri, a much more important issue is the speculative prospect of creating a sentient, self-aware human organoid outside the body — a concern shared by Chen and his colleagues.
“To me, that’s where the ethical concerns will become more evident and [will] need further discussions,” Muotri, who’s not affiliated with the Perspective paper, told Gizmodo. “The potential generation of a ‘self-aware’ human brain organoid in a vat might create an ethical dilemma. If this is possible, we will need to address the moral status of these organoids. Similar to human subjects in clinical trials or animals for research, the moral status will dictate how and when to use human brain organoids in future research,” he said.
Arnold Kriegstein, a professor of stem cell and tissue biology at UC San Francisco, also not involved with the Perspective, agrees with the authors that current methods using human brain organoids aren’t resulting in profound cognitive changes.
“But, the authors wisely raise the possibility that certain chimeras could potentially enhance specific brain functions,” Kriegstein told Gizmodo in an email. “This concern has already been realised by the creation of mice…[that] perform better than native mice on a battery of learning tests,” in reference to the aforementioned study from 2013.
“Thus, the potential of chimeras to manifest enhancement of brain function is a near-term concern, while sentience or self-awareness is a distant future possibility. Nonetheless, as thoughtfully articulated here, the potential behavioural consequences of chimerisation should be a concern of all scientists creating human-animal brain chimeras.”
Reiner said now is a good time to advance this conversation, as the technology is developing rapidly. A “discussion is needed to set the grounds for future advances that will require setting specific frames for the use of these new technologies so that they will be based on concrete data and not ‘fake news,’” she said.
Sandberg thought the arguments posed in the paper were “reasonable,” but he’s more concerned with another issue: the potential for suffering.
“Can organoids suffer?” he asked. “This is a tough question. A random ball of neurons with no senses [does] not seem likely to have an inner world. Yet we can get pain from nerve damage that causes the wrong cross-connections — but that may still require a conscious brain to interpret the random signals as pain. So while I think an organoid in a petri dish is less likely to suffer than a non-random nervous system, when implanting it into a brain it might actually make the brain experience bad things.”
Sandberg said experimenters need to be aware of this possibility and assess whether their chimeric animals might be in distress. He warned that “odd states” arising from the “potentially exotic wiring of these systems might be harder to notice than regular pain.” Ultimately, “some caution is needed,” according to Sandberg. “It does not mean we should stop researching, but we may need to look carefully.”
Sandberg brought up a quote from the English philosopher Jeremy Bentham, who famously said, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”