Are you a postdoctoral researcher with a hankering to help the US government hone its brain-warfare skills? Well, Uncle Sam has just the job for you!
The Office of the Director of National Intelligence (ODNI) recently posted an opportunity online for a postdoctoral fellow who will “examine how BNNs work and determine if they can be manipulated”. For those of you unfamiliar with neuroscience acronyms, BNN stands for “biological neural network” – the nervous system of a living creature.
The fellowship was posted by the National Intelligence Council on Zintellect, a job-listing site with openings available through ORAU, a consortium of universities that runs postdoctoral research and education programs for the scientific community.
Specifically, the gig entails building a biological neural network that is trained to classify images, then trying to fool it into misclassifying them. In other words, building a brain and figuring out ways to manipulate it. The listing includes two possible ways to manufacture a biological neural network: “in vitro with neuron cell cultures or slices” or “with detailed physiological models of BNNs”. It does note that, in this particular program at least, human testing isn’t authorised.
“Are the principles of adversarial attacks on BNNs different from those on [artificial neural networks]?” the fellowship description states. “Can BNNs be made more robust to such attacks? Known adjacent phenomena are optical illusions, confirmation bias, and (more obtusely) camouflage.”
With artificial neural networks, or ANNs, an adversarial attack is when an attacker deliberately causes a machine-learning model to make a mistake, typically by adding a subtle but carefully crafted perturbation to the input. This fellowship description is asking someone to find out whether living creatures are vulnerable to similar adversarial attacks. But you can’t mount the same type of attack on a biological neural network (read: a living creature) that you can on a machine; it isn’t a replicable model. Because an ANN’s inputs and outputs are numeric, while a BNN consists of living cells, the two networks don’t process images in the same way and thus cannot be manipulated to misclassify them using the same techniques.
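For readers curious what the artificial-side version looks like, here is a minimal sketch of a gradient-sign attack (in the spirit of the well-known fast gradient sign method) on a hypothetical toy classifier. The model, weights, and "image" below are all illustrative inventions, not anything from the ODNI listing; real attacks target deep networks, but the principle of perturbing the input against the gradient is the same.

```python
import numpy as np

# Hypothetical toy "image classifier": a single linear unit.
# Real targets are deep networks; this is just to show the principle.
w = np.array([0.5, -0.3, 0.8])   # learned weights (made up)

def classify(x):
    """Return 1 if the score w.x is positive, else 0."""
    return int(w @ x > 0)

x = np.array([1.0, 1.0, 1.0])    # a clean "image" of three pixels
print(classify(x))               # classified as class 1

# Gradient-sign attack: nudge each pixel against the gradient of the
# score with respect to the input. For a linear model, that gradient
# is simply w, so the perturbation is -eps * sign(w).
eps = 0.7
x_adv = x - eps * np.sign(w)
print(classify(x_adv))           # the label flips to class 0
```

The perturbation is small and structured rather than random: each pixel moves only `eps` in the direction that most reduces the model's score, which is exactly the kind of "subtle but disruptive deviation" described above.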
It makes sense that the US intelligence community is trying to figure out how BNN manipulation works. Zachary Chase Lipton, an assistant professor at Carnegie Mellon University, told Gizmodo that adversarial attacks against computer vision systems are a clear intelligence or security interest. He gave the example of using facial recognition to control access to places or devices, noting that “understanding the vulnerability of those systems is essential”. If technology existed that allowed someone to fool and gain access to those systems, the intelligence community likely wants to have access to that tech before “some hostile country did”.
If someone can figure out how to construct, train and fool a brain, knowledge of those vulnerabilities could prove invaluable to the intelligence community. It doesn’t require mental gymnastics to understand why ODNI, which oversees 17 US intelligence agencies, might want to learn how to subtly exploit living creatures. In addition to widespread surveillance around the world, the US intelligence community deals with cybersecurity threats, weapons of mass destruction, and counterterrorism.
In a statement emailed to Gizmodo, an ODNI spokesperson explained that the Postdoctoral Research Fellowship Program “provide[s] fellows an opportunity to use their expert knowledge to perform research on a specific topic for two to three years”.
“The person awarded an IC Postdoctoral Research Fellowship is not considered an employee of the US government, and there is no continued employment,” the spokesperson explained. “The opportunity you are referring to, as well as all the other IC Postdoc Research Fellowship opportunities listed on Zintellect, were created by government scientists to inspire new, different, and innovative approaches to unclassified IC research.”
Lipton doesn’t see this as a job for just any postdoctoral fellow. He pointed out that “billions of dollars” have been spent on research on how to simulate brains and that we are currently “at the level of creating the simulations of worm brains”. An artificial BNN that comes anywhere close to a human brain is likely still a long, long way off.
“This isn’t a first step,” Lipton said of the task at hand. “This is a Nobel Prize.”