A team of computer science students has embedded subliminal audio signals into music, allowing them to secretly seize control of devices that respond to voice commands.
In a New York Times report published Thursday, a group of students from the University of California, Berkeley, and Georgetown University details the troubling research that lets them outwit smart speakers.
The team embedded subliminal commands, inaudible to human ears, into anodyne music and white noise. While a human hears nothing out of the ordinary, Siri or Alexa responds to the command. In the team's research, the hidden messages could be used to switch a device into aeroplane mode, open up web pages, or secretly add items to shopping lists.
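The general idea can be illustrated with a toy sketch. This is not the researchers' actual method (their attack optimises perturbations against a real speech-recognition model); it only shows how a signal mixed in at a very low level relative to the music can be hard for a listener to notice yet trivial for a machine listening for that pattern to detect. All names and values here are illustrative assumptions.

```python
import numpy as np

sr = 16_000                        # sample rate in Hz (assumed)
t = np.arange(sr) / sr             # one second of audio

# Stand-in "music": a plain 440 Hz tone.
music = 0.5 * np.sin(2 * np.pi * 440 * t)

# Stand-in hidden "command": a 3 kHz tone mixed in 40 dB quieter than the music.
command = 0.005 * np.sin(2 * np.pi * 3000 * t)

mixed = music + command

# A listener hears the music; a narrowband check at 3 kHz still finds the
# hidden tone by correlating the mix with a probe at that frequency.
probe = np.sin(2 * np.pi * 3000 * t)
correlation = abs(mixed @ probe) / len(t)
print(correlation > 0.001)         # the hidden tone is detectable
```

The music tone contributes almost nothing to the 3 kHz correlation, so the detector responds to the buried signal alone even though it is far below the music in amplitude.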
"We want to demonstrate that it's possible," Nicholas Carlini, a UC Berkeley PhD student and one of the authors of a research paper the team published this month, told the Times, "and then hope that other people will say, 'OK, this is possible, now let's try and fix it.'"
While subliminal attacks can be dangerous (imagine the chaos of a smart home being fed conflicting commands to lock or unlock doors or turn lights on and off), it's important to remember that the AI fuelling Siri and Alexa isn't being "hacked"; it's being fooled. Cybersecurity experts use the term "adversarial example" for an input deliberately crafted to trick AI into erroneously recognising something.
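The distinction between hacking and fooling can be made concrete with a minimal sketch. Assuming a toy linear classifier (nothing like the real speech models in the research), a tiny perturbation chosen against the model's weights flips its decision even though the input barely changes, and no code is ever compromised:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=50)            # hypothetical classifier weights
x = 0.01 * w                       # an input the classifier labels positive

def predict(v):
    """Toy linear classifier: positive class if the weighted sum is > 0."""
    return 1 if w @ v > 0 else 0

# A small step against the sign of each weight (in the style of the
# fast-gradient-sign method) is enough to flip the prediction.
eps = 0.02
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the tiny perturbation changes the label
```

The perturbed input differs from the original by at most 0.02 per sample, yet the classifier's answer changes, which is exactly the sense in which the model is fooled rather than hacked.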
Carlini and his team's research is troubling, particularly as more people turn to voice assistants to run their homes, but it is not the only attack of its kind: other researchers have demonstrated adversarial examples that use makeup or glasses to fool face recognition software.