
Malicious Google Assistant commands can be hidden in music and innocuous-sounding speech recordings

A group of students from UC Berkeley has demonstrated how malicious commands to Google Assistant, Alexa, and Siri can be hidden in recorded music or innocuous-sounding speech.

Simply playing the tracks over the radio, in a streaming music track, or in a podcast could allow attackers to take control of a smart home …

The NY Times reports that the work builds on research that began in 2016.

Over the past two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online — simply with music playing over the radio.

The 2016 research demonstrated commands hidden in white noise, but this month the Berkeley students managed to do the same thing in music and spoken text.

By making slight changes to audio files, researchers were able to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear […]

They were able to hide the command, “O.K. Google, browse to evil.com” in a recording of the spoken phrase, “Without the data set, the article is useless.” Humans cannot discern the command.

The Berkeley group also embedded the command in music files, including a four-second clip from Verdi’s “Requiem.”
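The "slight changes" here are adversarial perturbations of the same kind first studied in image recognition: an optimizer nudges the waveform toward samples the target model transcribes as the attacker's phrase, while a constraint keeps the change quiet enough that listeners don't notice. The published attack targeted Mozilla's DeepSpeech model using a CTC loss; the sketch below substitutes a tiny random network for the recognizer so it runs self-contained, and every name and parameter in it is illustrative rather than taken from the paper.

```python
# Minimal sketch of the optimization behind audio adversarial examples.
# The real attack targeted Mozilla DeepSpeech with a CTC loss; here a
# tiny random conv net stands in for the recognizer so the script runs
# self-contained. All names and values are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "speech recognizer": maps a waveform to per-frame character logits.
recognizer = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=160, stride=80),  # crude framing of the audio
    nn.ReLU(),
    nn.Conv1d(16, 29, kernel_size=1),              # 29 ~ characters + CTC blank
)

waveform = torch.randn(1, 1, 16000)                     # 1 s of "benign" audio at 16 kHz
delta = torch.zeros_like(waveform, requires_grad=True)  # the hidden perturbation

# Attacker's target transcription as a label sequence (hypothetical encoding).
target = torch.randint(1, 29, (1, 12))
ctc = nn.CTCLoss(blank=0)
opt = torch.optim.Adam([delta], lr=1e-3)
eps = 0.01  # cap on the perturbation so it stays nearly inaudible

for step in range(300):
    opt.zero_grad()
    logits = recognizer(waveform + delta)                # (1, 29, frames)
    log_probs = logits.permute(2, 0, 1).log_softmax(-1)  # (frames, 1, 29)
    in_lens = torch.tensor([log_probs.size(0)])
    tgt_lens = torch.tensor([target.size(1)])
    # Push the model toward the attacker's transcription...
    loss = ctc(log_probs, target, in_lens, tgt_lens)
    loss.backward()
    opt.step()
    # ...while clamping the change so a listener still hears the original audio.
    with torch.no_grad():
        delta.clamp_(-eps, eps)

print(f"final CTC loss toward target phrase: {loss.item():.3f}")
```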

Similar techniques have been demonstrated using ultrasonic frequencies.

Researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated by using frequencies inaudible to the human ear. The attack first muted the phone so the owner wouldn’t hear the system’s responses, either.
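The ultrasonic trick works differently: rather than fooling the recognition model, it exploits the microphone hardware. The command is amplitude-modulated onto a carrier above human hearing, and nonlinearity in the microphone's amplifier acts as an unintended demodulator, reproducing the command in the audible band where the assistant processes it normally. Below is a minimal numpy sketch of that signal chain; the carrier frequency, modulation depth, and quadratic nonlinearity model are illustrative assumptions, not figures from the research.

```python
# Sketch of the ultrasonic idea: amplitude-modulate a voice command onto a
# carrier above human hearing. A microphone's nonlinear front-end demodulates
# it back into the audible band, so the assistant "hears" a command people
# cannot. All parameter values are illustrative.
import numpy as np

fs = 192_000                 # high sample rate, needed to represent ultrasound
t = np.arange(int(0.5 * fs)) / fs

# Stand-in for the recorded command; in practice this would be real speech.
command = np.sin(2 * np.pi * 440 * t)          # audible "payload"
carrier = np.sin(2 * np.pi * 25_000 * t)       # 25 kHz: above human hearing

# Classic AM: the payload rides on the carrier. Humans hear essentially nothing.
transmitted = (1 + 0.8 * command) * carrier

# A mic front-end is not perfectly linear; a small quadratic term is a
# common simple model of that nonlinearity.
received = transmitted + 0.1 * transmitted**2

# The quadratic term contains (1 + 0.8*command)^2 * carrier^2, and carrier^2
# has a DC component, so low-pass filtering recovers an audible copy of the
# command. Crude moving-average low-pass (~8 kHz cutoff) for illustration.
width = fs // 8000
demodulated = np.convolve(received, np.ones(width) / width, mode="same")

# Compare energy in the ~200 Hz - 10 kHz band before and after the mic.
band = slice(100, 5000)  # rfft bins are 2 Hz apart for this signal length
print("audible energy before mic:", round(np.abs(np.fft.rfft(transmitted))[band].sum(), 1))
print("audible energy after mic: ", round(np.abs(np.fft.rfft(demodulated))[band].sum(), 1))
```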

The Berkeley researchers say that there is no indication of the attack method being used in the wild, but that could easily change.

Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors, [said that] while there was no evidence that these techniques have left the lab, it may only be a matter of time before someone starts exploiting them. “My assumption is that the malicious people already employ people to do what I do,” he said.

