Alexa And Siri AI Assistants Vulnerable To Secret Commands That Humans Can’t Hear

Amazon Echo

For many people, digital assistants have become a part of daily life—you might have Siri set a reminder for an upcoming appointment on your iPhone, or tell Alexa to order more laundry detergent from Amazon. These AI (artificial intelligence) assistants are becoming more capable by the day as well. But can they be used for nefarious purposes? Perhaps so, and it could happen right under your nose—over the past couple of years, researchers have demonstrated that Apple's Siri, Amazon's Alexa, and Google's Assistant can each receive 'hidden' commands that are undetectable to the human ear.

Some of the things that can be done without an owner's knowledge are seemingly pretty mundane, like dialing a phone number or visiting a website. But those things open the door to bigger risks. An attacker could send an undetectable signal to have a smartphone visit a malicious website and download malware, or wire money to someone. And with greater integration with Internet of Things (IoT) devices and smart home technology, it would even be possible to unlock a door.

These commands can be hidden in white noise played over loudspeakers or YouTube videos, as students from the University of California, Berkeley, and Georgetown University demonstrated two years ago. More recently, researchers at Berkeley said they could hide commands within music recordings and spoken text. That means if you fire up the wrong playlist—a malicious one—it may sound like everything is normal, but in reality someone is attacking your phone or smart speaker.
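
The specifics of the Berkeley work are beyond the scope of this article, but the general idea can be sketched in a few lines of Python. The snippet below is purely illustrative and is not the researchers' actual method: it simply overlays a very small perturbation on an audio clip. In a real attack, that perturbation would be carefully optimized against a speech-recognition model so the doctored clip transcribes as an attacker-chosen command while sounding unchanged to a human listener.

```python
# Conceptual sketch only: adds a low-amplitude perturbation to a waveform.
# The perturbation here is random noise for illustration; a real attack would
# optimize it against the target speech recognizer.
import numpy as np

def embed_hidden_command(clean_audio, perturbation, scale=0.002):
    """Overlay a small perturbation on a waveform normalized to [-1, 1]."""
    perturbation = perturbation[:len(clean_audio)]
    adversarial = clean_audio + scale * perturbation
    return np.clip(adversarial, -1.0, 1.0)  # keep samples in the valid range

rng = np.random.default_rng(0)
# One second of a 440 Hz tone stands in for "music" at a 16 kHz sample rate.
one_second_tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
tampered = embed_hidden_command(one_second_tone, rng.standard_normal(16000))
print(np.max(np.abs(tampered - one_second_tone)))  # added signal is tiny relative to the tone
```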

One of the researchers said the goal was to see if they could "make it even more stealthy," and they accomplished that. The good news is that for now, it all exists in a lab. However, it's probably only a matter of time before these types of attacks trickle out into the wild—if students at universities are working on this sort of thing, it's likely that bad actors are doing the same.

As the number of devices with AI assistants grows, it's important for companies to build in protections against this sort of thing. Some are already in place. On Apple's HomePod, for example, there are measures in place to prevent commands from unlocking doors. And if you own an iPhone, it must be unlocked before Siri will open an app or tap into personal data.

Even so, we've already seen some mild cases of devices being exploited. For example, Burger King ran a commercial last year that intentionally engaged with Google Home smart speakers by using the wake phrase "Okay Google." The incident prompted Google to issue an update to stop that sort of thing, and while not exactly a malicious attack, it underscores that AI assistants are susceptible to outside influences.