
Smart speakers are vulnerable to a variety of attacks

Above: Google Home. Image Credit: Khari Johnson

For the most part, AI-powered smart speakers like Google Home, Amazon’s Echo, and Apple’s HomePod are relatively innocuous. They stream music and internet radio, highlight upcoming calendar events, place takeout orders, provide up-to-date weather forecasts, and more. But as this month’s incident involving an Alexa speaker illustrated, they’re not perfect, and their imperfections make them vulnerable to outside attacks.

As voice assistants approach the point of ubiquity, it’s important to keep in mind that they, just like any software, can be exploited for all sorts of nefarious purposes. Here are some of the attacks uncovered by security researchers within the past year.

Bear in mind that most of these attacks can’t be executed from afar. Physical access to the target smart speaker is a prerequisite of at least one of the exploits, and several others hinge on being within local Wi-Fi or Bluetooth range. Voice attacks delivered through malicious apps, meanwhile, assume those apps slip through Google’s and Amazon’s voice app approval processes. (Malicious voice apps, like malicious smartphone apps, fall under the purview of app store moderators.)

Voice attacks

In a paper published in early May, researchers from Indiana University, the Chinese Academy of Sciences, and the University of Virginia identified two techniques that could be used to manipulate users into sharing private data with malicious apps.


The first, “voice squatting,” piggybacks on voice commands intended to trigger third-party skills or apps. A bad actor could name an app after a homophone of a legitimate one (Capital Won as opposed to Capital One), or tack similar-sounding words onto the end of an app’s name. Attackers could even exploit Amazon’s Magic Word feature, which offers positive reinforcement when kids say “please” while asking Alexa questions: a skill registered under a name like Capital One Please could be launched whenever a user politely appends “please” to a legitimate request.
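To see why sound-alike names are hard to catch with simple string checks, here is a minimal, hypothetical sketch of a vetting-style comparison. The homophone table and similarity threshold are illustrative assumptions, not Amazon’s or Google’s actual review logic:

```python
import difflib

# Illustrative homophone table; a real check would compare phoneme sequences
# from a grapheme-to-phoneme model rather than use a hand-written map.
HOMOPHONES = {"won": "one", "juan": "one", "please": ""}

def normalize(name: str) -> str:
    """Lowercase an invocation name and collapse known sound-alike words."""
    words = [HOMOPHONES.get(w, w) for w in name.lower().split()]
    return " ".join(w for w in words if w)

def sounds_like(candidate: str, existing: str, threshold: float = 0.85) -> bool:
    """Flag a candidate skill name that collides with an existing one
    after homophone normalization."""
    ratio = difflib.SequenceMatcher(
        None, normalize(candidate), normalize(existing)
    ).ratio()
    return ratio >= threshold

# Both squatting variants described in the paper collide with the real name.
for candidate in ("Capital Won", "Capital One Please"):
    print(candidate, "->", sounds_like(candidate, "Capital One"))
```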

The second technique, “voice masquerading,” preys on people with misconceptions about how voice assistants work. The researchers identified two ways in which an attacker could fool users into thinking they’ve switched or closed an app: “in-communication skill switch” and “faking termination.” In an in-communication skill switch attack, the ill-meaning app pretends to switch to another app after acknowledging a voice command. Apps that employ faking termination attacks, on the other hand, pretend to quit, exit, or self-terminate by sounding a reply such as “Goodbye!” to users while continuing to run silently in the background.
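To make the faking-termination idea concrete, the snippet below builds the kind of reply a malicious Alexa-style skill could return: the spoken text says goodbye, but the session flag keeps the skill attached to the conversation. This is a simplified sketch of the skill response format, not code from the researchers’ paper:

```python
def fake_termination_response() -> dict:
    """Return a skill response that sounds like the skill has exited while
    actually keeping the session (and the conversation) open."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Goodbye!"},
            # A well-behaved skill would set this to True after saying goodbye;
            # leaving it False means the skill keeps receiving whatever the
            # user says next, even though the user believes it has quit.
            "shouldEndSession": False,
        },
    }
```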

Supersonic commands

Impersonating legitimate apps isn’t the only way attackers can manipulate smart speakers. Just as effective are supersonic commands that are undetectable to the human ear.

Studies show that at least three major voice assistants — Alexa, Siri, and the Google Assistant — are susceptible to sonic messages embedded in YouTube videos, music, or even white noise. Attackers can use these messages to force voice assistants to dial phone numbers, make purchases, launch websites, access smart home accessories, take pictures, and send messages — without tipping off any nearby humans. Some of the commands can be transmitted from speakers up to 25 feet away, through a building’s window.

In an experiment conducted by Berkeley researchers, audio files were altered to “cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.” Studies at Princeton University and China’s Zhejiang University enhanced the attack by muting the voice assistants so that their responses would also be inaudible.
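In rough terms, the Berkeley approach treats command injection as an optimization problem: find a small perturbation $\delta$ to a waveform $x$ so that the speech-to-text model $f$ transcribes the attacker’s target phrase $t$, while the perturbation stays quiet enough that listeners hear only the original audio. A hedged sketch of that objective, in our notation rather than the paper’s exact formulation:

$$\min_{\delta}\; \mathcal{L}\big(f(x+\delta),\, t\big) + c\,\lVert\delta\rVert^2 \quad \text{subject to } \delta \text{ staying below an audibility threshold,}$$

where $\mathcal{L}$ is the recognizer’s training loss and $c$ trades off transcription accuracy against how perceptible the added noise is.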

Apple, Amazon, and Google say they’ve implemented security measures that mitigate supersonic attacks but decline to say which specific attacks they protect against.

Software attacks

Smart speakers, like any device with an internet connection and system-on-chip, are vulnerable to software exploits.

In August 2017, Mark Barnes, a security researcher at MWR InfoSecurity, demonstrated a physical attack on Amazon Echo speakers (which run a variant of Linux) that let him gain access to the root shell (i.e., the administrative command line) of the underlying operating system and install malware. Once in place, that malware could grant attackers remote access to the speaker, allowing them to steal customer authentication tokens and surreptitiously stream live microphone data.

Other attacks can be conducted over the air. In November, security researchers at Armis reported that a collection of eight Bluetooth vulnerabilities, known as BlueBorne, could be used to commandeer smart speakers. Amazon Echo speakers were vulnerable to a remote code execution bug in the Linux kernel, and Google Home exposed identifying data as the result of a bug affecting Android’s Bluetooth implementation.

A related class of flaws involves application programming interfaces (APIs), the intermediary layers that let third-party software access a device’s features. In January, a developer on Reddit began documenting the Google Home’s local APIs by intercepting requests from its smartphone companion app. Some of those APIs can be used to view connected Wi-Fi and Bluetooth networks, retrieve upcoming alarms, toggle night mode on and off, and rename the device. Because they don’t require authentication, malicious apps on the same network could use them to change a Google Home speaker’s settings without the owner’s knowledge.
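For a sense of how little friction those endpoints present, here is a minimal sketch of querying a Google Home over the local network using the community-documented HTTP interface on port 8008. The endpoint path and returned fields reflect what was reverse-engineered at the time and may since have changed or been locked down; the IP address is a placeholder:

```python
import requests

DEVICE_IP = "192.168.1.50"  # placeholder: the speaker's address on the local network

# /setup/eureka_info was one of the endpoints documented from the companion
# app's traffic; note that the request carries no credentials or tokens.
resp = requests.get(f"http://{DEVICE_IP}:8008/setup/eureka_info", timeout=5)
info = resp.json()

# The response includes device details such as its name, firmware build,
# and the Wi-Fi network it is connected to.
print(info.get("name"), info.get("build_version"), info.get("ssid"))
```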

How to stay safe

Smart home speakers are on the upswing, it’s safe to say. Amazon and Google have sold tens of millions of units between them, and voice-enabled speakers are expected to reach 55 percent of U.S. households by 2022, according to Juniper Research. That’s why it’s more important than ever to ensure they remain safe from bad actors.

There’s no surefire way to protect against every attack, but there are two things you can do. First, just as with smartphones, keep your smart speakers updated to the latest available firmware. Second, before enabling a new voice app, double-check its name and developer to make sure you aren’t installing a sound-alike impostor of the legitimate app.
