With a laser, researchers say they can hack Alexa, Google Home or iPhones


SAN FRANCISCO (NYTIMES, WASHINGTON POST) - Since voice-controlled digital assistants were introduced a few years ago, security experts have fretted that systems like Apple's Siri and Amazon's Alexa were a privacy threat and could be easily hacked.

But the risk presented by a cleverly pointed light was probably not on anyone's radar.

Researchers in Japan and at the University of Michigan said on Monday (Nov 4) they had found a way to take over Google Home, Amazon's Alexa or Apple's Siri devices from hundreds of feet away by shining laser pointers, and even flashlights, at the devices' microphones.

In one case, they said they opened a garage door by shining a laser beam at a voice assistant that was connected to it. They also climbed 43m to the top of a bell tower at the University of Michigan and successfully controlled a Google Home device on the fourth floor of an office building 70m away. And by focusing their lasers using a telephoto lens, they said, they were able to hijack a voice assistant more than 107m away.

Opening the garage door was easy, the researchers said. With the light commands, they could have hijacked any smart system attached to the voice-controlled assistants.

They said they could have easily switched light switches on and off, made online purchases or opened a front door protected by a smart lock. They even could have remotely unlocked or started a car that was connected to the device.

"This opens up an entirely new class of vulnerabilities," said Kevin Fu, an associate professor of electrical engineering and computer science at the University of Michigan. "It's difficult to know how many products are affected because this is so basic."

The computer science and electrical engineering researchers - Takeshi Sugawara of the University of Electro-Communications in Japan, and Fu, Daniel Genkin, Sara Rampazzi and Benjamin Cyr of the University of Michigan - released their findings in a paper on Monday.

The researchers spent seven months testing the trick on 17 voice-controlled devices that run Alexa, Siri, Facebook Portal or Google Assistant, including the Google Home, Echo Dot, Fire Cube, Google Pixel, Samsung Galaxy, iPhone and iPad.

The researchers weren't sure exactly why these microphones respond to light as they do to sound; they didn't want to speculate and are leaving the physics for future study.

The researchers said they alerted Tesla, Ford, Amazon, Apple and Google to the light vulnerability. The companies all said they were studying the conclusions in the paper.

Spokesmen for Google and Amazon said the companies were reviewing the research and its implications for the security of their products, but said the risk to consumers appeared limited.

An Amazon spokesman pointed out that customers could safeguard Alexa-enabled products with a PIN, or use the mute button to disconnect the microphone.

Researchers have previously revealed other undetectable means of exploiting voice-command devices, but those methods have been more limited. In 2016, researchers at the University of California at Berkeley showed it was possible to cloak commands in white noise, music or spoken text.

In 2017, researchers in China showed it was possible to give commands to smart devices at frequencies inaudible to the human ear, but the transmitter needed to be relatively close to the device for the method to work.

There are no known instances of someone using light commands to hack a device, the researchers said, but eliminating the vulnerability would require a redesign of most microphones.

But there are limits to the stealth of a light-command attack, the researchers found. With the exception of infrared lasers, laser beams and other lights are visible to the naked eye and could easily be noticed by someone near the device.

Voice-command devices also generally give audible responses, but an attacker could turn down the device's volume to continue operating it undetected.

For now, the researchers say the only foolproof way to protect against light commands is to keep devices out of the line of sight of windows, away from prying eyes - and prying laser beams.
