DolphinAttack

Inaudible Voice Commands

Project README

What is DolphinAttack?

Speech recognition systems such as Siri or Google Now have become an increasingly popular human-computer interaction method and have turned various systems into voice controllable systems (VCS). Prior work on attacking VCS shows that hidden voice commands that are incomprehensible to people can control the systems. Hidden voice commands, though ‘hidden’, are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., frequencies above 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and, more importantly, interpreted by the speech recognition systems. We validate DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana, and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, including activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions. We validate that it is feasible to detect DolphinAttack by classifying the recorded audio with a support vector machine (SVM), and suggest re-designing voice controllable systems to be resilient to inaudible voice command attacks.
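As a rough illustration of the SVM-based detection idea, the hedged sketch below (Python with scikit-learn) trains a classifier on a simple spectral feature. Signals recovered from ultrasound lose high-frequency content, so genuine and demodulated audio are spectrally separable. The feature choice and the synthetic stand-in data are our assumptions, not the exact classifier from the paper.

```python
# Hedged sketch of SVM-based DolphinAttack detection; the feature and
# synthetic data are illustrative assumptions, not the paper's classifier.
import numpy as np
from sklearn.svm import SVC

FS = 44_100  # assumed recording sample rate

def spectral_features(x):
    """High-band energy ratio and normalized spectral centroid."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / FS)
    high_ratio = spec[freqs > 5_000].sum() / spec.sum()
    centroid = (freqs * spec).sum() / spec.sum() / (FS / 2)
    return [high_ratio, centroid]

# Stand-in corpora: in practice, use recordings of genuine commands and
# of commands recovered from ultrasound (e.g., from the dataset below).
rng = np.random.default_rng(0)
genuine = [rng.standard_normal(4096) for _ in range(50)]
attack = [np.convolve(rng.standard_normal(4096), np.ones(16) / 16, "same")
          for _ in range(50)]  # crudely low-passed, mimicking demodulation loss

X = [spectral_features(s) for s in genuine + attack]
y = [0] * 50 + [1] * 50
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```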

How does DolphinAttack work?

A nonlinear system is one in which the change of the output is not proportional to the change of the input. Many electronic devices, such as microphones and amplifiers, are nonlinear under certain circumstances. When a signal containing two or more frequencies passes through a nonlinear system, intermodulation occurs, introducing additional signal components at new frequencies. DolphinAttack is built on this effect: by transmitting modulated ultrasound, we can recover "audible" voice command signals from the nonlinear hardware and thereby control speech recognition systems.
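The following minimal Python sketch illustrates this demodulation effect numerically. The carrier frequency, nonlinearity coefficient, and filter settings are illustrative assumptions rather than measured device parameters.

```python
# Minimal numerical sketch of nonlinear demodulation; parameter values
# are illustrative assumptions, not measured device parameters.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 192_000                       # sample rate high enough for ultrasound
t = np.arange(0, 0.05, 1 / FS)     # 50 ms of signal

f_voice = 1_000                    # stand-in for one voice component (Hz)
f_carrier = 25_000                 # ultrasonic carrier, inaudible (> 20 kHz)
baseband = np.cos(2 * np.pi * f_voice * t)

# Amplitude modulation: every transmitted component sits above 20 kHz.
transmitted = (1 + baseband) * np.cos(2 * np.pi * f_carrier * t)

# Model the microphone front end as a memoryless nonlinearity
# y = x + a*x^2; the quadratic term creates intermodulation products,
# including a copy of the baseband envelope at f_voice.
a = 0.1
mic_output = transmitted + a * transmitted ** 2

# The low-pass behavior of the audio chain (modeled with a Butterworth
# filter) removes the ultrasonic components, leaving the audible copy.
b_lp, a_lp = butter(4, 8_000 / (FS / 2))
recovered = filtfilt(b_lp, a_lp, mic_output)

# Verify that energy reappeared at the voice frequency.
spec = np.abs(np.fft.rfft(recovered))
freqs = np.fft.rfftfreq(len(recovered), 1 / FS)
band = (freqs > 100) & (freqs < 10_000)
print("strongest audible component: %.0f Hz"
      % freqs[band][np.argmax(spec[band])])  # expected: 1000 Hz
```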


Tested devices

The following devices and voice assistants have been tested in our experiments, using the experimental parameters provided in our paper. The table will be kept up to date.

Manufacturer   Model             OS/Version            Voice Assistant   Activation¹   Recognition²
Apple          iPhone 4s         iOS 9.3.5             Siri              Y             Y
Apple          iPhone 5s         iOS 10.0.2            Siri              Y             Y
Apple          iPhone SE         iOS 10.3.1, 10.3.2    Siri              Y             Y
Apple          iPhone 6s         iOS 10.2.1            Siri              Y             Y
Apple          iPhone 6 Plus     iOS 10.3.1            Siri              Y             N
Apple          iPhone 7 Plus     iOS 10.3.1            Siri              Y             Y
Apple          Watch             watchOS 3.1           Siri              Y             Y
Apple          iPad mini 4       iOS 10.2.1            Siri              Y             Y
Apple          MacBook           macOS Sierra          Siri              N/A           Y
Google         Nexus 5X          Android 7.1.1         Google Now        Y             Y
Google         Nexus 7           Android 6.0.1         Google Now        Y             Y
Samsung        Galaxy S6 edge    Android 6.0.1         S Voice           Y             Y
Huawei         Honor 7           Android 6.0           HiVoice           Y             Y
Lenovo         ThinkPad T440p    Windows 10            Cortana           Y             Y
Amazon         Echo              5589                  Alexa             Y             Y
Audi           Q3                N/A                   N/A               N/A           Y

¹ The voice assistant/device can be activated by DolphinAttack voice commands.

² The voice assistant/device can recognize DolphinAttack voice commands after being activated.

How to reproduce the experiment?

We describe two setups in our paper: a benchtop one and a portable one. However, we keep receiving emails asking about the portable setup. We understand the low cost and convenience of the portable setup, but we recommend starting with a benchtop setup of professional equipment, if at all possible. The reason is that the nonlinearity is highly hardware dependent and can vary greatly from device to device, and the key to this experiment is finding and tuning the parameters. Due to the limitations of a portable setup in power, modulation precision, transducer frequency selectivity, debugging, and so on, it is hard to pinpoint the problems you will run into with a portable setup. The exact models of all the benchtop equipment can be found in the reference section of our paper.
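For readers who want to experiment, the hedged sketch below amplitude-modulates a recorded command onto several candidate ultrasonic carriers so each can be tried against a target device, which is the kind of parameter sweep the tuning step involves. The input file name, carrier range, modulation depth, and output sample rate are our assumptions; the exact equipment and settings are in the paper.

```python
# Hedged helper sketch for the parameter-tuning step; file name, carrier
# range, and output rate are assumptions, not the paper's settings.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

FS_OUT = 192_000  # must exceed twice the highest carrier frequency

rate, cmd = wavfile.read("command.wav")   # hypothetical mono recording
cmd = cmd.astype(np.float64)
cmd /= np.max(np.abs(cmd))                # normalize to [-1, 1]
cmd = resample_poly(cmd, FS_OUT, rate)    # upsample to the output rate

t = np.arange(len(cmd)) / FS_OUT
for f_c in range(21_000, 33_000, 2_000):  # candidate carrier frequencies (Hz)
    # Standard AM with full carrier; depth < 1 avoids overmodulation.
    am = (1 + 0.9 * cmd) * np.cos(2 * np.pi * f_c * t)
    am /= np.max(np.abs(am))
    wavfile.write("command_%dHz.wav" % f_c, FS_OUT,
                  (am * 32767).astype(np.int16))
```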

Video demonstration

Here is a video demonstration of DolphinAttack. Watch more videos on our lab homepage at usslab.org.

DolphinAttack Demo

Worried about your device?

We have informed the manufacturers listed above and are collaborating with them on security patches. Until the patches are ready, you can protect your devices from DolphinAttack by turning off the voice activation feature of the voice assistant, such as "Hey Siri". In this way, the voice assistant can only be activated through physical touch. For extra security, you can turn the voice assistant off temporarily.

Read our paper

In the news

A Part of the DolphinAttack Dataset

This demo dataset consists of more than 2,900 audio clips, covering over 20 voice commands recorded at 7 distances by 5 speakers. DolphinAttack Dataset Download Link

Contact

Powered by

Ubiquitous System Security Laboratory (USSLab)

Zhejiang University
