Alexa vs Alexa

Controlling Smart Speakers by Self-Issuing Voice Commands

AvA

How it Works

Alexa versus Alexa (AvA) is the offensive act of self-issuing arbitrary voice commands on an Echo device, that is, of using the device's own speaker to issue commands to the device itself. Using AvA, an attacker can control a victim Echo device through common audio-reproduction channels, such as a radio station that acts as a C&C server, or a nearby device that uses the Echo Dot as a Bluetooth speaker. AvA starts when the Echo device connects to one of these attack vectors. From time to time, the chosen attack vector streams to the Echo device voice commands that exploit its ability to self-trigger. These commands are chosen by the attacker and can be generated with any Text-to-Speech solution, or with any tool capable of generating adversarial commands that work against Alexa over the air, although with a lower success rate.

The AvA Exploitation Flow.
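The flow above can be sketched as a toy model. Nothing here reflects Amazon's internals: the class and method names are purely illustrative, and the "device" simply executes any command it hears through its own speaker, which is the essence of the self-issue vulnerability.

```python
# Toy model of the AvA exploitation flow. All names are illustrative,
# not Amazon APIs.

class EchoDevice:
    """Models a device that executes voice commands picked up by its
    microphone, even when they come from its own speaker."""

    WAKE_WORDS = ("alexa", "echo")

    def __init__(self):
        self.executed = []  # commands the device has carried out

    def play_audio(self, audio_text):
        # The speaker output is also picked up by the microphone,
        # so streamed voice commands can self-trigger the device.
        self.hear(audio_text)

    def hear(self, utterance):
        first_word = utterance.lower().split(",")[0].strip()
        if first_word in self.WAKE_WORDS and "," in utterance:
            command = utterance.split(",", 1)[1].strip()
            self.executed.append(command)


class AttackVector:
    """Models a C&C channel, e.g. a malicious radio station or a
    nearby device paired to the Echo Dot via Bluetooth."""

    def __init__(self, commands):
        self.commands = commands  # attacker-chosen TTS commands

    def stream_to(self, device):
        for cmd in self.commands:
            device.play_audio(cmd)


cnc = AttackVector(["Echo, what time is it?", "Echo, turn off"])
echo = EchoDevice()
cnc.stream_to(echo)
print(echo.executed)  # commands the device issued to itself
```

The key point the sketch captures is that the attack vector never touches the device directly: it only plays audio, and the device's own speaker-to-microphone path does the rest.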

Demo

AvA in Action

The video shows AvA exploiting the self-issue vulnerability to give the "Echo, what time is it?" command, generated with Google TTS, by means of a nearby device connected via Bluetooth to a 3rd-generation Echo Dot. Notice how the volume turns down when Echo recognizes the wake word. Subsequently, the video shows the exploitation of a vulnerability we call the Full Volume Vulnerability (FVV) by self-issuing the "Echo, turn off" command. Immediately after, the attacker issues the longer command "Echo, what is the weather like in New York?". Notice how the volume no longer turns down, because the attacker has exploited the FVV.

Responsible Disclosure

Complete Timeline of the Responsible Disclosure Process

March 2020 Research on AvA starts!

21st January 2021 We start the responsible disclosure process by reporting all the undesirable behaviours and potential vulnerabilities we found to Amazon, via their Vulnerability Research Program. Our report includes the self-issue vulnerability, the Full Volume Vulnerability and the possibility of chaining multiple break SSML tags within a skill response, a behaviour that could lead to realistic Voice Masquerading Attack (VMA) scenarios.
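For context, Alexa's SSML caps each break tag at 10 seconds of silence, so chaining several of them within one skill response keeps the skill silently active for much longer than a single tag allows. A schematic skill response (illustrative only, not the exact payload from the report) might look like:

```xml
<!-- Illustrative only: chaining break tags produces a long silent
     response, so the user may believe the session has ended while
     the skill is in fact still running and can speak later. -->
<speak>
    <break time="10s"/>
    <break time="10s"/>
    <break time="10s"/>
    Sorry, I didn't get that.
</speak>
```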

2nd February 2021 First response from Amazon.

4th February 2021 After a first review of the report, Amazon does not object to our decision to submit our research paper to venues for publication.

18th February 2021 Our research team engages in a videoconference with Amazon to further explain details of the found vulnerabilities.

8th April 2021 Our report is assigned Medium severity by the Amazon Team.

18th October 2021 AvA is accepted to the ASIACCS 2022 conference!

19th October 2021 Our research team contacts Amazon to inform them that our research paper will appear in the conference's proceedings. We also ask for permission to publish a pre-print version of the paper, to publish this website and to issue a press release.

20th October 2021 Amazon requests the disclosure materials for review prior to publication ahead of the conference, in line with responsible disclosure.

21st January 2022 Amazon requests edits to the disclosure materials based on the facts of the potential issue.

29th January 2022 Agreed deadline for the disclosure.

17th February 2022 The vulnerability is disclosed via publication of a pre-print paper on arXiv.

23rd February 2022 The self-issue vulnerability on Amazon Echo devices receives a CVE entry.

28th February 2022 A video demonstration of AvA is uploaded to YouTube. This website goes live.

Paper

Alexa vs Alexa: Controlling Smart Speakers by Self-Issuing Voice Commands

AvA was reported by Sergio Esposito (Royal Holloway University of London), Daniele Sgandurra (formerly Royal Holloway University of London) and Giampaolo Bella (Università degli Studi di Catania). The paper will be published in the proceedings of the 17th ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS 2022).

Sergio Esposito

PhD Student at Royal Holloway University of London

Daniele Sgandurra

Formerly at Royal Holloway University of London

Giampaolo Bella

Associate Professor (with Italian MIUR habilitation as Full Professor) at Università degli Studi di Catania

Q&A

Questions and Answers

Is my Echo device vulnerable to AvA?

Most likely. We have been able to confirm that AvA is a threat to 3rd- and 4th-generation Echo Dot devices. The remote self-wake issue via skills is no longer possible in the manner demonstrated by the research, as Amazon has fixed it. It is still possible to self-issue commands via Bluetooth.

Can I check whether someone has used AvA against my Echo device?

Yes. If someone has used AvA against your Echo device in the past, the issued commands are stored in the command history within the Alexa companion app. Look for commands you did not issue yourself, although if your Echo device is used by multiple users, it can be harder to distinguish legitimate commands from malicious ones. Fortunately, commands issued by AvA emit audible sounds. If your Echo device is emitting suspicious sounds or you hear voice commands coming from it, it is likely that you are under attack.

What should I do if my Echo device is under attack?

Restart your Echo device. This terminates the AvA attack, since it disconnects Echo from the attack vector (a malicious radio station or a nearby device connected via Bluetooth). However, since the Bluetooth pairing process does not need to be repeated once the attacker has established the initial connection, if the attacker is using that attack vector and is still nearby, they can reconnect and resume the attack. Hence, resetting the Echo device to factory settings is the safest way to ensure it is permanently disconnected from the attack vector.

How can I protect myself against AvA?

Mute your Echo device's microphone during the night, or when you are not around Echo. This makes it impossible to self-issue any command. Additionally, if the microphone is unmuted only when you are near Echo, you will be able to hear the self-issued commands and react to them in time (e.g., powering off Echo or cancelling an order the attacker has placed with your Amazon account). You can always exit skills by saying "Alexa, quit" or "Alexa, cancel". Additionally, you can enable an audible indicator that plays after the Echo device detects the wake word.

What could an attacker do with AvA?

They could potentially control your Echo device and all connected smart appliances. The attacker could set alarms, unlock door locks, make phone calls, turn off lights, and much more. They could even hijack your voice commands towards a malicious skill that pretends to be Alexa or any other skill you wanted to call (that is, perform a Voice Masquerading Attack). To mitigate these risks, you can set up voice PINs for sensitive Alexa operations, including purchases, smart-home requests such as unlocking doors, and certain skills (e.g., banking or health).

Is AvA being exploited in the wild?

We are not aware of the vulnerability being exploited in the wild.

Has Amazon fixed the vulnerabilities?

The remote self-wake issue via skills and the break SSML tag chain are no longer possible in the manner demonstrated by the research, as Amazon has fixed them. We remain in touch with Amazon to provide as much information as we can.

Logo art by Giulia Coco