Security and Privacy Problems in Voice Assistant Applications: A Survey
- URL: http://arxiv.org/abs/2304.09486v1
- Date: Wed, 19 Apr 2023 08:17:01 GMT
- Title: Security and Privacy Problems in Voice Assistant Applications: A Survey
- Authors: Jingjin Li, Chao Chen, Lei Pan, Mostafa Rahimi Azghadi, Hossein Ghodosi, Jun Zhang
- Abstract summary: Security and privacy threats have emerged with the rapid development of the Internet of Things (IoT).
The security issues researched include attack techniques against the machine learning models and other hardware components widely used in voice assistant applications.
This paper summarizes and assesses five kinds of security attacks and three types of privacy threats reported in papers published at top-tier conferences in the cyber security and voice domains.
- Score: 10.10499765108625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Voice assistant applications have become omnipresent nowadays. The
two models that provide the most important functions of real-life applications
(e.g., Google Home, Amazon Alexa, and Siri) are Automatic Speech Recognition
(ASR) models and Speaker Identification (SI) models. According to recent
studies, security and privacy threats have also emerged with the rapid
development of the Internet of Things (IoT). The security issues researched
include attack techniques against the machine learning models and other
hardware components widely used in voice assistant applications. The privacy
issues include information stealing at the technical level and privacy
breaches at the policy level. Voice assistant applications take a steadily
growing share of the market every year, yet their privacy and security issues
continue to cause severe economic losses and to endanger users' sensitive
personal information. Thus, a comprehensive survey is needed to categorize the
current research on the security and privacy problems of voice assistant
applications. This paper summarizes and assesses five kinds of security
attacks and three types of privacy threats reported in papers published at
top-tier conferences in the cyber security and voice domains.
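A representative example of an attack technique toward machine learning models in this space is the adversarial example, in which a small perturbation added to a waveform changes the model's prediction. As a purely illustrative aid (not taken from the survey), the Python sketch below applies an FGSM-style perturbation to a toy linear speaker-identification classifier; the weights, waveform, label, and epsilon are all assumptions made here, and real attacks target deep ASR/SI models, often under psychoacoustic or over-the-air constraints.

    import numpy as np

    # Toy, self-contained illustration of an FGSM-style adversarial perturbation.
    # Assumed setup: a linear "speaker identification" classifier over a 1-second,
    # 16 kHz waveform. Real voice assistants use deep ASR/SI models instead.
    rng = np.random.default_rng(0)
    num_samples, num_speakers = 16000, 4
    W = rng.normal(scale=0.01, size=(num_speakers, num_samples))  # assumed weights
    b = np.zeros(num_speakers)
    x = rng.uniform(-1.0, 1.0, size=num_samples)  # assumed "clean" waveform
    true_label = 2                                # assumed enrolled speaker

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def predict(wave):
        return softmax(W @ wave + b)

    # For a linear model with cross-entropy loss, the gradient of the loss with
    # respect to the input is W^T (softmax(Wx + b) - onehot(y)).
    probs = predict(x)
    onehot = np.eye(num_speakers)[true_label]
    grad_wrt_input = W.T @ (probs - onehot)

    # FGSM step: move each sample by a small budget epsilon in the direction that
    # increases the loss, then clip back to the valid waveform range.
    epsilon = 0.05
    x_adv = np.clip(x + epsilon * np.sign(grad_wrt_input), -1.0, 1.0)

    print("clean prediction:      ", int(predict(x).argmax()))
    print("adversarial prediction:", int(predict(x_adv).argmax()))

The per-sample budget epsilon is the knob that trades off imperceptibility against attack success; stealthier attacks in the literature shape the perturbation with psychoacoustic masking rather than a plain L-infinity bound.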
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Information Security and Privacy in the Digital World: Some Selected Topics [1.3592237162158234]
New challenges are faced in identifying spurious and fake information and protecting the privacy of sensitive data.
This book presents some of the state-of-the-art research works in the field of cryptography and security in computing and communications.
arXiv Detail & Related papers (2024-03-30T03:52:58Z)
- Evaluating the Security and Privacy Risk Postures of Virtual Assistants [3.1943453294492543]
We evaluated the security and privacy postures of eight widely used voice assistants: Alexa, Braina, Cortana, Google Assistant, Kalliope, Mycroft, Hound, and Extreme.
Results revealed that these VAs are vulnerable to a range of security threats.
These vulnerabilities could allow malicious actors to gain unauthorized access to users' personal information.
arXiv Detail & Related papers (2023-12-22T12:10:52Z)
- Privacy-preserving and Privacy-attacking Approaches for Speech and Audio -- A Survey [7.88857172307008]
This paper aims to examine existing approaches for privacy-preserving and privacy-attacking strategies for audio and speech.
We classify the attack and defense scenarios into several categories and provide detailed analysis of each approach.
Our investigation reveals that voice-controlled devices based on neural networks are inherently susceptible to specific types of attacks.
arXiv Detail & Related papers (2023-09-26T17:31:35Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing this data effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Privacy in Open Search: A Review of Challenges and Solutions [0.6445605125467572]
Information retrieval (IR) is prone to privacy threats, such as attacks and unintended disclosures of documents and search history.
This work aims at highlighting and discussing open challenges for privacy in the recent literature of IR, focusing on tasks featuring user-generated text data.
arXiv Detail & Related papers (2021-10-20T18:38:48Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Speaker De-identification System using Autoencoders and Adversarial Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z)
- Stop Bugging Me! Evading Modern-Day Wiretapping Using Adversarial Perturbations [47.32228513808444]
Mass surveillance systems for voice over IP (VoIP) conversations pose a great risk to privacy.
We present an adversarial-learning-based framework for privacy protection for VoIP conversations.
arXiv Detail & Related papers (2020-10-24T06:56:35Z)
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
- Measuring the Effectiveness of Privacy Policies for Voice Assistant Applications [12.150750035659383]
We conduct the first large-scale data analytics to systematically measure the effectiveness of privacy policies provided by voice-app developers.
We analyzed 64,720 Amazon Alexa skills and 2,201 Google Assistant actions.
Our findings reveal a worrisome reality of privacy policies in two mainstream voice-app stores.
arXiv Detail & Related papers (2020-07-29T03:17:51Z)