SoK: A Study of the Security on Voice Processing Systems
- URL: http://arxiv.org/abs/2112.13144v1
- Date: Fri, 24 Dec 2021 21:47:06 GMT
- Title: SoK: A Study of the Security on Voice Processing Systems
- Authors: Robert Chang, Logan Kuo, Arthur Liu, and Nader Sehatbakhsh
- Abstract summary: We identify and classify a range of distinct attacks on voice processing systems.
The machine learning systems and deep neural networks at the core of modern voice processing systems were built with a focus on performance and scalability rather than security.
- Score: 2.596028864336544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the use of Voice Processing Systems (VPS) continues to become more
prevalent in our daily lives through the increased reliance on applications
such as commercial voice recognition devices as well as major text-to-speech
software, the attacks on these systems are increasingly complex, varied, and
constantly evolving. With the use cases for VPS rapidly growing into new spaces
and purposes, the potential privacy consequences are increasingly severe. In
addition, the growing number and increased practicality of over-the-air
attacks have made system failures much more probable. In this paper, we
identify and classify a range of distinct attacks on voice processing
systems. Over the years, research has moved from specialized, untargeted
attacks that cause system malfunction and denial of service toward more
general, targeted attacks that can force an outcome controlled
by an adversary. The current and most frequently used machine learning systems
and deep neural networks, which are at the core of modern voice processing
systems, were built with a focus on performance and scalability rather than
security. Therefore, it is critical for us to reassess the developing voice
processing landscape and to identify the state of current attacks and defenses
so that we may suggest future developments and theoretical improvements.
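To make the shift from untargeted to targeted attacks concrete, here is a minimal sketch of both modes using one-step FGSM on a raw waveform. The toy classifier, labels, and step size are illustrative assumptions, not systems or parameters drawn from this paper.

```python
# Minimal sketch: untargeted vs. targeted one-step FGSM on a raw waveform.
# The classifier below is a toy stand-in, not a system from this paper.
import torch
import torch.nn.functional as F

def fgsm(model, waveform, label, epsilon, targeted=False):
    """Untargeted: ascend the loss on the true label (cause any failure).
    Targeted: descend the loss on an adversary-chosen label (force an outcome)."""
    x = waveform.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    step = -epsilon if targeted else epsilon
    return (x + step * x.grad.sign()).detach()

# Toy classifier: one second of 16 kHz audio -> 10 command classes.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 10))
x = torch.randn(1, 16000)                 # placeholder utterance
y_true = torch.tensor([3])                # ground-truth class
y_target = torch.tensor([7])              # class the adversary wants

x_untargeted = fgsm(model, x, y_true, epsilon=1e-3)                 # denial of service
x_targeted = fgsm(model, x, y_target, epsilon=1e-3, targeted=True)  # adversary-controlled outcome
```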
Related papers
- Vulnerabilities in Machine Learning-Based Voice Disorder Detection Systems [3.4745231630177136]
We explore attacks that can reverse a system's classification and compromise its reliability.
Given the critical nature of personal health information, understanding which types of attacks are effective is a necessary first step toward improving the security of such systems.
Our findings identify the most effective attack strategies, underscoring the need to address these vulnerabilities in machine-learning systems used in the healthcare domain.
arXiv Detail & Related papers (2024-10-21T10:14:44Z)
- Safeguarding Voice Privacy: Harnessing Near-Ultrasonic Interference To Protect Against Unauthorized Audio Recording [0.0]
This paper investigates the susceptibility of automatic speech recognition (ASR) algorithms to interference from near-ultrasonic noise.
We expose a critical vulnerability in the most common microphones used in modern voice-activated devices, which inadvertently demodulate near-ultrasonic frequencies into the audible spectrum.
Our findings highlight the need to develop robust countermeasures to protect voice-activated systems from malicious exploitation of this vulnerability.
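As a rough, self-contained illustration of the demodulation effect described in this entry, the sketch below models a microphone front end with a small quadratic nonlinearity: two near-ultrasonic tones intermodulate and leave a difference tone squarely in the audible band. The sample rate, tone frequencies, and nonlinearity coefficient are assumptions made for the example.

```python
# Sketch: a quadratic microphone nonlinearity demodulates near-ultrasonic
# tones into the audible band. All constants are illustrative assumptions.
import numpy as np

fs = 96_000                          # sample rate high enough for the tones
t = np.arange(fs) / fs               # one second
f1, f2 = 20_000.0, 21_000.0          # two near-ultrasonic tones

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Idealized mic: linear term plus a small second-order term. The x**2 term
# creates intermodulation products at f2 - f1 (1 kHz, audible) and f1 + f2.
alpha = 0.1
y = x + alpha * x**2

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
audible = (freqs > 100) & (freqs < 5_000)
peak_hz = freqs[audible][np.argmax(spectrum[audible])]
print(f"strongest audible component: {peak_hz:.0f} Hz")  # ~1000 Hz difference tone
```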
arXiv Detail & Related papers (2024-04-07T00:49:19Z)
- A Practical Survey on Emerging Threats from AI-driven Voice Attacks: How Vulnerable are Commercial Voice Control Systems? [13.115517847161428]
AI-driven audio attacks have revealed new security vulnerabilities in voice control systems.
Our study endeavors to assess the resilience of commercial voice control systems against a spectrum of malicious audio attacks.
Our results suggest that commercial voice control systems exhibit enhanced resistance to existing threats.
arXiv Detail & Related papers (2023-12-10T21:51:13Z)
- Acoustic Cybersecurity: Exploiting Voice-Activated Systems [0.0]
Our research demonstrates the feasibility of these attacks across platforms such as Amazon's Alexa, Android, iOS, and Cortana.
We quantitatively show that attack success rates hover around 60%, with the ability to activate devices remotely from over 100 feet away.
These attacks threaten critical infrastructure, emphasizing the need for multifaceted defensive strategies.
arXiv Detail & Related papers (2023-11-23T02:26:11Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems [44.305353565981015]
This paper considers several state-of-the-art adversarial attacks on a deep speaker recognition system, employing strong defense methods as countermeasures.
Experiments show that speaker recognition systems are vulnerable to adversarial attacks, and the strongest attacks can reduce the system's accuracy from 94% to as low as 0%.
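One common defense of the kind this entry evaluates is adversarial training; a minimal sketch, assuming a toy speaker classifier and made-up hyperparameters, looks like this:

```python
# Sketch: one step of adversarial training against FGSM-perturbed
# utterances. Model, batch, and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, waveforms, speakers, epsilon=1e-3):
    # Craft worst-case inputs on the fly...
    x = waveforms.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), speakers).backward()
    x_adv = (x + epsilon * x.grad.sign()).detach()

    # ...then fit the model to the perturbed batch instead of the clean one.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), speakers)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy stand-in: 1-second utterances at 16 kHz, 20 enrolled speakers.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 20))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
batch, labels = torch.randn(8, 16000), torch.randint(0, 20, (8,))
print(adversarial_training_step(model, opt, batch, labels))
```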
arXiv Detail & Related papers (2020-08-18T00:58:19Z)
- SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems [28.635467696564703]
We show that the end-to-end architecture of speech and speaker systems makes attacks and defenses against them substantially different from those in the image space.
We then demonstrate experimentally that attacks against these models almost universally fail to transfer.
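The transferability claim can be checked with a simple harness: craft adversarial examples against a source model and measure how often they fool an independently trained one. The two models below are toy stand-ins, not the systems studied in this paper.

```python
# Sketch: measuring attack transferability between two toy models.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=1e-3):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_rate(src, dst, xs, ys, eps=1e-3):
    """Fraction of examples crafted against `src` that also fool `dst`."""
    x_adv = fgsm(src, xs, ys, eps)
    return (dst(x_adv).argmax(dim=1) != ys).float().mean().item()

model_a = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 10))
model_b = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 10))
xs, ys = torch.randn(32, 16000), torch.randint(0, 10, (32,))
print(f"transfer rate: {transfer_rate(model_a, model_b, xs, ys):.2f}")
```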
arXiv Detail & Related papers (2020-07-13T18:52:25Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by injecting physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
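A minimal sketch of the timing idea, assuming a stub policy and a simple additive noise model (neither is from this paper): rank frames by how strongly the policy prefers one action, then spend a limited jamming budget only on the most decision-critical frames.

```python
# Sketch: strategically-timed jamming. Policy and noise model are stubs.
import numpy as np

rng = np.random.default_rng(0)

def action_preference(probs):
    # How much the agent "cares" about this frame: gap between the most
    # and least preferred action under the current policy.
    return probs.max() - probs.min()

def strategically_timed_attack(frames, policy, budget, noise_scale=0.1):
    prefs = [action_preference(policy(f)) for f in frames]
    chosen = np.argsort(prefs)[-budget:]          # most decision-critical frames
    attacked = [f.copy() for f in frames]
    for i in chosen:
        attacked[i] += noise_scale * rng.normal(size=frames[i].shape)
    return attacked, chosen

# Stub policy: observation -> softmax over 4 actions.
W = rng.normal(size=(16, 4))
def policy(obs):
    z = obs @ W
    e = np.exp(z - z.max())
    return e / e.sum()

frames = [rng.normal(size=16) for _ in range(100)]
attacked, when = strategically_timed_attack(frames, policy, budget=10)
```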
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.