SoK: A Modularized Approach to Study the Security of Automatic Speech
Recognition Systems
- URL: http://arxiv.org/abs/2103.10651v1
- Date: Fri, 19 Mar 2021 06:24:04 GMT
- Title: SoK: A Modularized Approach to Study the Security of Automatic Speech
Recognition Systems
- Authors: Yuxuan Chen, Jiangshan Zhang, Xuejing Yuan, Shengzhi Zhang, Kai Chen,
Xiaofeng Wang and Shanqing Guo
- Abstract summary: We present our systematization of knowledge for ASR security and provide a comprehensive taxonomy for existing work based on a modularized workflow.
We align the research in this domain with that on security in Image Recognition System (IRS), which has been extensively studied.
Their similarities allow us to systematically study existing literature in ASR security based on the spectrum of attacks and defense solutions proposed for IRS.
In contrast, their differences, especially the complexity of ASR compared with IRS, help us learn unique challenges and opportunities in ASR security.
- Score: 13.553395767144284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the wide use of Automatic Speech Recognition (ASR) in applications such
as human-machine interaction, simultaneous interpretation, audio transcription,
etc., its security protection becomes increasingly important. Although recent
studies have brought to light the weaknesses of popular ASR systems that enable
out-of-band signal attacks, adversarial attacks, etc., and further proposed
various remedies (signal smoothing, adversarial training, etc.), a systematic
understanding of ASR security (both attacks and defenses) is still missing,
especially on how realistic such threats are and how general existing
protection could be. In this paper, we present our systematization of knowledge
for ASR security and provide a comprehensive taxonomy for existing work based
on a modularized workflow. More importantly, we align the research in this
domain with that on security in Image Recognition System (IRS), which has been
extensively studied, using the domain knowledge in the latter to help
understand where we stand in the former. Generally, both IRS and ASR are
perceptual systems. Their similarities allow us to systematically study
existing literature in ASR security based on the spectrum of attacks and
defense solutions proposed for IRS, and pinpoint the directions of more
advanced attacks and the directions potentially leading to more effective
protection in ASR. In contrast, their differences, especially the complexity of
ASR compared with IRS, help us learn unique challenges and opportunities in ASR
security. In particular, our experimental study shows that transfer learning
across ASR models is feasible, even in the absence of knowledge about the
models (even their types) and their training data.
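To make the transferability point concrete, the sketch below shows one common way such a cross-model experiment can be set up: craft an adversarial perturbation against a white-box surrogate ASR model using a projected-gradient-style loop on a CTC loss, then replay the resulting audio against an unrelated target model. This is a minimal illustration, not the authors' code; the loaders load_surrogate_asr and load_target_asr and the transcribe helper are hypothetical placeholders, and the tensor shapes and hyperparameters are illustrative assumptions.

    # Hypothetical sketch (not from the paper): probe whether adversarial audio
    # crafted against one ASR model still fools a different, unseen model.
    import torch
    import torch.nn.functional as F

    def pgd_adversarial_audio(model, waveform, target_tokens,
                              eps=0.002, alpha=2e-4, steps=100):
        """Craft a small additive perturbation pushing a CTC-based surrogate
        toward emitting target_tokens (an attacker-chosen transcript)."""
        delta = torch.zeros_like(waveform, requires_grad=True)
        ctc = torch.nn.CTCLoss(blank=0)
        for _ in range(steps):
            logits = model(waveform + delta)        # assumed shape: (time, batch, vocab)
            log_probs = F.log_softmax(logits, dim=-1)
            loss = ctc(log_probs,
                       target_tokens.unsqueeze(0),
                       torch.tensor([log_probs.size(0)]),
                       torch.tensor([target_tokens.size(0)]))
            loss.backward()
            with torch.no_grad():
                delta -= alpha * delta.grad.sign()  # step toward the target text
                delta.clamp_(-eps, eps)             # keep the perturbation small
            delta.grad.zero_()
        return (waveform + delta).detach()

    # Transferability check: white-box surrogate vs. black-box victim of a
    # different architecture (both loaders are placeholders, not real APIs).
    # surrogate = load_surrogate_asr()
    # victim = load_target_asr()
    # adv = pgd_adversarial_audio(surrogate, waveform, target_tokens)
    # print(transcribe(victim, adv))  # does the attacker-chosen text survive?

If the victim model's transcription still contains the attacker-chosen text, the example has transferred; the abstract's finding is that this can happen even when the surrogate and victim differ in architecture and training data.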
Related papers
- In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models [97.82118821263825]
Text-to-image (T2I) models have shown remarkable progress, but their potential to generate harmful content remains a critical concern in the ML community.
We propose ICER, a novel red-teaming framework that generates interpretable and semantically meaningful problematic prompts.
Our work provides crucial insights for developing more robust safety mechanisms in T2I systems.
arXiv Detail & Related papers (2024-11-25T04:17:24Z)
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Hierarchical Framework for Interpretable and Probabilistic Model-Based
Safe Reinforcement Learning [1.3678669691302048]
This paper proposes a novel approach for the use of deep reinforcement learning in safety-critical systems.
It combines the advantages of probabilistic modeling and reinforcement learning with the added benefits of interpretability.
arXiv Detail & Related papers (2023-10-28T20:30:57Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Liability regimes in the age of AI: a use-case driven analysis of the
burden of proof [1.7510020208193926]
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better.
But there are growing concerns about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights.
This paper presents three case studies, as well as the methodology to reach them, that illustrate these difficulties.
arXiv Detail & Related papers (2022-11-03T13:55:36Z)
- Watch What You Pretrain For: Targeted, Transferable Adversarial Examples
on Self-Supervised Speech Recognition models [27.414693266500603]
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition system to output attacker-chosen text.
Recent work has shown that transferability against large ASR models is very difficult.
We show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability.
arXiv Detail & Related papers (2022-09-17T15:01:26Z)
- Synergistic Redundancy: Towards Verifiable Safety for Autonomous
Vehicles [10.277825331268179]
We propose Synergistic Redundancy (SR), a safety architecture for complex cyber-physical systems such as Autonomous Vehicles (AVs).
SR provides verifiable safety guarantees against specific faults by decoupling the mission and safety tasks of the system.
Close coordination with the mission layer allows easier and early detection of safety critical faults in the system.
arXiv Detail & Related papers (2022-09-04T23:52:03Z)
- Adversarial Attacks on ASR Systems: An Overview [21.042752545701976]
In the past few years, there have been many works on adversarial example attacks against ASR systems.
In this paper, we describe the development of ASR systems, different attack assumptions, and how these attacks are evaluated.
arXiv Detail & Related papers (2022-08-03T06:46:42Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.