SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems
- URL: http://arxiv.org/abs/2103.10651v1
- Date: Fri, 19 Mar 2021 06:24:04 GMT
- Authors: Yuxuan Chen, Jiangshan Zhang, Xuejing Yuan, Shengzhi Zhang, Kai Chen,
Xiaofeng Wang and Shanqing Guo
- Abstract summary: We present our systematization of knowledge for ASR security and provide a comprehensive taxonomy for existing work based on a modularized workflow.
We align the research in this domain with that on security in Image Recognition System (IRS), which has been extensively studied.
Their similarities allow us to systematically study existing literature in ASR security based on the spectrum of attacks and defense solutions proposed for IRS.
In contrast, their differences, especially the complexity of ASR compared with IRS, help us learn unique challenges and opportunities in ASR security.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the wide use of Automatic Speech Recognition (ASR) in applications such
as human-machine interaction, simultaneous interpretation, audio transcription,
etc., its security protection becomes increasingly important. Although recent
studies have brought to light the weaknesses of popular ASR systems that enable
out-of-band signal attack, adversarial attack, etc., and further proposed
various remedies (signal smoothing, adversarial training, etc.), a systematic
understanding of ASR security (both attacks and defenses) is still missing,
especially on how realistic such threats are and how general existing
protection could be. In this paper, we present our systematization of knowledge
for ASR security and provide a comprehensive taxonomy for existing work based
on a modularized workflow. More importantly, we align the research in this
domain with that on security in Image Recognition System (IRS), which has been
extensively studied, using the domain knowledge in the latter to help
understand where we stand in the former. Generally, both IRS and ASR are
perceptual systems. Their similarities allow us to systematically study
existing literature in ASR security based on the spectrum of attacks and
defense solutions proposed for IRS, and pinpoint the directions of more
advanced attacks and the directions potentially leading to more effective
protection in ASR. In contrast, their differences, especially the complexity of
ASR compared with IRS, help us learn unique challenges and opportunities in ASR
security. Particularly, our experimental study shows that transfer learning
across ASR models is feasible, even in the absence of knowledge about models
(even their types) and training data.
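One of the defenses the abstract mentions, signal smoothing, can be illustrated with a minimal sketch: a moving-average filter acts as a crude low-pass filter, attenuating high-frequency adversarial perturbations while largely preserving the slow-varying speech envelope. The function name, filter width, and toy signals below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def smooth_waveform(x: np.ndarray, k: int = 9) -> np.ndarray:
    """Apply a k-sample moving-average filter to a 1-D waveform.

    Averaging over a short window suppresses components that oscillate
    faster than the window length, which is how signal smoothing can
    blunt high-frequency adversarial perturbations.
    """
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

# Toy demonstration: a slow "speech-like" component plus a
# fast "perturbation-like" component.
t = np.linspace(0, 1, 8000)
speech = np.sin(2 * np.pi * 5 * t)            # slow component
perturb = 0.1 * np.sin(2 * np.pi * 2000 * t)  # fast component

smoothed = smooth_waveform(speech + perturb, k=9)

# Mean squared deviation from the clean signal shrinks after smoothing,
# because the fast component is heavily attenuated.
err_before = np.mean(((speech + perturb) - speech) ** 2)
err_after = np.mean((smoothed - speech) ** 2)
print(err_after < err_before)  # → True
```

This toy filter is lossy (it also distorts legitimate high-frequency speech content), which mirrors the general trade-off of smoothing-based defenses between robustness and recognition accuracy.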
Related papers
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Hierarchical Framework for Interpretable and Probabilistic Model-Based Safe Reinforcement Learning [1.3678669691302048]
This paper proposes a novel approach for the use of deep reinforcement learning in safety-critical systems.
It combines the advantages of probabilistic modeling and reinforcement learning with the added benefits of interpretability.
arXiv Detail & Related papers (2023-10-28T20:30:57Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Liability regimes in the age of AI: a use-case driven analysis of the burden of proof [1.7510020208193926]
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better.
But there are growing concerns about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights.
This paper presents three case studies, as well as the methodology to reach them, that illustrate these difficulties.
arXiv Detail & Related papers (2022-11-03T13:55:36Z)
- Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models [27.414693266500603]
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition system to output attacker-chosen text.
Recent work has shown that transferability against large ASR models is very difficult.
We show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability.
arXiv Detail & Related papers (2022-09-17T15:01:26Z)
- Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles [10.277825331268179]
We propose Synergistic Redundancy (SR), a safety architecture for complex cyber-physical systems such as Autonomous Vehicles (AVs).
SR provides verifiable safety guarantees against specific faults by decoupling the mission and safety tasks of the system.
Close coordination with the mission layer allows easier and early detection of safety critical faults in the system.
arXiv Detail & Related papers (2022-09-04T23:52:03Z)
- Adversarial Attacks on ASR Systems: An Overview [21.042752545701976]
In the past few years, there has been much work on adversarial example attacks against ASR systems.
In this paper, we describe the development of ASR systems, different attack assumptions, and how to evaluate these attacks.
arXiv Detail & Related papers (2022-08-03T06:46:42Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.