On the Exploitability of Audio Machine Learning Pipelines to
Surreptitious Adversarial Examples
- URL: http://arxiv.org/abs/2108.02010v1
- Date: Tue, 3 Aug 2021 16:21:08 GMT
- Title: On the Exploitability of Audio Machine Learning Pipelines to
Surreptitious Adversarial Examples
- Authors: Adelin Travers, Lorna Licollari, Guanghan Wang, Varun Chandrasekaran,
Adam Dziedzic, David Lie, Nicolas Papernot
- Abstract summary: We introduce surreptitious adversarial examples, a new class of attacks that evades both human and pipeline controls.
We show that this attack produces audio samples that are more surreptitious than previous attacks that aim solely for imperceptibility.
- Score: 19.433014444284595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) models are known to be vulnerable to adversarial
examples. Applications of ML to voice biometrics authentication are no
exception. Yet, the implications of audio adversarial examples on these
real-world systems remain poorly understood given that most research targets
limited defenders who can only listen to the audio samples. Conflating
detectability of an attack with human perceptibility, research has focused on
methods that aim to produce imperceptible adversarial examples which humans
cannot distinguish from the corresponding benign samples. We argue that this
perspective is coarse for two reasons: 1. Imperceptibility is impossible to
verify; it would require an experimental process that encompasses variations in
listener training, equipment, volume, ear sensitivity, types of background
noise, etc., and 2. It disregards pipeline-based detection clues that realistic
defenders leverage. This results in adversarial examples that are ineffective
in the presence of knowledgeable defenders. Thus, an adversary only needs an
audio sample to be plausible to a human. We therefore introduce surreptitious
adversarial examples, a new class of attacks that evades both human and
pipeline controls. In the white-box setting, we instantiate this class with a
joint, multi-stage optimization attack. Using an Amazon Mechanical Turk user
study, we show that this attack produces audio samples that are more
surreptitious than previous attacks that aim solely for imperceptibility.
Lastly, we show that surreptitious adversarial examples are challenging to
develop in the black-box setting.
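The abstract names a joint, multi-stage optimization attack in the white-box setting but gives no implementation details. The sketch below is only an illustrative, single-stage approximation under assumed components: a differentiable PyTorch audio classifier `model`, a hypothetical `target_label`, and a weight `lam` that trades off the adversarial objective against waveform distortion, standing in here for the human-plausibility and pipeline constraints described in the paper.

```python
# Illustrative sketch only -- NOT the authors' attack. It shows the generic
# gradient-based core of a white-box audio adversarial example: a loss that
# jointly penalizes (1) failure to reach the target label and (2) deviation
# from the benign waveform. `model`, `target_label`, and `lam` are assumptions.
import torch
import torch.nn.functional as F

def joint_audio_attack(model, waveform, target_label,
                       steps=500, step_size=1e-3, lam=0.1):
    """Craft an adversarial waveform that the model classifies as `target_label`.

    model        -- differentiable audio classifier (e.g., a speaker-ID network)
    waveform     -- 1-D float tensor holding the benign audio sample in [-1, 1]
    target_label -- integer class the adversary wants the model to output
    lam          -- weight trading off attack success against distortion
    """
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=step_size)
    target = torch.tensor([target_label])

    for _ in range(steps):
        adv = torch.clamp(waveform + delta, -1.0, 1.0)   # keep a valid audio range
        logits = model(adv.unsqueeze(0))                 # batch of size 1
        adv_loss = F.cross_entropy(logits, target)       # attack objective
        distortion = torch.norm(delta, p=2)              # crude plausibility proxy
        loss = adv_loss + lam * distortion               # joint objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(waveform + delta.detach(), -1.0, 1.0)
```

The paper's actual attack additionally evades pipeline-level controls across multiple optimization stages; this sketch only captures the generic gradient-based core.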
Related papers
- Among Us: Adversarially Robust Collaborative Perception by Consensus [50.73128191202585]
Multiple robots can collaboratively perceive a scene (e.g., detect objects) better than individual robots can.
We propose ROBOSAC, a novel sampling-based defense strategy generalizable to unseen attackers.
We validate our method on the task of collaborative 3D object detection in autonomous driving scenarios.
arXiv Detail & Related papers (2023-03-16T17:15:25Z) - Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual
Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) that makes it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z) - Rethinking Textual Adversarial Defense for Pre-trained Language Models [79.18455635071817]
A literature review shows that pre-trained language models (PrLMs) are vulnerable to adversarial attacks.
We propose a novel metric (Degree of Anomaly) to enable current adversarial attack approaches to generate more natural and imperceptible adversarial examples.
We show that our universal defense framework achieves comparable or even higher after-attack accuracy than other, attack-specific defenses.
arXiv Detail & Related papers (2022-07-21T07:51:45Z) - Tubes Among Us: Analog Attack on Automatic Speaker Identification [37.42266692664095]
We show that a human is capable of producing analog adversarial examples directly with little cost and supervision.
Our findings extend to a range of other acoustic-biometric tasks such as liveness detection, bringing into question their use in security-critical settings in real life.
arXiv Detail & Related papers (2022-02-06T10:33:13Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z) - Removing Adversarial Noise in Class Activation Feature Space [160.78488162713498]
We propose to remove adversarial noise by implementing a self-supervised adversarial training mechanism in a class activation feature space.
We train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
Empirical evaluations demonstrate that our method could significantly enhance adversarial robustness in comparison to previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-04-19T10:42:24Z) - Dompteur: Taming Audio Adversarial Examples [28.54699912239861]
Adversarial examples allow attackers to arbitrarily manipulate machine learning systems.
In this paper we propose a different perspective: We accept the presence of adversarial examples against ASR systems, but we require them to be perceivable by human listeners.
By applying the principles of psychoacoustics, we can remove semantically irrelevant information from the ASR input and train a model that resembles human perception more closely.
arXiv Detail & Related papers (2021-02-10T13:53:32Z) - On the human evaluation of audio adversarial examples [1.7006003864727404]
Adversarial examples are inputs intentionally perturbed to produce a wrong prediction without being noticed.
High fooling rates of proposed adversarial perturbation strategies are only valuable if the perturbations are not detectable.
We demonstrate that the metrics employed by convention are not a reliable measure of the perceptual similarity of adversarial examples in the audio domain.
arXiv Detail & Related papers (2020-01-23T10:56:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.