Beyond Silence: Bias Analysis through Loss and Asymmetric Approach in Audio Anti-Spoofing
- URL: http://arxiv.org/abs/2406.17246v2
- Date: Mon, 26 Aug 2024 14:56:06 GMT
- Title: Beyond Silence: Bias Analysis through Loss and Asymmetric Approach in Audio Anti-Spoofing
- Authors: Hye-jin Shim, Md Sahidullah, Jee-weon Jung, Shinji Watanabe, Tomi Kinnunen
- Abstract summary: Current trends in anti-spoofing detection research strive to improve models' ability to generalize across unseen attacks.
Recent studies have noted that the distribution of silence differs between the two classes, which can serve as a shortcut.
We employ loss analysis and asymmetric methodologies to move away from traditional attack-focused and result-oriented evaluations.
- Score: 53.325039475118814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current trends in audio anti-spoofing detection research strive to improve models' ability to generalize across unseen attacks by learning to identify a variety of spoofing artifacts. This emphasis has primarily focused on the spoof class. Recently, several studies have noted that the distribution of silence differs between the two classes, which can serve as a shortcut. In this paper, we extend class-wise interpretations beyond silence. We employ loss analysis and asymmetric methodologies to move away from traditional attack-focused and result-oriented evaluations towards a deeper examination of model behaviors. Our investigations highlight the significant differences in training dynamics between the two classes, emphasizing the need for future research to focus on robust modeling of the bonafide class.
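As a rough illustration of the class-wise loss analysis described above, the sketch below tracks training loss separately for the bonafide and spoof classes to expose asymmetric training dynamics. The model, feature dimensions, and labels are placeholder assumptions, not the authors' actual setup.

```python
# Hypothetical sketch: per-class loss tracking for an anti-spoofing
# classifier (0 = bonafide, 1 = spoof). Model and data are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(256, 64)            # stand-in acoustic features
labels = torch.randint(0, 2, (256,))       # stand-in class labels

for epoch in range(5):
    optimizer.zero_grad()
    losses = criterion(model(features), labels)
    # Average the loss separately over each class to expose
    # asymmetric dynamics between bonafide and spoof examples.
    bona_loss = losses[labels == 0].mean().item()
    spoof_loss = losses[labels == 1].mean().item()
    losses.mean().backward()
    optimizer.step()
    print(f"epoch {epoch}: bonafide={bona_loss:.4f} spoof={spoof_loss:.4f}")
```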
Related papers
- Evasion Attacks Against Bayesian Predictive Models [1.8570591025615457]
This paper introduces a general methodology for designing optimal evasion attacks against such models. We investigate two adversarial objectives: perturbing specific point predictions and altering the entire posterior predictive distribution. For both scenarios, we propose novel gradient-based attacks and study their implementation and properties in various computational setups.
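The attacks themselves are not detailed in the summary; as a rough illustration of a gradient-based evasion of a point prediction, an FGSM-style perturbation looks like the following (the model and epsilon are assumptions, and the paper's attacks on Bayesian predictive models are more involved):

```python
# Illustrative FGSM-style evasion of a point prediction; placeholder model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 2))    # stand-in predictive model
x = torch.randn(1, 10, requires_grad=True)
target = torch.tensor([0])

loss = nn.functional.cross_entropy(model(x), target)
loss.backward()
epsilon = 0.1                              # assumed perturbation budget
x_adv = x + epsilon * x.grad.sign()        # step that increases the loss
```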
arXiv Detail & Related papers (2025-06-11T11:53:20Z)
- Sustainable Self-evolution Adversarial Training [51.25767996364584]
We propose a Sustainable Self-Evolution Adversarial Training (SSEAT) framework for adversarial training defense models.
We introduce a continual adversarial defense pipeline to realize learning from various kinds of adversarial examples.
We also propose an adversarial data replay module to better select more diverse and key relearning data.
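A hypothetical sketch of what an adversarial data replay module could look like: a buffer that retains the hardest (highest-loss) adversarial examples for relearning. The selection rule here is an assumption, not the SSEAT module itself.

```python
# Hypothetical replay buffer keeping the highest-loss adversarial examples.
import heapq

class AdversarialReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []            # min-heap keyed by loss
        self._counter = 0          # tie-breaker for equal losses

    def add(self, loss: float, example):
        self._counter += 1
        item = (loss, self._counter, example)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif loss > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)  # evict the easiest example

    def sample_all(self):
        return [example for _, _, example in self._heap]

buf = AdversarialReplayBuffer(capacity=2)
for i, loss in enumerate([0.3, 1.2, 0.7]):
    buf.add(loss, f"adv_example_{i}")
print(buf.sample_all())  # keeps the two highest-loss examples
```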
arXiv Detail & Related papers (2024-12-03T08:41:11Z)
- Examining Changes in Internal Representations of Continual Learning Models Through Tensor Decomposition [5.01338577379149]
Continual learning (CL) has spurred the development of several methods aimed at consolidating previous knowledge across sequential learning.
We propose a novel representation-based evaluation framework for CL models.
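The summary does not specify the decomposition used; as one hedged illustration, internal representations before and after a task can be compared by the overlap of their dominant SVD subspaces (activations here are random stand-ins):

```python
# Illustrative comparison of a layer's activations across two tasks via SVD;
# the paper's specific tensor decomposition and metrics are not reproduced.
import numpy as np

acts_task1 = np.random.randn(100, 32)   # stand-in activations (samples x units)
acts_task2 = np.random.randn(100, 32)

def top_subspace(acts, k=5):
    # Right singular vectors span the dominant directions of variation.
    _, _, vt = np.linalg.svd(acts - acts.mean(0), full_matrices=False)
    return vt[:k]

u1, u2 = top_subspace(acts_task1), top_subspace(acts_task2)
# Principal-angle-based overlap between the two subspaces (1 = identical).
overlap = np.linalg.norm(u1 @ u2.T) ** 2 / u1.shape[0]
print(f"subspace overlap: {overlap:.3f}")
```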
arXiv Detail & Related papers (2024-05-06T07:52:44Z)
- Singular Regularization with Information Bottleneck Improves Model's Adversarial Robustness [30.361227245739745]
Adversarial examples are one of the most severe threats to deep learning models.
We study adversarial information as unstructured noise, which does not have a clear pattern.
We propose a new module to regularize adversarial information and combine information bottleneck theory.
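As a generic sketch of how an information-bottleneck penalty is typically combined with a task loss (a variational KL term on a stochastic latent), under assumed architecture and weighting, not the paper's module:

```python
# Generic variational information-bottleneck penalty: the encoder emits a
# Gaussian posterior over the latent; its KL to a standard normal is added
# to the task loss. Architecture and beta are assumptions.
import torch
import torch.nn as nn

encoder = nn.Linear(64, 2 * 16)            # outputs mean and log-variance
classifier = nn.Linear(16, 2)
beta = 1e-3                                # assumed bottleneck weight

x = torch.randn(8, 64)
y = torch.randint(0, 2, (8,))

mu, logvar = encoder(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
loss = nn.functional.cross_entropy(classifier(z), y) + beta * kl
```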
arXiv Detail & Related papers (2023-12-04T09:07:30Z)
- Characterizing the temporal dynamics of universal speech representations for generalizable deepfake detection [14.449940985934388]
Existing deepfake speech detection systems lack generalizability to unseen attacks.
Recent studies have explored the use of universal speech representations to tackle this issue.
We argue that characterizing the long-term temporal dynamics of these representations is crucial for generalizability.
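How the temporal dynamics are characterized is not specified here; one hedged illustration is a lag-wise autocorrelation profile of frame-level embeddings, which summarizes long-term structure (the representation below is a random stand-in for universal speech features):

```python
# Illustrative long-term temporal statistic: lag-k autocorrelation of each
# embedding dimension across frames.
import numpy as np

frames = np.random.randn(400, 768)   # (time, dim) stand-in representation

def lag_autocorr(x, lag):
    x = x - x.mean(0)
    num = (x[:-lag] * x[lag:]).sum(0)
    den = (x * x).sum(0) + 1e-8
    return num / den

dynamics = np.stack([lag_autocorr(frames, lag) for lag in (1, 5, 25)])
print(dynamics.shape)  # (3, 768): one temporal profile per dimension
```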
arXiv Detail & Related papers (2023-09-15T01:37:45Z)
- Comparative Evaluation of Recent Universal Adversarial Perturbations in Image Classification [27.367498200911285]
The vulnerability of Convolutional Neural Networks (CNNs) to adversarial samples has recently garnered significant attention in the machine learning community.
Recent studies have unveiled the existence of universal adversarial perturbations (UAPs) that are image-agnostic and highly transferable across different CNN models.
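What makes a UAP "image-agnostic" is that a single fixed perturbation is added to every input; a minimal sketch of the application step (the perturbation here is random, not a learned UAP, and the budget is an assumption):

```python
# Minimal illustration of applying a universal adversarial perturbation:
# one fixed delta, bounded in L-infinity norm, added to every image.
import torch

epsilon = 8 / 255                            # assumed L-infinity budget
uap = torch.empty(3, 32, 32).uniform_(-epsilon, epsilon)

images = torch.rand(16, 3, 32, 32)           # stand-in image batch in [0, 1]
adv_images = (images + uap).clamp(0.0, 1.0)  # same delta for every image
```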
arXiv Detail & Related papers (2023-06-20T03:29:05Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
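A hypothetical sketch of the detection step: a binary classifier trained to separate clean inputs from crafted (perturbed) ones. Data and architecture below are placeholders, not the paper's detector.

```python
# Hypothetical attack detector trained on clean vs. crafted inputs.
import torch
import torch.nn as nn

clean = torch.randn(128, 20)
crafted = clean + 0.3 * torch.randn(128, 20)   # stand-in perturbed inputs
inputs = torch.cat([clean, crafted])
targets = torch.cat([torch.zeros(128), torch.ones(128)]).long()

detector = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-2)
for _ in range(50):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(detector(inputs), targets)
    loss.backward()
    optimizer.step()
```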
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
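A rough sketch of self-similarity-based filtering: flag support samples whose average cosine similarity to the rest of their class is low. The threshold and the random features are assumptions, not the paper's method.

```python
# Flag support embeddings with low mean similarity to their class peers.
import torch
import torch.nn.functional as F

support = torch.randn(10, 64)              # stand-in class support embeddings
sims = F.cosine_similarity(support.unsqueeze(1), support.unsqueeze(0), dim=-1)
sims.fill_diagonal_(0.0)                   # ignore self-similarity
self_similarity = sims.sum(1) / (len(support) - 1)
suspect = self_similarity < 0.2            # assumed detection threshold
print(suspect)
```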
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z)
- Towards Defending against Adversarial Examples via Attack-Invariant Features [147.85346057241605]
Deep neural networks (DNNs) are vulnerable to adversarial noise.
Adversarial robustness can be improved by exploiting adversarial examples.
Models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
arXiv Detail & Related papers (2021-06-09T12:49:54Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
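As a hedged illustration of what a prototype-centered contrastive term can look like, prototype-to-query similarities are scored with cross-entropy so each query is attracted to its class prototype. Shapes and temperature are assumptions, not PAL's exact objective.

```python
# Illustrative prototype-centered contrastive loss over prototype-query
# similarities; shapes and temperature are assumed.
import torch
import torch.nn.functional as F

prototypes = torch.randn(5, 64)            # one prototype per class
queries = torch.randn(25, 64)              # 5 queries per class
query_labels = torch.arange(5).repeat_interleave(5)

# Similarity of every prototype to every query:
logits = prototypes @ queries.t() / 0.1    # (num_classes, num_queries)
# Each query column should match its class's prototype row, so transpose
# and apply standard cross-entropy over classes.
loss = F.cross_entropy(logits.t(), query_labels)
```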
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models [3.9962751777898955]
Deep learning algorithms have been recently targeted by attackers due to their vulnerability.
Non-continuous deep models are still not robust against adversarial attacks.
We propose a novel objective/loss function that enforces the features to lie within a specified margin, facilitating their prediction.
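In the spirit of the objective described above, a generic margin-enforcing feature loss pulls same-class pairs together and pushes different-class pairs beyond a margin. The margin value, pairing scheme, and use of squared distances are assumptions, not the paper's exact formulation.

```python
# Generic margin-enforcing contrastive loss over feature pairs.
import torch

def marginal_contrastive_loss(feats, labels, margin=1.0):
    # Pairwise squared Euclidean distances (no sqrt, so gradients stay finite).
    diffs = feats.unsqueeze(1) - feats.unsqueeze(0)
    sq_dists = (diffs ** 2).sum(-1)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(len(feats), dtype=torch.bool)
    pull = sq_dists[same & off_diag].mean()             # same class: stay close
    push = torch.relu(margin - sq_dists[~same]).mean()  # others: exceed margin
    return pull + push

feats = torch.randn(32, 16, requires_grad=True)
labels = torch.randint(0, 2, (32,))
loss = marginal_contrastive_loss(feats, labels)
loss.backward()
```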
arXiv Detail & Related papers (2020-12-08T20:51:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.