Impact of Spatial Frequency Based Constraints on Adversarial Robustness
- URL: http://arxiv.org/abs/2104.12679v3
- Date: Thu, 16 Nov 2023 14:13:12 GMT
- Title: Impact of Spatial Frequency Based Constraints on Adversarial Robustness
- Authors: Rémi Bernhard, Pierre-Alain Moellic, Martial Mermillod, Yannick Bourrier, Romain Cohendet, Miguel Solinas, Marina Reyboz
- Abstract summary: Adversarial examples mainly exploit changes to input pixels to which humans are not sensitive, and arise from the fact that models make decisions based on uninterpretable features.
In this paper, we investigate the robustness to adversarial perturbations of models enforced during training to leverage information corresponding to different spatial frequency ranges.
- Score: 0.49478969093606673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples mainly exploit changes to input pixels to which humans
are not sensitive, and arise from the fact that models make decisions based
on uninterpretable features. Interestingly, cognitive science reports that the
process of interpretability for human classification decisions relies
predominantly on low spatial frequency components. In this paper, we
investigate the robustness to adversarial perturbations of models enforced
during training to leverage information corresponding to different spatial
frequency ranges. We show that it is tightly linked to the spatial frequency
characteristics of the data at stake. Indeed, depending on the data set, the
same constraint may result in very different levels of robustness (up to a 0.41
difference in adversarial accuracy). To explain this phenomenon, we conduct
several experiments to shed light on influential factors such as the level of
sensitivity to high frequencies, and the transferability of adversarial
perturbations between original and low-pass filtered inputs.
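To make "low-pass filtered inputs" concrete, here is a minimal sketch of input filtering in the Fourier domain. It is an illustration under assumed choices (a hard circular mask, cutoff as a fraction of the Nyquist frequency), not the authors' implementation.

```python
import numpy as np

def low_pass_filter(image: np.ndarray, cutoff: float) -> np.ndarray:
    """Keep only spatial frequencies below `cutoff` (fraction of Nyquist).

    `image` is a 2-D grayscale array; colour images could be filtered
    per channel. A hard circular mask is used for simplicity; the paper's
    exact filter design may differ.
    """
    h, w = image.shape
    # Centre the zero-frequency component.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Build a radial mask: 1 inside the cutoff radius, 0 outside.
    ys, xs = np.ogrid[:h, :w]
    radius = np.hypot(ys - h / 2, xs - w / 2)
    mask = radius <= cutoff * min(h, w) / 2
    # Zero out high frequencies and transform back.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Example: a model trained on low_pass_filter(x, 0.25) only ever sees the
# lowest quarter of the spatial frequency range.
x = np.random.rand(32, 32)
x_low = low_pass_filter(x, 0.25)
```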
Related papers
- Evaluating ML Robustness in GNSS Interference Classification, Characterization & Localization [42.14439854721613]
Jamming devices present a significant threat by disrupting signals from the global navigation satellite system (GNSS).
The detection of anomalies within frequency snapshots is crucial to counteract these interferences effectively.
This paper introduces an extensive dataset capturing interferences within a large-scale environment including controlled multipath effects.
arXiv Detail & Related papers (2024-09-23T15:20:33Z)
- Towards a Novel Perspective on Adversarial Examples Driven by Frequency [7.846634028066389]
We propose a black-box adversarial attack algorithm based on combining different frequency bands.
Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency.
arXiv Detail & Related papers (2024-04-16T00:58:46Z)
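The band combination described in the entry above can be made concrete with a discrete cosine transform. The following sketch splits an image into the low- and high-frequency components such an attack would recombine; the triangular band boundary is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

def split_bands(image: np.ndarray, boundary: int):
    """Split a 2-D image into low- and high-frequency parts via the DCT.

    Coefficients whose row + column index falls below `boundary` form the
    low band; the rest form the high band (a simple triangular split,
    chosen purely for illustration).
    """
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    ys, xs = np.ogrid[:h, :w]
    low_mask = (ys + xs) < boundary
    low = idctn(coeffs * low_mask, norm="ortho")
    high = idctn(coeffs * ~low_mask, norm="ortho")
    return low, high

x = np.random.rand(32, 32)
low, high = split_bands(x, boundary=16)
assert np.allclose(low + high, x)  # the two bands reconstruct the input
```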
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple, generic and generalisable framework in which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z)
- A Novel Loss Function Utilizing Wasserstein Distance to Reduce Subject-Dependent Noise for Generalizable Models in Affective Computing [0.4818210066519976]
Emotions are an essential part of human behavior that can impact thinking, decision-making, and communication skills.
The ability to accurately monitor and identify emotions can be useful in many human-centered applications such as behavioral training, tracking emotional well-being, and development of human-computer interfaces.
arXiv Detail & Related papers (2023-08-17T01:15:26Z)
- Towards Building More Robust Models with Frequency Bias [8.510441741759758]
This paper presents a plug-and-play module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations.
Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework.
arXiv Detail & Related papers (2023-07-19T05:46:56Z)
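A module of the kind described in the entry above might, in its simplest form, rescale the low- and high-frequency components of a feature map separately. The sketch below uses fixed scalar gains as a stand-in for the paper's adaptive, learned reconfiguration; it is a rough illustration, not the proposed module.

```python
import numpy as np

def reweight_frequencies(feature: np.ndarray, low_gain: float,
                         high_gain: float, cutoff: float = 0.25) -> np.ndarray:
    """Scale low- and high-frequency parts of a (C, H, W) feature map.

    In the paper the reconfiguration is adaptive and learned; fixed
    scalar gains are used here purely for illustration.
    """
    c, h, w = feature.shape
    ys, xs = np.ogrid[:h, :w]
    radius = np.hypot(ys - h / 2, xs - w / 2)
    low_mask = radius <= cutoff * min(h, w) / 2
    out = np.empty_like(feature)
    for i in range(c):
        spec = np.fft.fftshift(np.fft.fft2(feature[i]))
        # Amplify or attenuate each band with its own gain.
        spec = spec * np.where(low_mask, low_gain, high_gain)
        out[i] = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    return out

feats = np.random.rand(8, 16, 16)
robustified = reweight_frequencies(feats, low_gain=1.0, high_gain=0.5)
```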
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Architectural Optimization and Feature Learning for High-Dimensional Time Series Datasets [0.7388859384645262]
We study the problem of predicting the presence of transient noise artifacts in a gravitational wave detector.
We introduce models that reduce the error rate by over 60% compared to the previous state of the art.
arXiv Detail & Related papers (2022-02-27T23:41:23Z)
- A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are confined neither to high-frequency nor to low-frequency components; their frequency content is simply dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy might, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
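The ingredients named in the entry above, per-example gradient clipping and noise addition, are the core of DP-SGD training. A minimal sketch of the aggregation step follows; the clipping norm and noise multiplier are placeholder values, not settings from the paper.

```python
import numpy as np

def private_gradient(per_example_grads: np.ndarray, clip_norm: float,
                     noise_multiplier: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Aggregate per-example gradients as in DP-SGD.

    Each row of `per_example_grads` is one example's flattened gradient.
    Rows are clipped to `clip_norm`, summed, and Gaussian noise scaled by
    `noise_multiplier * clip_norm` is added before averaging.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down any gradient whose norm exceeds the clipping threshold.
    clipped = per_example_grads * np.minimum(
        1.0, clip_norm / np.maximum(norms, 1e-12))
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(64, 10))  # 64 examples, 10 parameters
g = private_gradient(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```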
- Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z)
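The perturbation set in the last entry is of the form S(x) = {x + g(z, x) : ||z|| <= eps} for a conditional generator g. The sketch below fakes g with an untrained linear map (and drops the conditioning on x) purely to show the interface; the actual method trains the generator on real perturbation data.

```python
import numpy as np

class PerturbationSet:
    """Toy stand-in for a learned perturbation set S(x) = {x + g(z, x)}.

    `g` here is an untrained linear map of the latent code, ignoring the
    conditioning on x for brevity; in the paper it is a conditional
    generator trained so the set covers realistic corruptions.
    """

    def __init__(self, input_dim: int, latent_dim: int, eps: float,
                 rng: np.random.Generator):
        self.weights = rng.normal(scale=0.1, size=(latent_dim, input_dim))
        self.eps = eps
        self.rng = rng

    def sample(self, x: np.ndarray) -> np.ndarray:
        # Draw a latent code and project it into the epsilon ball.
        z = self.rng.normal(size=self.weights.shape[0])
        z *= min(1.0, self.eps / np.linalg.norm(z))
        return x + z @ self.weights  # one element of the learned set

rng = np.random.default_rng(0)
pset = PerturbationSet(input_dim=784, latent_dim=16, eps=1.0, rng=rng)
x = rng.normal(size=784)
x_perturbed = pset.sample(x)
```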
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.