A Frequency Perspective of Adversarial Robustness
- URL: http://arxiv.org/abs/2111.00861v1
- Date: Tue, 26 Oct 2021 19:12:34 GMT
- Title: A Frequency Perspective of Adversarial Robustness
- Authors: Shishira R Maiya, Max Ehrlich, Vatsal Agarwal, Ser-Nam Lim, Tom Goldstein, Abhinav Shrivastava
- Abstract summary: We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are neither in high-frequency nor in low-frequency components, but are simply dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
- Score: 72.48178241090149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples pose a unique challenge for deep learning systems.
Despite recent advances in both attacks and defenses, there is still a lack of
clarity and consensus in the community about the true nature and underlying
properties of adversarial examples. A deep understanding of these examples can
provide new insights towards the development of more effective attacks and
defenses. Driven by the common misconception that adversarial examples are
high-frequency noise, we present a frequency-based understanding of adversarial
examples, supported by theoretical and empirical findings. Our analysis shows
that adversarial examples are neither in high-frequency nor in low-frequency
components, but are simply dataset dependent. Particularly, we highlight the
glaring disparities between models trained on CIFAR-10 and ImageNet-derived
datasets. Utilizing this framework, we analyze many intriguing properties of
training robust models with frequency constraints, and propose a
frequency-based explanation for the commonly observed accuracy vs. robustness
trade-off.
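To make the frequency framing concrete, here is a minimal sketch (not the authors' code; `clean` and `adv` are hypothetical NumPy inputs) that measures how a perturbation's energy spreads across radial bands of the 2D DFT, the kind of diagnostic that underlies such dataset-dependence claims:

```python
import numpy as np

def radial_energy_profile(clean, adv, n_bands=8):
    """Share of perturbation energy in concentric DFT frequency bands.

    clean, adv: 2D grayscale images (H, W) as float arrays.
    Returns an array of length n_bands summing to 1 (low -> high frequency).
    """
    delta = adv - clean                                # adversarial perturbation
    spec = np.fft.fftshift(np.fft.fft2(delta))         # center the DC component
    energy = np.abs(spec) ** 2

    h, w = delta.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized distance from the spectrum center; corner frequencies
    # beyond the inscribed circle (r > 1) are ignored in this sketch.
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)

    edges = np.linspace(0.0, 1.0, n_bands + 1)
    profile = np.array([
        energy[(r >= lo) & (r < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return profile / profile.sum()

# Example with random stand-in images and noise spread across all bands.
rng = np.random.default_rng(0)
clean = rng.random((32, 32))
adv = clean + 0.03 * rng.standard_normal((32, 32))
print(radial_energy_profile(clean, adv))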
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Towards a Novel Perspective on Adversarial Examples Driven by Frequency [7.846634028066389]
We propose a black-box adversarial attack algorithm based on combining different frequency bands.
Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency.
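Frequency-driven attacks of this kind start from a band split of the input. A minimal sketch of such a split with an ideal low-pass mask in the DFT domain (the cutoff `radius` is an illustrative choice, not a value from the paper):

```python
import numpy as np

def split_frequency_bands(img, radius=0.25):
    """Split a 2D image into low- and high-frequency components.

    radius: low-pass cutoff as a fraction of the half-spectrum width.
    Returns (low, high) with low + high == img (up to float error).
    """
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    mask = dist <= radius                          # ideal low-pass mask

    low = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
    high = img - low                               # residual = high frequencies
    return low, high
```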
arXiv Detail & Related papers (2024-04-16T00:58:46Z)
- Robustness of Deep Neural Networks for Micro-Doppler Radar Classification [1.3654846342364308]
Two deep convolutional architectures, trained and tested on the same data, are evaluated.
Both models are shown to be susceptible to adversarial examples.
The cadence-velocity diagram representation, rather than Doppler-time, is demonstrated to be naturally more immune to adversarial examples.
arXiv Detail & Related papers (2024-02-21T09:37:17Z)
- A Training Rate and Survival Heuristic for Inference and Robustness Evaluation (TRASHFIRE) [1.622320874892682]
This work addresses the problem of understanding and predicting how particular model hyperparameters influence the performance of a model in the presence of an adversary.
The proposed approach uses survival models, worst-case examples, and a cost-aware analysis to precisely and accurately reject a particular model change.
Using the proposed methodology, we show that ResNet is hopelessly insecure against even the simplest white-box attacks.
arXiv Detail & Related papers (2024-01-24T19:12:37Z)
- Towards Building More Robust Models with Frequency Bias [8.510441741759758]
This paper presents a plug-and-play module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations.
Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework.
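As a rough illustration of such a module, the PyTorch sketch below splits a feature map into low- and high-frequency parts with an FFT mask and recombines them with learnable gains; the cutoff and the scalar weighting are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class FrequencyReweight(nn.Module):
    """Illustrative plug-and-play reweighting of feature frequencies."""

    def __init__(self, cutoff=0.25):
        super().__init__()
        self.cutoff = cutoff
        # Learnable scalar gains for the two bands (assumed design choice).
        self.low_gain = nn.Parameter(torch.ones(1))
        self.high_gain = nn.Parameter(torch.ones(1))

    def forward(self, x):                          # x: (N, C, H, W)
        spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        h, w = x.shape[-2:]
        yy = torch.arange(h, device=x.device).view(-1, 1) - h / 2
        xx = torch.arange(w, device=x.device).view(1, -1) - w / 2
        dist = torch.sqrt(yy ** 2 + xx ** 2) / (min(h, w) / 2)
        mask = (dist <= self.cutoff).to(spec.dtype)

        low = torch.fft.ifft2(
            torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
        high = x - low
        return self.low_gain * low + self.high_gain * high
```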
arXiv Detail & Related papers (2023-07-19T05:46:56Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations has been widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training has proven to be the most effective strategy that injects adversarial examples into model training.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining (LADDER).
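For context, standard adversarial training replaces clean minibatches with adversarially perturbed ones; the PGD-based sketch below shows that generic recipe (not the LADDER method itself; `model`, `optimizer`, and the budget values are hypothetical):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD: iteratively ascend the loss within an eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step on adversarial examples instead of clean ones."""
    model.eval()                    # stable BN statistics during the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```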
arXiv Detail & Related papers (2022-06-08T07:40:55Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation [27.04820989579924]
Deep neural networks are not robust against adversarial perturbations.
In this work, we study the adversarial problem from a frequency domain perspective.
We propose an adversarial defense method based on the well-known Wiener filters.
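The Wiener filter itself is textbook signal processing; a minimal sketch of using SciPy's implementation as an input-denoising step before inference (the window size is an illustrative choice, not the paper's learned setting):

```python
import numpy as np
from scipy.signal import wiener

def wiener_preprocess(img, window=5):
    """Denoise a (possibly adversarial) grayscale image before inference.

    img: 2D float array in [0, 1]. The filter suppresses low-power,
    noise-like components relative to the local signal estimate.
    """
    filtered = wiener(img, mysize=window)
    return np.clip(filtered, 0.0, 1.0)
```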
arXiv Detail & Related papers (2020-12-02T22:06:04Z)
- WaveTransform: Crafting Adversarial Examples via Input Decomposition [69.01794414018603]
We introduce WaveTransform, which creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately or in combination.
Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
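Such subband attacks rest on a wavelet split of the input; below is a minimal sketch with PyWavelets (the 'haar' wavelet and the noise scales are illustrative assumptions, not the paper's attack) that perturbs approximation and detail subbands separately before reconstruction:

```python
import numpy as np
import pywt

def perturb_subbands(img, low_eps=0.0, high_eps=0.02, seed=0):
    """Add noise to wavelet subbands separately, then reconstruct.

    img: 2D grayscale image. low_eps / high_eps scale the noise added
    to the approximation and detail subbands, respectively.
    """
    rng = np.random.default_rng(seed)
    ll, (lh, hl, hh) = pywt.dwt2(img, "haar")      # one-level 2D DWT

    ll = ll + low_eps * rng.standard_normal(ll.shape)
    details = tuple(d + high_eps * rng.standard_normal(d.shape)
                    for d in (lh, hl, hh))

    return pywt.idwt2((ll, details), "haar")       # reconstruct the image
```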
arXiv Detail & Related papers (2020-10-29T17:16:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.