From a Fourier-Domain Perspective on Adversarial Examples to a Wiener
Filter Defense for Semantic Segmentation
- URL: http://arxiv.org/abs/2012.01558v2
- Date: Wed, 21 Apr 2021 15:44:10 GMT
- Title: From a Fourier-Domain Perspective on Adversarial Examples to a Wiener
Filter Defense for Semantic Segmentation
- Authors: Nikhil Kapoor, Andreas Bär, Serin Varghese, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Fingscheidt
- Abstract summary: Deep neural networks are not robust against adversarial perturbations.
In this work, we study the adversarial problem from a frequency domain perspective.
We propose an adversarial defense method based on the well-known Wiener filters.
- Score: 27.04820989579924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent advancements, deep neural networks are not robust against
adversarial perturbations. Many of the proposed adversarial defense approaches
use computationally expensive training mechanisms that do not scale to complex
real-world tasks such as semantic segmentation, and offer only marginal
improvements. In addition, fundamental questions on the nature of adversarial
perturbations and their relation to the network architecture are largely
understudied. In this work, we study the adversarial problem from a frequency
domain perspective. More specifically, we analyze discrete Fourier transform
(DFT) spectra of several adversarial images and report two major findings:
First, there exists a strong connection between a model architecture and the
nature of adversarial perturbations that can be observed and addressed in the
frequency domain. Second, the observed frequency patterns are largely image-
and attack-type independent, which is important for the practical impact of any
defense making use of such patterns. Motivated by these findings, we
additionally propose an adversarial defense method based on the well-known
Wiener filters that captures and suppresses adversarial frequencies in a
data-driven manner. Our proposed method not only generalizes across unseen
attacks but also beats five existing state-of-the-art methods across two models
in a variety of attack settings.
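The snippet below is a minimal sketch of the kind of frequency-domain Wiener filtering the abstract describes, assuming grayscale images, an additive perturbation model, and NumPy as the only dependency; the function names and random stand-in data are illustrative and do not reproduce the authors' implementation, which targets semantic segmentation inputs.
```python
import numpy as np

def estimate_wiener_filter(clean_imgs, adv_imgs, eps=1e-8):
    """Estimate a frequency-domain Wiener gain from clean/attacked image pairs.

    clean_imgs, adv_imgs: arrays of shape [N, H, W] with values in [0, 1].
    """
    # Average power spectral density (PSD) of the clean signal.
    S_xx = np.mean(np.abs(np.fft.fft2(clean_imgs)) ** 2, axis=0)
    # PSD of the adversarial perturbation, treated as additive noise.
    S_nn = np.mean(np.abs(np.fft.fft2(adv_imgs - clean_imgs)) ** 2, axis=0)
    # Classic Wiener gain: attenuates frequencies where the perturbation dominates.
    return S_xx / (S_xx + S_nn + eps)

def wiener_denoise(img, W):
    """Apply the precomputed gain W to a (possibly attacked) image in the DFT domain."""
    return np.real(np.fft.ifft2(W * np.fft.fft2(img)))

# Toy usage with random stand-in data (replace with real clean/attacked images).
rng = np.random.default_rng(0)
clean = rng.random((16, 64, 64))
attacked = np.clip(clean + 0.03 * rng.standard_normal(clean.shape), 0.0, 1.0)
W = estimate_wiener_filter(clean, attacked)
restored = wiener_denoise(attacked[0], W)
```
Estimating S_nn from the attacked-minus-clean residuals is where the reported frequency analysis enters: if adversarial energy concentrates in model-specific frequency bands, the estimated gain suppresses exactly those bands.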
Related papers
- FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks [42.18755809782401]
Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples.
We propose a feature contrastive approach in the frequency domain to generate adversarial examples that are robust in both cross-domain and cross-model settings.
We demonstrate strong transferability of our generated adversarial perturbations through extensive cross-domain and cross-model experiments.
arXiv Detail & Related papers (2024-07-30T08:50:06Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
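As a rough illustration of the frequency-domain aggregation idea in the FreqFed entry above (not FreqFed's actual algorithm), the sketch below keeps the low-frequency DCT coefficients of each flattened client update, clusters clients into two groups, and averages the larger group; SciPy is assumed, and the cluster count, coefficient cutoff, and toy data are illustrative choices.
```python
import numpy as np
from scipy.fft import dct
from scipy.cluster.vq import kmeans2

def freq_domain_aggregate(client_updates, n_low_freq=1000):
    """Aggregate flattened client updates ([n_clients, n_params]) after a frequency analysis."""
    # Transform each update into the frequency domain and keep low-frequency coefficients.
    coeffs = dct(client_updates, norm='ortho', axis=1)[:, :n_low_freq]
    # Cluster clients on their low-frequency fingerprints; treat the larger group as benign
    # (a simplification of FreqFed's clustering step).
    _, labels = kmeans2(coeffs, 2, minit='++', seed=0)
    majority = np.argmax(np.bincount(labels))
    return client_updates[labels == majority].mean(axis=0)

# Toy usage: 8 benign clients plus 2 "poisoned" ones with shifted updates.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.01, size=(8, 5000))
poisoned = rng.normal(0.5, 0.01, size=(2, 5000))
aggregated = freq_domain_aggregate(np.vstack([benign, poisoned]))
```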
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations is widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through a discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments using standard image classification datasets, namely MNIST, CIFAR-10 and CIFAR-100, against state-of-the-art adversarial attacks.
arXiv Detail & Related papers (2022-08-18T08:19:26Z)
- A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are confined neither to high-frequency nor to low-frequency components, but are simply dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z)
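A small analysis sketch along the lines of the frequency-perspective entry above, assuming NumPy and a 2-D grayscale perturbation (adversarial image minus clean image): it reports how the perturbation's spectral energy distributes over radial frequency bands, which is the kind of measurement needed to check whether adversarial energy is low-frequency, high-frequency, or dataset dependent.
```python
import numpy as np

def radial_band_energy(perturbation, n_bands=8):
    """Fraction of a 2-D perturbation's spectral energy in each radial frequency band."""
    h, w = perturbation.shape
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(perturbation))) ** 2
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
    radius = radius / radius.max()                        # normalise to [0, 1]
    band = np.minimum((radius * n_bands).astype(int), n_bands - 1)
    energy = np.bincount(band.ravel(), weights=spectrum.ravel(), minlength=n_bands)
    return energy / energy.sum()

# Usage: profile a (stand-in) perturbation; band 0 is the lowest-frequency band.
delta = np.random.default_rng(0).standard_normal((64, 64)) * 0.01
print(radial_band_energy(delta))
```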
- WaveTransform: Crafting Adversarial Examples via Input Decomposition [69.01794414018603]
We introduce WaveTransform, which creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately or in combination.
Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
arXiv Detail & Related papers (2020-10-29T17:16:59Z)
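The subband decomposition used by attacks like the one above can be sketched with a discrete wavelet transform (PyWavelets assumed). Note that WaveTransform optimises the subband noise adversarially with gradients; here random noise merely illustrates where low- and high-frequency perturbations are injected.
```python
import numpy as np
import pywt

def perturb_subbands(image, eps_low=0.0, eps_high=0.05, wavelet="haar", seed=0):
    """Add noise to selected wavelet subbands of a grayscale image in [0, 1] and reconstruct."""
    rng = np.random.default_rng(seed)
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)          # one-level 2-D DWT
    cA = cA + eps_low * rng.standard_normal(cA.shape)     # low-frequency subband
    cH = cH + eps_high * rng.standard_normal(cH.shape)    # high-frequency subbands
    cV = cV + eps_high * rng.standard_normal(cV.shape)
    cD = cD + eps_high * rng.standard_normal(cD.shape)
    adv = pywt.idwt2((cA, (cH, cV, cD)), wavelet)         # reconstruct the image
    return np.clip(adv, 0.0, 1.0)

# Usage with a random stand-in image.
img = np.random.default_rng(1).random((64, 64))
adv = perturb_subbands(img, eps_high=0.05)
```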
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to focus on hardening the robustness of models against adversarial attacks.
We propose a novel detection method that adds additional noise to the input and uses an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
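A stripped-down version of such an inconsistency test is sketched below for a PyTorch classifier: it measures how often the predicted label flips when small random noise is added to the input. The saliency-map component of the paper above is omitted, and the noise level, sample count, and decision threshold are illustrative guesses.
```python
import torch

def inconsistency_score(model, x, sigma=0.05, n_samples=8):
    """Fraction of noisy copies of x whose predicted label differs from the prediction on x.

    model: a torch classifier, x: tensor of shape [1, C, H, W]. High scores
    suggest the input may be adversarial.
    """
    model.eval()
    with torch.no_grad():
        base_label = model(x).argmax(dim=1)
        flips = 0
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            flips += int(model(noisy).argmax(dim=1) != base_label)
    return flips / n_samples

# Usage (hypothetical threshold): flag the input if the score exceeds 0.5.
# is_adversarial = inconsistency_score(classifier, image_tensor) > 0.5
```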
- Learning to Generate Noise for Multi-Attack Robustness [126.23656251512762]
Adversarial learning has emerged as one of the most successful techniques for reducing the susceptibility of models to adversarial perturbations.
However, such methods typically defend only against the type of perturbation they are trained on; in safety-critical applications this is insufficient, as an attacker can adopt diverse adversaries to deceive the system.
We propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks.
arXiv Detail & Related papers (2020-06-22T10:44:05Z)