Towards Building More Robust Models with Frequency Bias
- URL: http://arxiv.org/abs/2307.09763v2
- Date: Fri, 28 Jul 2023 01:41:13 GMT
- Title: Towards Building More Robust Models with Frequency Bias
- Authors: Qingwen Bu, Dong Huang, Heming Cui
- Abstract summary: This paper presents a plug-and-play module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations.
Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework.
- Score: 8.510441741759758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vulnerability of deep neural networks to adversarial samples has been a
major impediment to their broad applications, despite their success in various
fields. Recently, some works suggested that adversarially-trained models
emphasize the importance of low-frequency information to achieve higher
robustness. While several attempts have been made to leverage this frequency
characteristic, they have all faced the issue that applying low-pass filters
directly to input images leads to irreversible loss of discriminative
information and poor generalizability to datasets with distinct frequency
features. This paper presents a plug-and-play module called the Frequency
Preference Control Module that adaptively reconfigures the low- and
high-frequency components of intermediate feature representations, providing
better utilization of frequency in robust learning. Empirical studies show that
our proposed module can be easily incorporated into any adversarial training
framework, further improving model robustness across different architectures
and datasets. Additionally, experiments were conducted to examine how the
frequency bias of robust models impacts the adversarial training process and
its final robustness, revealing interesting insights.
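As a rough illustration of the core idea, reweighting the low- and high-frequency components of an intermediate feature map, a toy module might look like the sketch below. This is not the authors' implementation; the FFT-based split, the radial cutoff, and the scalar learnable weights are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): split an intermediate feature
# map into low- and high-frequency parts with a 2D FFT and recombine them
# with learnable weights.
import torch
import torch.nn as nn


class FrequencyPreferenceControl(nn.Module):
    def __init__(self, cutoff: float = 0.25):
        super().__init__()
        self.cutoff = cutoff                        # radius of the low-pass disc (assumed)
        self.w_low = nn.Parameter(torch.ones(1))    # learnable low-frequency weight
        self.w_high = nn.Parameter(torch.ones(1))   # learnable high-frequency weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        spec = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"), dim=(-2, -1))
        # Radial mask: 1 inside the low-frequency disc, 0 outside.
        yy, xx = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        low_mask = ((xx ** 2 + yy ** 2).sqrt() <= self.cutoff).to(x.dtype)
        recombined = self.w_low * spec * low_mask + self.w_high * spec * (1 - low_mask)
        out = torch.fft.ifft2(torch.fft.ifftshift(recombined, dim=(-2, -1)), norm="ortho")
        return out.real


# Usage: drop the module between backbone stages during adversarial training,
# e.g. feats = FrequencyPreferenceControl()(feats)
```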
Related papers
- Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness [23.77988226456179]
This paper proposes a novel module called High-Frequency Feature Disentanglement and Recalibration (HFDR).
HFDR separates features into high-frequency and low-frequency components and recalibrates the high-frequency features to capture latent useful semantics.
Extensive experiments showcase the potential and superiority of our approach in resisting various white-box and transfer attacks, as well as its strong generalization capabilities.
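Going only by the abstract, a minimal sketch of this disentangle-then-recalibrate pattern might look as follows; the blur-based low-pass split and the channel-attention gate are assumed stand-ins, not the paper's HFDR design.

```python
# Minimal sketch of a disentangle-then-recalibrate block (details assumed):
# split features with a simple low-pass filter, then reweight the
# high-frequency residual per channel instead of discarding it.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HFDRBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-pass via downsample-upsample; high-pass is the residual.
        low = F.interpolate(F.avg_pool2d(x, 2), size=x.shape[-2:],
                            mode="bilinear", align_corners=False)
        high = x - low
        w = self.gate(high.mean(dim=(-2, -1)))      # (B, C) channel weights
        return low + high * w[..., None, None]      # recalibrated recombination
```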
arXiv Detail & Related papers (2024-07-04T15:46:01Z)
- Towards a Novel Perspective on Adversarial Examples Driven by Frequency [7.846634028066389]
We propose a black-box adversarial attack algorithm based on combining different frequency bands.
Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency.
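The band arithmetic such an attack builds on can be illustrated with a small helper; the band edges below are arbitrary assumptions, and the actual attack optimises which band combinations to query.

```python
# Hypothetical helper: split an image into radial frequency bands with FFT
# masks, then recombine a chosen subset.
import numpy as np


def frequency_bands(img: np.ndarray, edges=(0.0, 0.25, 0.5, 1.5)):
    """Split a (H, W) image into radial bands; the last edge exceeds
    sqrt(2) so the spectrum corners are covered."""
    h, w = img.shape
    yy, xx = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    r = np.sqrt(xx ** 2 + yy ** 2)
    spec = np.fft.fftshift(np.fft.fft2(img))
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        bands.append(np.fft.ifft2(np.fft.ifftshift(spec * mask)).real)
    return bands


img = np.random.rand(64, 64)
low, mid, high = frequency_bands(img)
candidate = low + high  # e.g. probe the target model with selected combinations
```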
arXiv Detail & Related papers (2024-04-16T00:58:46Z)
- How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple, generic, and generalisable framework for which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z)
- Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation [111.61781272232646]
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
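As a toy illustration of operating in the 3D spectrum (the paper's attack is optimised against the segmentation loss; the random jitter below is only a placeholder):

```python
# Toy illustration, not the paper's method: perturb a volume in the 3D
# Fourier domain rather than voxel space, so the attack budget can be
# concentrated in chosen frequency bands.
import torch


def fourier_perturb_3d(vol: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """vol: (D, H, W). Adds bounded random jitter to the 3D spectrum."""
    spec = torch.fft.fftn(vol, dim=(-3, -2, -1))
    noise = eps * torch.randn_like(spec.real)      # real-valued amplitude jitter
    adv = torch.fft.ifftn(spec * (1 + noise), dim=(-3, -2, -1)).real
    return adv.clamp(vol.min(), vol.max())         # keep the original intensity range
```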
arXiv Detail & Related papers (2023-07-14T10:50:43Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can stem from biases in data acquisition rather than from genuinely generalizable features.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Phase-shifted Adversarial Training [8.89749787668458]
We analyze the behavior of adversarial training through the lens of response frequency.
PhaseAT significantly improves the convergence for high-frequency information.
This results in improved adversarial robustness by enabling the model to have smoothed predictions near each data point.
arXiv Detail & Related papers (2023-01-12T02:25:22Z)
- How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? [27.865987936475797]
Recent studies have shown that data augmentation can result in models over-relying on features in the low-frequency domain.
We propose Jacobian frequency regularization, which encourages models' Jacobians to have a larger ratio of low-frequency components.
Our approach elucidates a more direct connection between the frequency bias and robustness of deep learning models.
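A rough sketch of such a regularizer, with all details assumed: penalize the fraction of input-gradient (Jacobian row) energy that falls outside a low-frequency disc.

```python
# Sketch (details assumed): a penalty on the high-frequency energy ratio of
# the input gradient, to be added to the training loss.
import torch


def jacobian_freq_penalty(model, x: torch.Tensor, cutoff: float = 0.25):
    x = x.clone().requires_grad_(True)
    loss = model(x).logsumexp(dim=1).sum()          # scalar surrogate of the logits
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    spec = torch.fft.fftshift(torch.fft.fft2(grad), dim=(-2, -1)).abs() ** 2
    h, w = x.shape[-2:]
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    low = ((xx ** 2 + yy ** 2).sqrt() <= cutoff).to(spec.device)
    # Fraction of gradient energy outside the low-frequency disc.
    return (spec * (~low)).sum() / spec.sum().clamp_min(1e-12)
```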
arXiv Detail & Related papers (2022-05-09T20:09:31Z)
- Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines [65.0803400763215]
This work critically examines how adversarial robustness guarantees change when state-of-the-art certifiably robust models encounter out-of-distribution data.
We propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
We find that FourierMix augmentations help eliminate the spectral bias of certifiably robust models enabling them to achieve significantly better robustness guarantees on a range of OOD benchmarks.
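A FourierMix-flavoured augmentation can be sketched as below; the paper's exact recipe may differ, and the jitter magnitudes here are assumptions.

```python
# Sketch of Fourier-domain augmentation (not necessarily FourierMix's exact
# recipe): jitter amplitude and phase so augmented samples cover more of the
# spectrum than pixel-space augmentations alone.
import torch


def fourier_augment(img: torch.Tensor, amp_std=0.3, phase_std=0.3) -> torch.Tensor:
    """img: (C, H, W) in [0, 1]."""
    spec = torch.fft.fft2(img)
    amp, phase = spec.abs(), spec.angle()
    amp = amp * (1 + amp_std * torch.randn_like(amp))     # spectral amplitude jitter
    phase = phase + phase_std * torch.randn_like(phase)   # small phase noise
    out = torch.fft.ifft2(torch.polar(amp, phase)).real
    return out.clamp(0, 1)
```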
arXiv Detail & Related papers (2021-12-01T17:11:22Z)
- A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are neither in high-frequency nor in low-frequency components, but are simply dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
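A simple diagnostic in this spirit (hypothetical, not the paper's analysis code) is to measure how a perturbation's spectral energy splits across radial frequency bands:

```python
# Hypothetical diagnostic: per-band fraction of a perturbation's spectral
# energy, for checking whether it concentrates in low or high frequencies.
import numpy as np


def radial_energy_profile(delta: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """delta: (H, W) perturbation; returns fractions summing to 1."""
    h, w = delta.shape
    yy, xx = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    r = np.sqrt(xx ** 2 + yy ** 2) / np.sqrt(2)     # normalise radius to [0, 1]
    power = np.abs(np.fft.fftshift(np.fft.fft2(delta))) ** 2
    bins = np.minimum((r * n_bands).astype(int), n_bands - 1)
    energy = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bands)
    return energy / energy.sum()
```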
arXiv Detail & Related papers (2021-10-26T19:12:34Z)
- Frequency-based Automated Modulation Classification in the Presence of Adversaries [17.930854969511046]
We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-11-02T17:12:22Z)
- WaveTransform: Crafting Adversarial Examples via Input Decomposition [69.01794414018603]
We introduce 'WaveTransform', which creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately or in combination.
Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
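A minimal sketch of the subband mechanism using PyWavelets; unlike the actual attack, which optimises the noise against the target model, this placeholder injects random noise into selected subbands.

```python
# Sketch of subband noise injection with a wavelet decomposition (the paper
# optimises this noise; random values here are a placeholder).
import numpy as np
import pywt


def subband_noise(img: np.ndarray, eps: float = 0.03, target: str = "high") -> np.ndarray:
    """img: (H, W) in [0, 1]; perturbs low- or high-frequency subbands."""
    ll, (lh, hl, hh) = pywt.dwt2(img, "haar")
    if target == "low":
        ll = ll + eps * np.random.randn(*ll.shape)
    else:  # perturb the three detail (high-frequency) subbands
        lh, hl, hh = (b + eps * np.random.randn(*b.shape) for b in (lh, hl, hh))
    return np.clip(pywt.idwt2((ll, (lh, hl, hh)), "haar"), 0, 1)
```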
arXiv Detail & Related papers (2020-10-29T17:16:59Z)