How Does Frequency Bias Affect the Robustness of Neural Image
Classifiers against Common Corruption and Adversarial Perturbations?
- URL: http://arxiv.org/abs/2205.04533v1
- Date: Mon, 9 May 2022 20:09:31 GMT
- Title: How Does Frequency Bias Affect the Robustness of Neural Image
Classifiers against Common Corruption and Adversarial Perturbations?
- Authors: Alvin Chan, Yew-Soon Ong, Clement Tan
- Abstract summary: Recent studies have shown that data augmentation can result in models over-relying on features in the low-frequency domain.
We propose Jacobian frequency regularization, which encourages models' Jacobians to have a larger ratio of low-frequency components.
Our approach elucidates a more direct connection between the frequency bias and robustness of deep learning models.
- Score: 27.865987936475797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model robustness is vital for the reliable deployment of machine learning
models in real-world applications. Recent studies have shown that data
augmentation can result in models over-relying on features in the low-frequency
domain, sacrificing performance against low-frequency corruptions, highlighting
a connection between frequency and robustness. Here, we take one step further
to more directly study the frequency bias of a model through the lens of its
Jacobians and its implications for model robustness. To achieve this, we propose
Jacobian frequency regularization for models' Jacobians to have a larger ratio
of low-frequency components. Through experiments on four image datasets, we
show that biasing classifiers towards low (high)-frequency components can bring
performance gains against high (low)-frequency corruption and adversarial
perturbation, albeit with a tradeoff in performance for low (high)-frequency
corruption. Our approach elucidates a more direct connection between the
frequency bias and robustness of deep learning models.
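As a rough illustration of the idea, the following PyTorch sketch regularizes the spectral content of a model's input gradient, used here as a cheap one-row proxy for the full Jacobian. The helper names, the low-frequency mask radius `low_freq_radius`, and the weight `lambda_jfr` are illustrative assumptions; the paper's exact formulation of Jacobian frequency regularization may differ.

```python
import torch
import torch.nn.functional as F

def low_frequency_ratio(grad, low_freq_radius=8):
    """Fraction of the gradient's spectral energy inside a centered
    low-frequency square of side 2 * low_freq_radius (hypothetical choice;
    assumes low_freq_radius <= min(H, W) // 2)."""
    # grad: (B, C, H, W) input gradient, a one-row proxy for the Jacobian.
    spec = torch.fft.fftshift(torch.fft.fft2(grad), dim=(-2, -1))
    energy = spec.abs() ** 2
    H, W = energy.shape[-2:]
    cy, cx = H // 2, W // 2
    low = energy[..., cy - low_freq_radius:cy + low_freq_radius,
                 cx - low_freq_radius:cx + low_freq_radius].sum(dim=(-3, -2, -1))
    total = energy.sum(dim=(-3, -2, -1)) + 1e-12
    return low / total

def jacobian_frequency_loss(model, x, y, lambda_jfr=1.0, low_freq_radius=8):
    """Cross-entropy plus a penalty that encourages the input gradient to be
    dominated by low-frequency components. Sketch only; the paper's
    regularizer may be defined over the full Jacobian."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # create_graph=True so the penalty itself is differentiable.
    grad = torch.autograd.grad(ce, x, create_graph=True)[0]
    ratio = low_frequency_ratio(grad, low_freq_radius)
    penalty = (1.0 - ratio).mean()  # small when low-frequency energy dominates
    return ce + lambda_jfr * penalty
```

In a training loop, the returned value would replace the plain cross-entropy loss. Penalizing the low-frequency energy instead would bias the Jacobian towards high frequencies, corresponding to the low/high tradeoff described in the abstract.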
Related papers
- Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness [23.77988226456179]
This paper proposes a novel module called High-Frequency Feature Disentanglement and Recalibration (HFDR).
HFDR separates features into high-frequency and low-frequency components and recalibrates the high-frequency features to capture latent useful semantics.
Extensive experiments demonstrate the effectiveness of the approach in resisting various white-box and transfer attacks, as well as its strong generalization capabilities.
arXiv Detail & Related papers (2024-07-04T15:46:01Z) - Towards a Novel Perspective on Adversarial Examples Driven by Frequency [7.846634028066389]
We propose a black-box adversarial attack algorithm based on combining different frequency bands.
Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency.
arXiv Detail & Related papers (2024-04-16T00:58:46Z) - Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z) - Towards Building More Robust Models with Frequency Bias [8.510441741759758]
This paper presents a plug-and-play module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations.
Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework.
arXiv Detail & Related papers (2023-07-19T05:46:56Z) - Phase-shifted Adversarial Training [8.89749787668458]
We analyze the behavior of adversarial training through the lens of response frequency.
PhaseAT significantly improves the convergence for high-frequency information.
This results in improved adversarial robustness by enabling the model to have smoothed predictions near each data point.
arXiv Detail & Related papers (2023-01-12T02:25:22Z) - Certified Adversarial Defenses Meet Out-of-Distribution Corruptions:
Benchmarking Robustness and Simple Baselines [65.0803400763215]
This work critically examines how adversarial robustness guarantees change when state-of-the-art certifiably robust models encounter out-of-distribution data.
We propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
We find that FourierMix augmentations help eliminate the spectral bias of certifiably robust models enabling them to achieve significantly better robustness guarantees on a range of OOD benchmarks.
arXiv Detail & Related papers (2021-12-01T17:11:22Z) - A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are neither purely high-frequency nor purely low-frequency phenomena, but are dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z) - Improving robustness against common corruptions with frequency biased
models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness (a rough sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-03-30T10:44:50Z) - Focal Frequency Loss for Image Reconstruction and Synthesis [125.7135706352493]
We show that narrowing gaps in the frequency domain can ameliorate image reconstruction and synthesis quality further.
We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize.
arXiv Detail & Related papers (2020-12-23T17:32:04Z) - Real Time Speech Enhancement in the Waveform Domain [99.02180506016721]
We present a causal speech enhancement model working on the raw waveform that runs in real-time on a laptop CPU.
The proposed model is based on an encoder-decoder architecture with skip-connections.
It is capable of removing various kinds of background noise including stationary and non-stationary noises.
arXiv Detail & Related papers (2020-06-23T09:19:13Z) - Towards Frequency-Based Explanation for Robust CNN [6.164771707307929]
We present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data.
We show that the vulnerability of the model to tiny distortions is a result of the model relying on high-frequency features.
arXiv Detail & Related papers (2020-05-06T21:22:35Z)
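As a side note on the entry "Improving robustness against common corruptions with frequency biased models" above, the following is a hypothetical PyTorch sketch of a total-variation penalty on convolution feature maps. The hook-based layer selection and the `FeatureTVPenalty` name are assumptions for illustration, not the cited paper's exact recipe.

```python
import torch

def total_variation(fmap: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of a (B, C, H, W) feature map:
    mean absolute difference between neighbouring activations."""
    dh = (fmap[..., 1:, :] - fmap[..., :-1, :]).abs().mean()
    dw = (fmap[..., :, 1:] - fmap[..., :, :-1]).abs().mean()
    return dh + dw

class FeatureTVPenalty:
    """Collect feature maps from chosen conv layers via forward hooks and
    expose their summed TV as an extra loss term (which layers to hook and
    how to weight them are assumptions of this sketch)."""
    def __init__(self, model, layer_names):
        self.outputs = []
        for name, module in model.named_modules():
            if name in layer_names:
                module.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        self.outputs.append(output)

    def penalty(self) -> torch.Tensor:
        # Sum TV over all feature maps recorded during the last forward pass.
        tv = sum(total_variation(f) for f in self.outputs)
        self.outputs.clear()
        return tv
```

During training, the penalty would be added to the task loss after each forward pass, e.g. `loss = cross_entropy + lambda_tv * tv_hooks.penalty()`. Smoothing intermediate feature maps in this way suppresses the network's sensitivity to high-frequency input perturbations, which is the intuition the cited abstract describes.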