Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness
- URL: http://arxiv.org/abs/2407.04016v2
- Date: Sun, 12 Jan 2025 15:24:23 GMT
- Title: Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness
- Authors: Kejia Zhang, Juanjuan Weng, Yuanzheng Cai, Zhiming Luo, Shaozi Li
- Abstract summary: Adversarial training (AT) has emerged as a promising defense strategy.
AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components.
We propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features.
- Score: 23.77988226456179
- License:
- Abstract: Ensuring the robustness of deep neural networks against adversarial attacks remains a fundamental challenge in computer vision. While adversarial training (AT) has emerged as a promising defense strategy, our analysis reveals a critical limitation: AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components. This bias is particularly concerning as each frequency component carries distinct and crucial information: low-frequency features encode fundamental structural patterns, while high-frequency features capture intricate details and textures. To address this limitation, we propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features to capture latent semantic cues. We further introduce frequency attention regularization to harmonize feature extraction across the frequency spectrum and mitigate the inherent low-frequency bias of AT. Extensive experiments demonstrate our method's superior performance against white-box attacks and transfer attacks, while exhibiting strong generalization capabilities across diverse scenarios.
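The abstract describes, but does not implement here, the core HFDR operation: disentangling a feature map into low- and high-frequency components and recalibrating each band. A minimal PyTorch sketch of that operation follows; the module name, the square FFT mask, the cutoff fraction, and the per-channel scales are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn


class FrequencyRecalibration(nn.Module):
    """Illustrative sketch (not the authors' HFDR code): split a feature map
    into low- and high-frequency parts with an FFT mask, then recalibrate
    each band with a learned per-channel scale before recombining."""

    def __init__(self, channels: int, cutoff: float = 0.25):
        super().__init__()
        # Assumed hyperparameter: fraction of the spectrum treated as low frequency.
        self.cutoff = cutoff
        self.low_scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.high_scale = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Shift the 2D spectrum so the zero frequency sits at the center.
        freq = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"), dim=(-2, -1))
        yy, xx = torch.meshgrid(
            torch.arange(h, device=x.device),
            torch.arange(w, device=x.device),
            indexing="ij",
        )
        # Square low-pass mask around the center of the shifted spectrum.
        mask = (
            ((yy - h // 2).abs() <= h * self.cutoff / 2)
            & ((xx - w // 2).abs() <= w * self.cutoff / 2)
        ).to(x.dtype)
        low, high = freq * mask, freq * (1 - mask)

        def to_spatial(f: torch.Tensor) -> torch.Tensor:
            # Undo the shift and return to the spatial domain.
            return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1)), norm="ortho").real

        return self.low_scale * to_spatial(low) + self.high_scale * to_spatial(high)
```
Applied to a feature tensor, e.g. FrequencyRecalibration(64)(torch.randn(2, 64, 32, 32)), the sketch preserves shape, consistent with the abstract's framing of HFDR as a module that can sit inside an existing network.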
Related papers
- Sharpening Neural Implicit Functions with Frequency Consolidation Priors [53.6277160912059]
Signed Distance Functions (SDFs) are vital implicit representations for high-fidelity 3D surfaces.
Current methods mainly leverage a neural network to learn an SDF from various supervisions, including signed distances, 3D point clouds, or multi-view images.
We introduce a method to sharpen a low-frequency SDF observation by recovering its high-frequency components, pursuing a sharper and more complete surface.
arXiv Detail & Related papers (2024-12-27T16:18:46Z)
- Tuning Frequency Bias of State Space Models [48.60241978021799]
State space models (SSMs) leverage linear, time-invariant (LTI) systems to learn sequences with long-range dependencies.
We find that SSMs exhibit an implicit bias toward capturing low-frequency components more effectively than high-frequency ones.
arXiv Detail & Related papers (2024-10-02T21:04:22Z)
- Towards a Novel Perspective on Adversarial Examples Driven by Frequency [7.846634028066389]
We propose a black-box adversarial attack algorithm based on combining different frequency bands.
Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency.
arXiv Detail & Related papers (2024-04-16T00:58:46Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Towards Building More Robust Models with Frequency Bias [8.510441741759758]
This paper presents a plug-and-play module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations.
Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework.
arXiv Detail & Related papers (2023-07-19T05:46:56Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations is widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- FreGAN: Exploiting Frequency Components for Training GANs under Limited Data [3.5459430566117893]
Training GANs under limited data often leads to discriminator overfitting and memorization issues.
This paper proposes FreGAN, which raises the model's frequency awareness and draws more attention to producing high-frequency signals.
In addition to exploiting the frequency information of both real and generated images, we also use the frequency signals of real images as a self-supervised constraint.
arXiv Detail & Related papers (2022-10-11T14:02:52Z)
- How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? [27.865987936475797]
Recent studies have shown that data augmentation can result in models over-relying on features in the low-frequency domain.
We propose Jacobian frequency regularization, which encourages models' Jacobians to have a larger ratio of low-frequency components.
Our approach elucidates a more direct connection between the frequency bias and robustness of deep learning models.
arXiv Detail & Related papers (2022-05-09T20:09:31Z)
- A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples lie neither in high-frequency nor in low-frequency components alone; their frequency content is dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z)
- WaveTransform: Crafting Adversarial Examples via Input Decomposition [69.01794414018603]
We introduce WaveTransform, which creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately or in combination.
Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs (a sketch of this subband decomposition follows the list).
arXiv Detail & Related papers (2020-10-29T17:16:59Z)
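Several entries above, WaveTransform in particular, rest on the same primitive: decomposing an input into frequency subbands and manipulating them separately. A minimal sketch of that primitive with PyWavelets follows; the Haar wavelet, the noise magnitude, and the choice to perturb all detail subbands are illustrative assumptions, not the paper's attack.
```python
import numpy as np
import pywt

# Stand-in input image; the actual attacks operate on real images.
img = np.random.rand(64, 64).astype(np.float32)

# One-level 2D wavelet decomposition: cA is the low-frequency
# approximation, (cH, cV, cD) are the high-frequency detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

# A subband attack would inject adversarial noise into selected bands;
# here we add uniform noise to the detail subbands as a placeholder.
eps = 0.01
noisy = tuple(c + eps * np.random.uniform(-1, 1, c.shape) for c in (cH, cV, cD))

# Reconstruct the perturbed image from the modified subbands.
perturbed = pywt.idwt2((cA, noisy), "haar")
```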
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.