Wavelet Regularization Benefits Adversarial Training
- URL: http://arxiv.org/abs/2206.03727v1
- Date: Wed, 8 Jun 2022 08:00:30 GMT
- Title: Wavelet Regularization Benefits Adversarial Training
- Authors: Jun Yan, Huilin Yin, Xiaoyang Deng, Ziming Zhao, Wancheng Ge, Hao
Zhang, Gerhard Rigoll
- Abstract summary: We propose a wavelet regularization method based on the Haar wavelet decomposition which is named Wavelet Average Pooling.
On the datasets of CIFAR-10 and CIFAR-100, our proposed Adversarial Wavelet Training method achieves considerable robustness under different types of attacks.
- Score: 11.157873822750561
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training methods are state-of-the-art (SOTA) empirical defense
methods against adversarial examples. Many regularization methods have proven
effective in combination with adversarial training. Nevertheless, such
regularization methods are implemented in the time domain. Since adversarial
vulnerability can be regarded as a high-frequency phenomenon, it is essential to
regularize adversarially trained neural network models in the frequency domain.
Faced with these challenges, we present a theoretical analysis of the
regularization property of wavelets, which can enhance adversarial training. We
propose a wavelet regularization method based on the Haar wavelet decomposition,
named Wavelet Average Pooling. This wavelet regularization module is integrated
into the wide residual neural network to form a new WideWaveletResNet model. On
the CIFAR-10 and CIFAR-100 datasets, our proposed Adversarial Wavelet Training
method achieves considerable robustness under different types of attacks. This
verifies the assumption that our wavelet regularization method can enhance
adversarial robustness, especially in deep wide neural networks. Visualization
experiments based on the Frequency Principle (F-Principle) and interpretability
analysis are conducted to show the effectiveness of our method. A detailed
comparison of different wavelet basis functions is also presented. The code is
available at the repository:
\url{https://github.com/momo1986/AdversarialWaveletTraining}.
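As a rough illustration of the core idea, the low-pass branch of a single-level 2D Haar decomposition can act as a pooling operator: the low-frequency (LL) subband is kept and the three detail subbands are discarded. The following is a minimal PyTorch sketch of such a layer, not the authors' implementation (see the linked repository for that); the class name and the choice to discard all detail subbands are assumptions.

```python
import torch.nn as nn

class HaarWaveletAvgPool2d(nn.Module):
    """Single-level 2D Haar decomposition that keeps only the low-frequency
    (LL) subband as the pooled output; the three detail subbands are
    discarded, acting as a low-pass regularizer. Expects even H and W."""

    def forward(self, x):
        # Split the (B, C, H, W) feature map into its four 2x2 phases.
        a = x[:, :, 0::2, 0::2]  # top-left
        b = x[:, :, 0::2, 1::2]  # top-right
        c = x[:, :, 1::2, 0::2]  # bottom-left
        d = x[:, :, 1::2, 1::2]  # bottom-right
        # LL subband of the orthonormal Haar transform.
        return (a + b + c + d) / 2.0
```

Since the Haar LL subband equals 2x2 average pooling up to a factor of 2, this layer behaves like average pooling while making the low-pass interpretation explicit, which is consistent with the name Wavelet Average Pooling.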
Related papers
- Adaptive LPD Radar Waveform Design with Generative Deep Learning [6.21540494241516]
We propose a novel, learning-based method for adaptively generating low probability of detection radar waveforms.
Our method can generate LPD waveforms that reduce detectability by up to 90% while simultaneously offering improved ambiguity function (sensing) characteristics.
arXiv Detail & Related papers (2024-03-18T21:07:57Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
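A hedged sketch of the frequency-domain idea described above: flatten each client update, take its low-frequency DCT coefficients as a fingerprint, and drop updates whose fingerprint deviates from the majority. The DCT-II transform, the coefficient count k, and the cosine-similarity filter are stand-in assumptions, not the paper's actual aggregation rule.

```python
import numpy as np
from scipy.fft import dct

def frequency_fingerprints(updates, k=64):
    # Each update is a flattened 1D parameter-delta vector; keep the k
    # lowest-frequency DCT-II coefficients as its fingerprint.
    return np.stack([dct(u, norm="ortho")[:k] for u in updates])

def filter_by_majority(fingerprints, threshold=0.5):
    # Keep updates cosine-similar to the mean fingerprint; a simplified
    # stand-in for the clustering step a real aggregator would use.
    mean = fingerprints.mean(axis=0)
    sims = fingerprints @ mean / (
        np.linalg.norm(fingerprints, axis=1) * np.linalg.norm(mean) + 1e-12)
    return sims >= threshold
```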
- Observation-Guided Diffusion Probabilistic Models [41.749374023639156]
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM).
Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain.
We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines.
arXiv Detail & Related papers (2023-10-06T06:29:06Z)
- Histogram Layer Time Delay Neural Networks for Passive Sonar Classification [58.720142291102135]
A novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification.
The proposed method outperforms the baseline model, demonstrating the utility of incorporating statistical contexts for passive sonar target recognition.
arXiv Detail & Related papers (2023-07-25T19:47:26Z)
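For the histogram-layer idea mentioned above, a common differentiable construction soft-assigns feature values to learnable bins with Gaussian kernels. The sketch below shows this generic construction; the bin count, value range, and kernel shape are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class SoftHistogram(nn.Module):
    """Differentiable histogram: soft-assigns each feature value to a set
    of learnable bins via Gaussian kernels, so bin centers and widths can
    be trained end to end alongside the rest of the network."""

    def __init__(self, bins=16, lo=-1.0, hi=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(lo, hi, bins))
        self.inv_widths = nn.Parameter(torch.full((bins,), bins / (hi - lo)))

    def forward(self, x):
        # x: (batch, features) -> (batch, bins) of normalized soft counts.
        d = x.unsqueeze(-1) - self.centers           # (batch, features, bins)
        weights = torch.exp(-((d * self.inv_widths) ** 2))
        return weights.mean(dim=1)
```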
- Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation [111.61781272232646]
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
arXiv Detail & Related papers (2023-07-14T10:50:43Z)
- Phase-shifted Adversarial Training [8.89749787668458]
We analyze the behavior of adversarial training through the lens of response frequency.
PhaseAT significantly improves the convergence for high-frequency information.
This results in improved adversarial robustness by enabling the model to produce smoothed predictions near each data point.
arXiv Detail & Related papers (2023-01-12T02:25:22Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has been shown to be an effective approach.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
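For context, the inner loop that such large-batch frameworks distribute is the standard adversarial training step. Below is a minimal single-machine sketch with an L-infinity PGD attack; the hyperparameters (eps, alpha, steps) are illustrative defaults, not the paper's configuration, and the distributed machinery itself is omitted.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer,
                              eps=8 / 255, alpha=2 / 255, steps=10):
    # Craft an L-inf PGD adversarial example, then train on it.
    model.eval()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)    # project to the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()          # stay in valid pixel range
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```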
- WaveTransform: Crafting Adversarial Examples via Input Decomposition [69.01794414018603]
We introduce WaveTransform, which creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately or in combination.
Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
arXiv Detail & Related papers (2020-10-29T17:16:59Z)
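A hedged sketch of subband-wise perturbation in the spirit of WaveTransform: decompose the input with a single-level orthonormal Haar transform, perturb one chosen subband, and reconstruct. The Haar basis and the additive noise model are assumptions; the actual attack presumably optimizes the subband perturbation adversarially, which this sketch does not do.

```python
import torch

def haar_dwt2(x):
    # Single-level 2D orthonormal Haar decomposition of a (B, C, H, W) tensor.
    a = x[:, :, 0::2, 0::2]; b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]; d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    hl = (a - b + c - d) / 2   # horizontal detail
    lh = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = (ll + hl + lh + hh) / 2; b = (ll - hl + lh - hh) / 2
    c = (ll + hl - lh - hh) / 2; d = (ll - hl - lh + hh) / 2
    x = torch.empty(ll.size(0), ll.size(1), ll.size(2) * 2, ll.size(3) * 2,
                    dtype=ll.dtype, device=ll.device)
    x[:, :, 0::2, 0::2] = a; x[:, :, 0::2, 1::2] = b
    x[:, :, 1::2, 0::2] = c; x[:, :, 1::2, 1::2] = d
    return x

def perturb_subband(x, band="hh", scale=0.03):
    # Add random noise to one chosen subband and reconstruct the image.
    bands = dict(zip(("ll", "lh", "hl", "hh"), haar_dwt2(x)))
    bands[band] = bands[band] + scale * torch.randn_like(bands[band])
    return haar_idwt2(bands["ll"], bands["lh"],
                      bands["hl"], bands["hh"]).clamp(0, 1)
```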
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
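In this ODE view, simultaneous gradient descent on the generator and discriminator is Euler integration of a joint vector field, and higher-order solvers reduce the integration error blamed for instability. Below is a generic classical RK4 step for an autonomous system y' = f(y), as a plain illustration of the solver the summary names; the coupling of f to actual GAN gradients is only indicated in the comments.

```python
import numpy as np

def rk4_step(f, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(y). In the
    # GAN setting, y would stack generator and discriminator parameters
    # and f would return their joint gradient field; plain simultaneous
    # gradient descent is the Euler step y + h * f(y).
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy usage: a damped rotation field, a stand-in for the rotational
# dynamics that destabilize naive GAN training.
y = np.array([1.0, 0.0])
field = lambda y: np.array([-0.1 * y[0] - y[1], y[0] - 0.1 * y[1]])
for _ in range(100):
    y = rk4_step(field, y, h=0.1)
```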
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.