Robust Learning with Frequency Domain Regularization
- URL: http://arxiv.org/abs/2007.03244v1
- Date: Tue, 7 Jul 2020 07:29:20 GMT
- Title: Robust Learning with Frequency Domain Regularization
- Authors: Weiyu Guo, Yidong Ouyang
- Abstract summary: We introduce a new regularization method that constrains the frequency spectra of the model's filters.
We demonstrate the effectiveness of our regularization by (1) defending against adversarial perturbations; (2) reducing the generalization gap across different architectures; and (3) improving generalization in transfer-learning scenarios without fine-tuning.
- Score: 1.370633147306388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks have achieved remarkable performance in many
computer vision tasks. However, CNNs tend to be biased toward low-frequency
components: they prioritize capturing low-frequency patterns, which causes them
to fail under application-scenario shifts. Meanwhile, the existence of adversarial
examples implies that such models are highly sensitive to high-frequency
perturbations. In this paper, we introduce a new regularization method that
constrains the frequency spectra of the model's filters. Unlike band-limited
training, our method assumes the valid frequency range may be entangled across
layers rather than contiguous, and learns the valid frequency range end-to-end
via backpropagation. We demonstrate the effectiveness of our regularization by
(1) defending against adversarial perturbations; (2) reducing the generalization
gap across different architectures; and (3) improving generalization in
transfer-learning scenarios without fine-tuning.
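As a rough illustration of the idea (a minimal numpy sketch, not the authors' implementation; all names are invented for this example), the snippet below measures a convolutional kernel's 2D spectrum and penalizes energy outside a designated "valid" frequency band. In the paper this band is learned end-to-end by backpropagation; here the mask is fixed for simplicity:

```python
import numpy as np

def filter_spectrum(kernel):
    # Magnitude of the 2D discrete Fourier transform of a conv filter.
    return np.abs(np.fft.fft2(kernel))

def spectral_penalty(kernel, keep_mask):
    # Squared spectral energy at frequencies outside the "valid" band
    # (keep_mask is 1 where frequencies are allowed, 0 where penalized).
    spec = filter_spectrum(kernel)
    return float(np.sum((spec * (1.0 - keep_mask)) ** 2))
```

Added to the training loss, such a term would discourage filters from placing energy at frequencies the mask excludes.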
Related papers
- What do neural networks learn in image classification? A frequency
shortcut perspective [3.9858496473361402]
This study empirically investigates the learning dynamics of frequency shortcuts in neural networks (NNs).
We show that NNs tend to find simple solutions for classification, and what they learn first during training depends on the most distinctive frequency characteristics.
We propose a metric to measure class-wise frequency characteristics and a method to identify frequency shortcuts.
arXiv Detail & Related papers (2023-07-19T08:34:25Z) - Frequency Domain Adversarial Training for Robust Volumetric Medical
Segmentation [111.61781272232646]
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
arXiv Detail & Related papers (2023-07-14T10:50:43Z) - A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree
Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies.
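To make the notion of "degree" concrete, here is a hedged numpy sketch (function names are illustrative, not from the paper): it computes an orthonormal Walsh-Hadamard transform and measures the spectral mass on coefficients whose degree (the number of set bits in the coefficient index) exceeds a threshold:

```python
import numpy as np

def walsh_hadamard(x):
    # Fast Walsh-Hadamard transform (Sylvester ordering),
    # normalized to be orthonormal; len(x) must be a power of two.
    h = x.astype(float).copy()
    step = 1
    while step < h.size:
        for i in range(0, h.size, 2 * step):
            a = h[i:i + step].copy()
            b = h[i + step:i + 2 * step].copy()
            h[i:i + step] = a + b
            h[i + step:i + 2 * step] = a - b
        step *= 2
    return h / np.sqrt(h.size)

def high_degree_mass(x, max_degree):
    # Spectral energy carried by Walsh coefficients of degree > max_degree,
    # where degree = popcount of the coefficient index.
    coeffs = walsh_hadamard(np.asarray(x))
    degree = np.array([bin(k).count("1") for k in range(coeffs.size)])
    return float(np.sum(coeffs[degree > max_degree] ** 2))
```

A regularizer in this spirit would reward (rather than penalize) high-degree mass to counteract the low-degree spectral bias the paper describes.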
arXiv Detail & Related papers (2023-05-16T20:06:01Z) - Incremental Spatial and Spectral Learning of Neural Operators for
Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method achieves 10% lower testing error using 20% fewer frequency modes than the existing Fourier Neural Operator, while also training 30% faster.
arXiv Detail & Related papers (2022-11-28T09:57:15Z) - Frequency Dropout: Feature-Level Regularization via Randomized Filtering [24.53978165468098]
Deep convolutional neural networks are susceptible to picking up spurious correlations from the training signal.
We propose a training strategy, Frequency Dropout, to prevent convolutional neural networks from learning frequency-specific imaging features.
Our results suggest that the proposed approach not only improves predictive accuracy but also improves robustness against domain shift.
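The general idea can be sketched as follows (a simplified illustration, not the paper's exact filter bank: here the random filtering is an ideal low-pass filter with a randomly drawn cutoff, applied to a 2D feature map in the Fourier domain):

```python
import numpy as np

def frequency_dropout(feature, rng, max_cutoff=None):
    # Randomly low-pass filter a 2D feature map: draw a cutoff radius,
    # then zero all Fourier coefficients beyond it.
    h, w = feature.shape
    cutoff = rng.integers(1, (max_cutoff or min(h, w) // 2) + 1)
    spec = np.fft.fftshift(np.fft.fft2(feature))
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    mask = np.hypot(yy - cy, xx - cx) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
```

During training, applying such a transform stochastically per batch would prevent the network from relying on any single frequency band.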
arXiv Detail & Related papers (2022-09-20T16:42:21Z) - Adaptive Frequency Learning in Two-branch Face Forgery Detection [66.91715092251258]
We propose to adaptively learn frequency information in a two-branch detection framework, dubbed AFD.
We liberate our network from the fixed frequency transforms, and achieve better performance with our data- and task-dependent transform layers.
arXiv Detail & Related papers (2022-03-27T14:25:52Z) - Spectral Bias in Practice: The Role of Function Frequency in
Generalization [10.7218588164913]
We propose methodologies for measuring spectral bias in modern image classification networks.
We find that networks that generalize well strike a balance between having enough complexity to fit the data while being simple enough to avoid overfitting.
Our work enables measuring and ultimately controlling the spectral behavior of neural networks used for image classification.
arXiv Detail & Related papers (2021-10-06T00:16:10Z) - Distribution Mismatch Correction for Improved Robustness in Deep Neural
Networks [86.42889611784855]
Normalization methods can increase vulnerability with respect to noise and input corruptions.
We propose an unsupervised non-parametric distribution correction method that adapts the activation distribution of each layer.
In our experiments, we empirically show that the proposed method effectively reduces the impact of intense image corruptions.
arXiv Detail & Related papers (2021-10-05T11:36:25Z) - Dense Pruning of Pointwise Convolutions in the Frequency Domain [10.58456555092086]
We propose a technique which wraps each pointwise layer in a discrete cosine transform (DCT) which is truncated to selectively prune coefficients above a given threshold.
Unlike weight pruning techniques which rely on sparse operators, our contiguous frequency band pruning results in fully dense computation.
We apply our technique to MobileNetV2, reducing computation time by 22% while incurring a 1% accuracy degradation.
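The core mechanism can be illustrated with a small numpy sketch (hedged: an orthonormal DCT-II built from scratch, with invented names; the paper applies this wrapping to pointwise convolution layers, which is omitted here). Transforming along the channel axis and zeroing a contiguous band of high-frequency coefficients keeps the remaining computation dense:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def truncate_channels(x, keep):
    # x: (channels, features). DCT along channels, zero all coefficients
    # from index `keep` onward (a contiguous band), then invert.
    d = dct_matrix(x.shape[0])
    coeffs = d @ x
    coeffs[keep:] = 0.0
    return d.T @ coeffs  # inverse of an orthonormal transform
```

Because the pruned band is contiguous, the truncated transform can be stored as a smaller dense matrix rather than a sparse operator.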
arXiv Detail & Related papers (2021-09-16T04:02:45Z) - Frequency Gating: Improved Convolutional Neural Networks for Speech
Enhancement in the Time-Frequency Domain [37.722450363816144]
We introduce a method, which we call Frequency Gating, to compute multiplicative weights for the kernels of the CNN.
Experiments with an autoencoder neural network with skip connections show that both local and frequency-wise gating outperform the baseline.
A loss function based on the extended short-time objective intelligibility score (ESTOI) is introduced, which we show to outperform the standard mean squared error (MSE) loss function.
arXiv Detail & Related papers (2020-11-08T22:04:00Z) - WaveTransform: Crafting Adversarial Examples via Input Decomposition [69.01794414018603]
We introduce 'WaveTransform', which creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately or in combination.
Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
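The subband decomposition underlying such attacks can be illustrated with a one-level Haar wavelet split (a hedged sketch with invented names; WaveTransform's actual crafting of adversarial noise per subband is omitted):

```python
import numpy as np

def haar_split(x):
    # One-level 1D Haar analysis: split a signal of even length into
    # low-frequency (averages) and high-frequency (differences) subbands.
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    return low, high

def haar_merge(low, high):
    # Inverse of haar_split: perfectly reconstructs the original signal.
    even = (low + high) / np.sqrt(2.0)
    odd = (low - high) / np.sqrt(2.0)
    out = np.empty(low.size + high.size)
    out[0::2], out[1::2] = even, odd
    return out
```

An attack in this style would perturb `low` and/or `high` before merging, so the adversarial energy is confined to the chosen subband.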
arXiv Detail & Related papers (2020-10-29T17:16:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.