Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration
- URL: http://arxiv.org/abs/2104.01231v6
- Date: Mon, 29 May 2023 15:06:24 GMT
- Title: Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration
- Authors: Theodoros Tsiligkaridis, Athanasios Tsiligkaridis
- Abstract summary: Deep neural networks achieve high prediction accuracy when the train and test distributions coincide.
Various types of corruptions occur which deviate from this setup and cause severe performance degradations.
We propose a diverse Gaussian noise consistency regularization method for improving robustness of image classifiers under a variety of corruptions.
- Score: 7.310043452300738
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks achieve high prediction accuracy when the train and test
distributions coincide. In practice though, various types of corruptions occur
which deviate from this setup and cause severe performance degradations. Few
methods have been proposed to address generalization in the presence of
unforeseen domain shifts. In particular, digital noise corruptions arise
commonly in practice during the image acquisition stage and present a
significant challenge for current methods. In this paper, we propose a diverse
Gaussian noise consistency regularization method for improving robustness of
image classifiers under a variety of corruptions while still maintaining high
clean accuracy. We derive bounds to motivate and understand the behavior of our
Gaussian noise consistency regularization using a local loss landscape
analysis. Our approach improves robustness against unforeseen noise corruptions
by 4.2-18.4% over adversarial training and other strong diverse data
augmentation baselines across several benchmarks. Furthermore, it improves
robustness and uncertainty calibration by 3.7% and 5.5%, respectively, against
all common corruptions (weather, digital, blur, noise) when combined with
state-of-the-art diverse data augmentations.
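The core mechanism, enforcing agreement between predictions on a clean image and Gaussian-perturbed copies drawn at several noise scales, can be sketched roughly as follows. This is a minimal illustration assuming a PyTorch classifier; the KL form of the consistency term and the `sigmas`/`lam` values are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def gaussian_consistency_loss(model, x, y, sigmas=(0.1, 0.3, 0.5), lam=1.0):
    """Cross-entropy on clean inputs plus a consistency penalty tying
    predictions on Gaussian-perturbed copies (one per noise level in
    `sigmas`) to the clean prediction. Hyperparameters are illustrative."""
    logits_clean = model(x)
    ce = F.cross_entropy(logits_clean, y)
    # Stop-gradient through the clean target is one common choice.
    p_clean = F.softmax(logits_clean, dim=1).detach()
    consistency = 0.0
    for sigma in sigmas:  # diverse noise scales rather than a single sigma
        log_p_noisy = F.log_softmax(model(x + sigma * torch.randn_like(x)), dim=1)
        # KL(p_clean || p_noisy) penalizes prediction drift under noise.
        consistency = consistency + F.kl_div(log_p_noisy, p_clean,
                                             reduction="batchmean")
    return ce + lam * consistency / len(sigmas)
```

In a training loop this term would stand in for the plain cross-entropy loss.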
Related papers
- Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation [91.83820250747935]
Pseudo-label noise is mainly contained in unstable samples in which predictions of most pixels undergo significant variations during self-training.
We introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples.
SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings.
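As a rough illustration of the instability notion in the summary, one could score a sample by how often its pixel-wise predictions flip across self-training snapshots. The sketch below is hypothetical and is not SND's actual selection criterion.

```python
import torch

def instability_score(prob_history):
    """Average fraction of pixels whose argmax class changes between
    consecutive self-training snapshots. `prob_history` is a list of
    (C, H, W) softmax maps for the same image at different steps."""
    preds = [p.argmax(dim=0) for p in prob_history]  # (H, W) label maps
    changes = torch.zeros_like(preds[0], dtype=torch.float)
    for a, b in zip(preds[:-1], preds[1:]):
        changes += (a != b).float()
    return changes.mean().item() / max(len(preds) - 1, 1)
```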
arXiv Detail & Related papers (2024-06-10T21:44:52Z)
- Heteroscedastic Uncertainty Estimation Framework for Unsupervised Registration [32.081258147692395]
We propose a framework for heteroscedastic image uncertainty estimation.
It can adaptively reduce the influence of regions with high uncertainty during unsupervised registration.
Our method consistently outperforms baselines and produces sensible uncertainty estimates.
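The summary suggests a heteroscedastic weighting of the registration loss. A standard formulation of that idea, predicting a per-pixel log-variance alongside the residual, looks roughly like this, though the paper's framework may differ in detail.

```python
import torch

def heteroscedastic_nll(residual, log_var):
    """Per-pixel Gaussian negative log-likelihood. Pixels with high
    predicted variance contribute less to the data term but pay a
    log-variance penalty, so uncertain regions are down-weighted."""
    return 0.5 * (residual.pow(2) * torch.exp(-log_var) + log_var).mean()
```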
arXiv Detail & Related papers (2023-12-01T01:03:06Z)
- Exploiting Frequency Spectrum of Adversarial Images for General Robustness [3.480626767752489]
Adversarial training with an emphasis on phase components significantly improves model performance on clean, adversarial, and common corruption accuracies.
We propose a frequency-based data augmentation method, Adversarial Amplitude Swap, that swaps the amplitude spectrum between clean and adversarial images.
These images act as substitutes for adversarial images and can be implemented in various adversarial training setups.
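A plausible implementation of the swap itself, assuming per-image 2D FFTs; normalization and per-channel handling details are guesses.

```python
import torch

def amplitude_swap(clean, adv):
    """Swap the Fourier amplitude spectra of a clean and an adversarial
    image while keeping each image's own phase. Returns both
    recombinations (clean phase + adversarial amplitude, and vice versa)."""
    fc, fa = torch.fft.fft2(clean), torch.fft.fft2(adv)
    amp_c, amp_a = fc.abs(), fa.abs()
    pha_c, pha_a = fc.angle(), fa.angle()
    clean_phase_adv_amp = torch.fft.ifft2(amp_a * torch.exp(1j * pha_c)).real
    adv_phase_clean_amp = torch.fft.ifft2(amp_c * torch.exp(1j * pha_a)).real
    return clean_phase_adv_amp, adv_phase_clean_amp
```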
arXiv Detail & Related papers (2023-05-15T08:36:32Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
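The proxy can be estimated with plain Monte Carlo sampling, along these lines; `sigma` and the sample count are illustrative, not the paper's settings.

```python
import torch

def noise_accuracy(model, x, y, sigma=0.25, n=32):
    """Monte Carlo estimate of per-input accuracy under Gaussian noise:
    the fraction of noisy copies of `x` (a single C,H,W image) that the
    model still classifies as label `y`."""
    with torch.no_grad():
        xs = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape, device=x.device)
        preds = model(xs).argmax(dim=1)
    return (preds == y).float().mean().item()
```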
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- Soft Diffusion: Score Matching for General Corruptions [84.26037497404195]
We propose a new objective called Soft Score Matching that provably learns the score function for any linear corruption process.
We show that our objective learns the gradient of the likelihood under suitable regularity conditions for the family of corruption processes.
Our method achieves state-of-the-art FID score $1.85$ on CelebA-64, outperforming all previous linear diffusion models.
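Schematically, Soft Score Matching compares the reconstruction to the clean image inside the corruption operator. The sketch below assumes a shape-preserving linear operator `C` and noise level `sigma_t`, and paraphrases rather than reproduces the paper's objective.

```python
import torch

def soft_score_matching_loss(denoiser, x0, C, sigma_t, t):
    """Sketch of a Soft-Score-Matching-style objective for a linear
    corruption x_t = C(x_0, t) + sigma_t * noise. The reconstruction is
    compared to the clean image *inside* the corruption operator, so the
    model is only trained along directions the corruption preserves."""
    xt = C(x0, t) + sigma_t * torch.randn_like(x0)  # corrupted sample
    x0_hat = denoiser(xt, t)                        # predicted clean image
    return (C(x0_hat, t) - C(x0, t)).pow(2).mean()
```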
arXiv Detail & Related papers (2022-09-12T17:45:03Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) across a wide range of algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
In comprehensive experiments across various datasets, GDMP is shown to reduce the perturbations raised by adversarial attacks to a shallow range.
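GDMP is only described at a high level here; the sketch below shows the generic shape of one guided reverse step, with `reverse_step` standing in for a pretrained diffusion model's denoising update and the quadratic guidance term being an assumption rather than the paper's exact formulation.

```python
import torch

def guided_purification_step(x_t, x_adv, reverse_step, scale=0.1):
    """One hypothetical guided denoising step: take an ordinary reverse
    diffusion step, then nudge the sample toward the (adversarial) input
    so semantic content is preserved while perturbations are washed out."""
    x_t = x_t.detach().requires_grad_(True)
    dist = (x_t - x_adv).pow(2).sum()           # guidance: stay near the input
    grad = torch.autograd.grad(dist, x_t)[0]
    return reverse_step(x_t).detach() - scale * grad
```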
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
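The TV penalty itself is standard; a minimal anisotropic version over a (B, C, H, W) feature map might look like this, with the choice of layer and weight left open.

```python
import torch

def feature_tv(feat):
    """Total variation of a (B, C, H, W) feature map: mean absolute
    difference between neighboring activations. Adding `lam * feature_tv`
    to the task loss encourages smoother features, which suppresses
    sensitivity to high-frequency corruptions."""
    dh = (feat[..., 1:, :] - feat[..., :-1, :]).abs().mean()
    dw = (feat[..., :, 1:] - feat[..., :, :-1]).abs().mean()
    return dh + dw
```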
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- Deep Gaussian Denoiser Epistemic Uncertainty and Decoupled Dual-Attention Fusion [11.085432358616671]
We focus on pushing the performance limits of state-of-the-art methods on Gaussian denoising.
We propose a model-agnostic approach for reducing epistemic uncertainty while using only a single pretrained network.
Our results significantly improve over state-of-the-art baselines across varying noise levels.
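The paper's fusion is attention-based and more elaborate; under the same single-network constraint, a plain geometric self-ensemble conveys the basic idea, as sketched below.

```python
import torch

def self_ensemble_denoise(denoiser, noisy):
    """Run one pretrained denoiser on the 8 flip/rotation variants of the
    input and average the re-aligned outputs. Disagreement between the
    variants can also serve as a cheap epistemic-uncertainty signal."""
    outs = []
    for k in range(4):                      # 4 rotations
        for flip in (False, True):          # x 2 horizontal flips
            x = torch.rot90(noisy, k, dims=(-2, -1))
            if flip:
                x = torch.flip(x, dims=(-1,))
            y = denoiser(x)
            if flip:                        # undo flip, then rotation
                y = torch.flip(y, dims=(-1,))
            outs.append(torch.rot90(y, -k, dims=(-2, -1)))
    return torch.stack(outs).mean(dim=0)
```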
arXiv Detail & Related papers (2021-01-12T17:38:32Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
The recent technique of randomized smoothing has shown that worst-case $\ell_2$-robustness can be transformed into average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
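The regularizer described here differs from the multi-scale version sketched above in that it uses a single smoothing level and no clean-image anchor; a minimal version, with illustrative hyperparameters, might be:

```python
import torch
import torch.nn.functional as F

def smoothing_consistency(model, x, sigma=0.25, m=2):
    """Consistency term for randomized smoothing: draw `m` noisy copies at
    the smoothing level `sigma` and pull each prediction toward their
    mean, so the smoothed classifier becomes locally stable."""
    log_ps = [F.log_softmax(model(x + sigma * torch.randn_like(x)), dim=1)
              for _ in range(m)]
    p_mean = torch.stack([lp.exp() for lp in log_ps]).mean(dim=0)
    return sum(F.kl_div(lp, p_mean, reduction="batchmean")
               for lp in log_ps) / m
```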
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.