NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability
Training, and Noise Injections
- URL: http://arxiv.org/abs/2202.01263v1
- Date: Wed, 2 Feb 2022 19:53:35 GMT
- Title: NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability
Training, and Noise Injections
- Authors: N. Benjamin Erichson, Soon Hoe Lim, Francisco Utrera, Winnie Xu, Ziang
Cao, Michael W. Mahoney
- Abstract summary: We introduce NoisyMix, a training scheme that combines data augmentations with stability training and noise injections to improve both model robustness and in-domain accuracy.
We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For many real-world applications, obtaining stable and robust statistical
performance is more important than simply achieving state-of-the-art predictive
test accuracy, and thus robustness of neural networks is an increasingly
important topic. Relatedly, data augmentation schemes have been shown to
improve robustness with respect to input perturbations and domain shifts.
Motivated by this, we introduce NoisyMix, a training scheme that combines data
augmentations with stability training and noise injections to improve both
model robustness and in-domain accuracy. This combination promotes models that
are consistently more robust and that provide well-calibrated estimates of
class membership probabilities. We demonstrate the benefits of NoisyMix on a
range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P.
Moreover, we provide theory to understand implicit regularization and
robustness of NoisyMix.
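The abstract describes combining data augmentation, stability training, and noise injections into one objective. As an illustrative sketch only (the exact NoisyMix loss is defined in the paper; function names and the Jensen-Shannon form of the stability term here are assumptions), such a training objective might pair a cross-entropy term on mixed soft labels with a consistency term tying clean and noisy predictions together:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), summed over classes, averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def noisymix_style_loss(logits_clean, logits_noisy, y_mixed, stability_weight=1.0):
    """Sketch of a NoisyMix-style objective (illustrative, not the paper's
    exact loss): cross-entropy on mixed (soft) labels for the noisy/augmented
    view, plus a Jensen-Shannon stability term between clean and noisy views."""
    p_clean = softmax(logits_clean)
    p_noisy = softmax(logits_noisy)
    # Cross-entropy against soft mixup-style labels.
    ce = -np.mean(np.sum(y_mixed * np.log(p_noisy + 1e-12), axis=-1))
    # Jensen-Shannon divergence between the two views (the stability term).
    m = 0.5 * (p_clean + p_noisy)
    js = 0.5 * kl(p_clean, m) + 0.5 * kl(p_noisy, m)
    return ce + stability_weight * js
```

The stability term is zero when the clean and noisy predictions agree, so it explicitly rewards consistent (and hence better-calibrated) class-probability estimates under perturbation.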
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been regarded as a challenging property to encode for neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
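The summary above describes remapping the input domain into a constrained range to reduce the Lipschitz constant. A minimal sketch of that idea (names and ranges are illustrative assumptions, not the paper's algorithm): an affine squeeze `x -> a*x + b` with `|a| < 1` composed before a network with Lipschitz constant `L` yields an end-to-end constant of `|a| * L`, so shrinking the input range shrinks the certified bound:

```python
import numpy as np

def remap_inputs(x, lo=-1.0, hi=1.0):
    """Illustrative input remapping (a sketch of the idea, not the paper's
    method): affinely map inputs from their observed range into [lo, hi].
    Returns the remapped inputs and the scale factor a; composing this map
    before an L-Lipschitz network gives an end-to-end constant of a * L."""
    x_min, x_max = x.min(), x.max()
    a = (hi - lo) / (x_max - x_min)  # contraction factor when the data range exceeds hi - lo
    return lo + a * (x - x_min), a
```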
arXiv Detail & Related papers (2024-06-28T03:10:36Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robustness benchmark datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- IPMix: Label-Preserving Data Augmentation Method for Training Robust Classifiers [4.002089584222719]
We propose IPMix, a simple data augmentation approach to improve robustness without hurting clean accuracy.
IPMix integrates three levels of data augmentation into a coherent and label-preserving technique to increase the diversity of training data.
Experiments demonstrate that IPMix outperforms state-of-the-art methods in corruption robustness on CIFAR-C and ImageNet-C.
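The entry above mentions three levels of label-preserving augmentation. As a rough sketch inspired by that description (not the official IPMix implementation; the specific levels, ranges, and blend weights below are assumptions), mixing can be applied at the image, patch, and pixel level while keeping the original label:

```python
import numpy as np

def ipmix_style(img, aux, rng=None):
    """Illustrative three-level, label-preserving mix: blend an auxiliary
    image globally, swap in one patch from it, and perturb a small random
    subset of pixels. The original image's label is kept throughout."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy().astype(float)
    # Image level: convex blend, keeping the original image dominant.
    lam = rng.uniform(0.7, 1.0)
    out = lam * out + (1 - lam) * aux
    # Patch level: copy one small patch from the auxiliary image.
    h, w = out.shape[:2]
    ph, pw = max(1, h // 4), max(1, w // 4)
    i = rng.integers(0, h - ph + 1)
    j = rng.integers(0, w - pw + 1)
    out[i:i + ph, j:j + pw] = aux[i:i + ph, j:j + pw]
    # Pixel level: add small noise to ~5% of pixels.
    mask = rng.random(out.shape) < 0.05
    return np.where(mask, out + 0.1 * rng.standard_normal(out.shape), out)
```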
arXiv Detail & Related papers (2023-10-07T11:45:33Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance [10.245536402327096]
Much prior work has targeted specific forms of robustness in deep neural network models.
In this paper, we propose a new data processing and training method, called AugRmixAT, which can simultaneously improve the generalization ability and multiple robustness of neural network models.
arXiv Detail & Related papers (2022-07-21T04:02:24Z)
- Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines [65.0803400763215]
This work critically examines how adversarial robustness guarantees change when state-of-the-art certifiably robust models encounter out-of-distribution data.
We propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
We find that FourierMix augmentations help eliminate the spectral bias of certifiably robust models, enabling them to achieve significantly better robustness guarantees on a range of OOD benchmarks.
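The entry above motivates augmentations with broad spectral coverage. A minimal Fourier-domain augmentation in that spirit (a sketch only, not FourierMix's actual procedure; the amplitude-jitter scheme below is an assumption) perturbs the amplitude spectrum while preserving phase, spreading the perturbation across all spatial frequencies:

```python
import numpy as np

def fourier_amplitude_jitter(img, strength=0.2, rng=None):
    """Illustrative Fourier-domain augmentation: multiply the amplitude
    spectrum by random per-frequency factors while keeping the phase.
    strength=0 returns the image unchanged (up to FFT round-off)."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    jitter = 1.0 + strength * rng.standard_normal(amp.shape)
    # Recombine jittered amplitude with the original phase and invert.
    return np.fft.ifft2(jitter * amp * np.exp(1j * phase)).real
```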
arXiv Detail & Related papers (2021-12-01T17:11:22Z)
- Noisy Recurrent Neural Networks [45.94390701863504]
We study recurrent neural networks (RNNs) trained by injecting noise into hidden states as discretizations of differential equations driven by input data.
We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with larger classification margin.
Our theory is supported by empirical results which demonstrate improved robustness with respect to various input perturbations, while maintaining state-of-the-art performance.
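The entry above views noise injection into hidden states as a discretization of a noise-driven differential equation. A minimal sketch of that view (an Euler-Maruyama step with a tanh drift; the drift form and weight names are illustrative assumptions, not the paper's model):

```python
import numpy as np

def noisy_rnn_forward(xs, W_h, W_x, sigma=0.1, dt=1.0, rng=None):
    """Sketch of a noise-injected RNN as an Euler-Maruyama discretization of
    dh = f(h, x) dt + sigma dB:
        h_{t+1} = h_t + dt * tanh(W_h h_t + W_x x_t) + sigma * sqrt(dt) * xi_t
    with xi_t ~ N(0, I) injected into the hidden state at each step."""
    rng = np.random.default_rng() if rng is None else rng
    h = np.zeros(W_h.shape[0])
    for x in xs:
        drift = np.tanh(W_h @ h + W_x @ x)
        noise = sigma * np.sqrt(dt) * rng.standard_normal(h.shape)
        h = h + dt * drift + noise
    return h
```

Setting `sigma=0` recovers a deterministic residual-style RNN step, which is the baseline the injected noise implicitly regularizes toward flatter minima and more stable dynamics.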
arXiv Detail & Related papers (2021-02-09T15:20:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.