May the Noise be with you: Adversarial Training without Adversarial
Examples
- URL: http://arxiv.org/abs/2312.08877v1
- Date: Tue, 12 Dec 2023 08:22:28 GMT
- Title: May the Noise be with you: Adversarial Training without Adversarial
Examples
- Authors: Ayoub Arous, Andres F Lopez-Lopera, Nael Abu-Ghazaleh, Ihsen Alouani
- Abstract summary: We investigate the question: Can we obtain adversarially-trained models without training on adversarial examples?
Our proposed approach incorporates inherent stochasticity by embedding Gaussian noise within the layers of the NN model at training time.
Our work contributes adversarially trained networks using a completely different approach, with empirically similar robustness to adversarial training.
- Score: 3.4673556247932225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the following question: Can we obtain
adversarially-trained models without training on adversarial examples? Our
intuition is that training a model with inherent stochasticity, i.e.,
optimizing the parameters by minimizing a stochastic loss function, yields a
robust expectation function that is non-stochastic. In contrast to related
methods that introduce noise at the input level, our proposed approach
incorporates inherent stochasticity by embedding Gaussian noise within the
layers of the NN model at training time. We model the propagation of noise
through the layers, introducing a closed-form stochastic loss function that
encapsulates a noise variance parameter. Additionally, we contribute a
formalized noise-aware gradient, enabling the optimization of model parameters
while accounting for stochasticity. Our experimental results confirm that the
expectation model of a stochastic architecture trained on a benign distribution
is adversarially robust. Interestingly, we find that the impact of the applied
Gaussian noise's standard deviation on both robustness and baseline accuracy
closely mirrors the impact of the noise magnitude employed in adversarial
training. Our work contributes adversarially trained networks using a
completely different approach, with empirically similar robustness to
adversarial training.
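As a concrete illustration of the training-time mechanism the abstract describes, here is a minimal PyTorch sketch: Gaussian noise embedded between layers during training, with the deterministic expectation path used at inference. The architecture, noise placement, and sigma value are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
    """Adds zero-mean Gaussian noise to activations at training time only."""
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and self.sigma > 0:
            return x + self.sigma * torch.randn_like(x)
        return x  # inference uses the deterministic "expectation" path

# Illustrative stochastic architecture: noise inside the network, not at the input.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), GaussianNoise(sigma=0.1), nn.ReLU(),
    nn.Linear(256, 256), GaussianNoise(sigma=0.1), nn.ReLU(),
    nn.Linear(256, 10),
)
```

Note that this sketch approximates the stochastic loss by sampling a single noise draw per forward pass; the paper instead derives a closed-form stochastic loss and a noise-aware gradient over the noise variance parameter, which this illustration does not implement.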
Related papers
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Robust Estimation of Causal Heteroscedastic Noise Models [7.568978862189266]
- Robust Estimation of Causal Heteroscedastic Noise Models
Student's $t$-distribution is known for its robustness in accounting for sampling variability with smaller sample sizes and extreme values without significantly altering the overall distribution shape.
Our empirical evaluations demonstrate that our estimators are more robust and achieve better overall performance across synthetic and real benchmarks.
arXiv Detail & Related papers (2023-12-15T02:26:35Z) - Stable Unlearnable Example: Enhancing the Robustness of Unlearnable
- Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise [31.586389548657205]
Unlearnable examples are designed to degrade the generalization performance of models by adding imperceptible noise to the data.
We introduce stable error-minimizing noise (SEM), which trains the defensive noise against random perturbation instead of the time-consuming adversarial perturbation.
SEM achieves a new state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet Subset.
arXiv Detail & Related papers (2023-11-22T01:43:57Z) - Understanding and Mitigating the Label Noise in Pre-training on
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the proposed method's general applicability and effectiveness in terms of both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Confidence-aware Training of Smoothed Classifiers for Certified
Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
arXiv Detail & Related papers (2022-12-18T03:57:12Z) - Self-Adapting Noise-Contrastive Estimation for Energy-Based Models [0.0]
- Self-Adapting Noise-Contrastive Estimation for Energy-Based Models [0.0]
Training energy-based models with noise-contrastive estimation (NCE) is theoretically feasible but practically challenging.
Previous works have explored modelling the noise distribution as a separate generative model, and then concurrently training this noise model with the EBM.
This thesis proposes a self-adapting NCE algorithm which uses static instances of the EBM along its training trajectory as the noise distribution.
arXiv Detail & Related papers (2022-11-03T15:17:43Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should match the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Stochastic Perturbations of Tabular Features for Non-Deterministic
Inference with Automunge [0.0]
Injecting Gaussian noise into training features is well known to have regularization properties.
This paper considers noise injections to numeric or categoric tabular features as passed to inference.
arXiv Detail & Related papers (2022-02-18T15:24:03Z) - Towards Robust Waveform-Based Acoustic Models [41.82019240477273]
- Towards Robust Waveform-Based Acoustic Models [41.82019240477273]
We propose an approach for learning robust acoustic models in adverse environments, characterized by a significant mismatch between training and test conditions.
Our approach is an instance of vicinal risk minimization, which aims to improve risk estimates during training by replacing the delta functions that define the empirical density over the input space with an approximation of the marginal population density in the vicinity of the training samples.
arXiv Detail & Related papers (2021-10-16T18:21:34Z) - Consistency Regularization for Certified Robustness of Smoothed
Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)