Understanding Noise-Augmented Training for Randomized Smoothing
- URL: http://arxiv.org/abs/2305.04746v1
- Date: Mon, 8 May 2023 14:46:34 GMT
- Title: Understanding Noise-Augmented Training for Randomized Smoothing
- Authors: Ambar Pal and Jeremias Sulam
- Abstract summary: Randomized smoothing is a technique for providing provable robustness guarantees against adversarial attacks.
We show that, without making stronger distributional assumptions, no benefit can be expected from predictors trained with noise augmentation.
Our analysis has direct implications for the practical deployment of randomized smoothing.
- Score: 14.061680807550722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized smoothing is a technique for providing provable robustness
guarantees against adversarial attacks while making minimal assumptions about a
classifier. This method relies on taking a majority vote of any base classifier
over multiple noise-perturbed inputs to obtain a smoothed classifier, and it
remains the tool of choice to certify deep and complex neural network models.
Nonetheless, non-trivial performance of such a smoothed classifier crucially
depends on the base model being trained on noise-augmented data, i.e., on a
smoothed input distribution. While widely adopted in practice, it is still
unclear how this noisy training of the base classifier precisely affects the
risk of the robust smoothed classifier, leading to heuristics and tricks that
are poorly understood. In this work we analyze these trade-offs theoretically
in a binary classification setting, proving that these common observations are
not universal. We show that, without making stronger distributional
assumptions, no benefit can be expected from predictors trained with
noise augmentation, and we further characterize distributions where such
benefit is obtained. Our analysis has direct implications for the practical
deployment of randomized smoothing, and we illustrate some of these via
experiments on CIFAR-10 and MNIST, as well as on synthetic datasets.
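
To make the two ingredients in the abstract concrete (noise-augmented training of the base model, then majority-vote smoothing at prediction time), here is a minimal runnable sketch on synthetic two-blob data. The logistic-regression base model, the noise level `sigma`, and all names are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-blob binary task (illustrative stand-in for the paper's
# binary classification setting, not the authors' experimental setup).
n, sigma = 500, 0.5
X = np.vstack([rng.normal(-1.0, 0.3, (n, 2)), rng.normal(1.0, 0.3, (n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])

def train_logreg(X, y, lr=0.1, steps=2000):
    """Gradient-descent logistic regression: the 'base classifier'."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

# Noise-augmented training: fit the base model on x + delta with
# delta ~ N(0, sigma^2 I), i.e., on the smoothed input distribution.
w = train_logreg(X + rng.normal(0.0, sigma, X.shape), y)

def smoothed_predict(x, w, sigma, m=1000):
    """Majority vote of the base classifier over m noisy copies of x."""
    noisy = x + rng.normal(0.0, sigma, (m, x.size))
    votes = np.hstack([noisy, np.ones((m, 1))]) @ w > 0
    return int(votes.mean() > 0.5)

print(smoothed_predict(np.array([0.8, 0.9]), w, sigma))  # expect 1
```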
Related papers
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
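For reference, the standard Monte-Carlo certificate that zero-shot procedures like the one above build on (the Cohen-et-al.-style $\ell_2$ radius, not this paper's novel procedure) can be sketched as follows; the confidence level `alpha` and vote counts are illustrative:

```python
import numpy as np
from scipy.stats import beta, norm

def certified_radius(top_votes, n, sigma, alpha=0.001):
    """Cohen-style l2 certificate from Monte-Carlo votes: lower-bound the
    smoothed top-class probability p_A with a one-sided Clopper-Pearson
    interval, then R = sigma * Phi^{-1}(p_A). Abstains (radius 0) if the
    bound does not exceed 1/2."""
    if top_votes == 0:
        return 0.0
    p_lower = beta.ppf(alpha, top_votes, n - top_votes + 1)
    return sigma * norm.ppf(p_lower) if p_lower > 0.5 else 0.0

# e.g. 990 of 1000 noisy votes for the top class at sigma = 0.5
print(certified_radius(990, 1000, 0.5))  # roughly 1.0
```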
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
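A minimal sketch of the "accuracy under Gaussian noise" proxy as we read it; the `predict` function and parameters here are hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy_under_noise(predict, x, label, sigma, m=200):
    """Fraction of m Gaussian perturbations of x that the classifier still
    labels correctly -- a cheap per-input robustness proxy."""
    noisy = x + rng.normal(0.0, sigma, (m, x.size))
    return float(np.mean([predict(z) == label for z in noisy]))

# Hypothetical classifier: sign of the first coordinate.
predict = lambda z: int(z[0] > 0)
print(accuracy_under_noise(predict, np.array([0.3, -0.2]), 1, sigma=0.25))
```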
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption (that the noise distribution should match the data distribution) can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Benign Overfitting in Adversarially Robust Linear Classification [91.42259226639837]
"Benign overfitting", where classifiers memorize noisy training data yet still achieve a good generalization performance, has drawn great attention in the machine learning community.
We show that benign overfitting indeed occurs in adversarial training, a principled approach to defend against adversarial examples.
arXiv Detail & Related papers (2021-12-31T00:27:31Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
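As a rough illustration of that idea, the following sketch penalizes disagreement among the model's predictions on noisy copies of an input via a KL term. It captures the principle only; `prob_fn`, `m`, and the exact divergence are our assumptions rather than the paper's loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency_penalty(prob_fn, x, sigma, m=4, eps=1e-12):
    """Average KL divergence from each noisy copy's predicted distribution
    to their mean: zero iff all m noisy predictions agree."""
    noisy = x + rng.normal(0.0, sigma, (m, x.size))
    P = np.stack([prob_fn(z) for z in noisy])  # (m, num_classes)
    mean = P.mean(axis=0)
    return float(np.mean(np.sum(P * (np.log(P + eps) - np.log(mean + eps)), axis=1)))

def prob_fn(z):
    # Hypothetical 2-class model: logistic in the first coordinate.
    p1 = 1.0 / (1.0 + np.exp(-z[0]))
    return np.array([1.0 - p1, p1])

print(consistency_penalty(prob_fn, np.array([0.1, 0.0]), sigma=0.5))
```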
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
- Multi-class Gaussian Process Classification with Noisy Inputs [2.362412515574206]
In some situations, the amount of noise can be known beforehand.
We have evaluated the proposed methods in several experiments involving synthetic and real data.
arXiv Detail & Related papers (2020-01-28T18:55:13Z)