Hidden Cost of Randomized Smoothing
- URL: http://arxiv.org/abs/2003.01249v2
- Date: Fri, 12 Mar 2021 22:03:55 GMT
- Title: Hidden Cost of Randomized Smoothing
- Authors: Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei (Lily) Weng, Sijia Liu, Pin-Yu
Chen, Luca Daniel
- Abstract summary: In this paper, we point out the side effects of current randomized smoothing workflows.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
- Score: 72.93630656906599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fragility of modern machine learning models has drawn a considerable
amount of attention from both academia and the public. While immense interest
has gone into either crafting adversarial attacks as a way to measure the
robustness of neural networks or devising worst-case analytical robustness
verification with guarantees, few methods enjoy both scalability and
robustness guarantees at the same time. As an alternative to these attempts, randomized
smoothing adopts a different prediction rule that enables statistical
robustness arguments which easily scale to large networks. However, in this
paper, we point out the side effects of current randomized smoothing workflows.
Specifically, we articulate and prove two major points: 1) the decision
boundaries of smoothed classifiers will shrink, resulting in disparity in
class-wise accuracy; 2) applying noise augmentation in the training process
does not necessarily resolve the shrinking issue due to the inconsistent
learning objectives.
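As background for the prediction rule the abstract refers to: a smoothed classifier replaces a single forward pass with a majority vote over Gaussian-perturbed copies of the input. The sketch below is a minimal illustration of that rule, not code from the paper; `base_classifier`, `sigma`, and `n_samples` are hypothetical placeholders.

```python
# Minimal sketch of the randomized smoothing prediction rule.
# `base_classifier` stands in for any trained network that maps
# an input array to an integer class label.
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, num_classes=10):
    """Monte Carlo estimate of g(x) = argmax_c P(f(x + eps) = c),
    where eps ~ N(0, sigma^2 I)."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)  # one Gaussian-perturbed copy
        counts[base_classifier(noisy)] += 1
    return int(np.argmax(counts))  # majority vote over the noisy copies
```

The shrinking effect the paper proves has an intuitive reading in this rule: classes occupying small or thin regions of the input space lose vote mass to larger neighboring classes once noise is added, so their effective decision regions contract.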
Related papers
- Integrating uncertainty quantification into randomized smoothing based robustness guarantees [18.572496359670797]
Deep neural networks are vulnerable to adversarial attacks which can cause hazardous incorrect predictions in safety-critical applications.
Certified robustness via randomized smoothing gives a probabilistic guarantee that the smoothed classifier's predictions will not change within an $\ell$-ball around a given input (the canonical certified radius is sketched just after this entry).
Uncertainty-based rejection is a technique often applied in practice to defend models against adversarial attacks.
We demonstrate that this novel framework allows for a systematic evaluation of different network architectures and uncertainty measures.
arXiv Detail & Related papers (2024-10-27T13:07:43Z)
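For reference, guarantees of this form typically build on the canonical $\ell_2$ certificate of Cohen et al. (2019); the exact variant used in each paper here may differ. If, under noise $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the base classifier returns the top class with probability at least $\underline{p_A}$ and any other class with probability at most $\overline{p_B}$, the smoothed prediction is unchanged within the radius

```latex
% Certified l2 radius of the Gaussian-smoothed classifier (Cohen et al., 2019);
% \Phi^{-1} is the inverse standard-normal CDF.
R = \frac{\sigma}{2}\left( \Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}) \right)
```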
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unstable predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- Differentially Private Adversarial Robustness Through Randomized Perturbations [16.187650541902283]
Deep neural networks are provably sensitive to small perturbations on correctly classified examples, which can lead to erroneous predictions.
In this paper, we study adversarial robustness through randomized perturbations.
Our approach uses a novel density-based mechanism based on truncated Gumbel noise.
arXiv Detail & Related papers (2020-09-27T00:58:32Z)
- Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training [1.0878040851638]
We analyze and enhance robustness properties of both classifiers and regression models.
Specifically, we verify (non-)robustness, propose a robust training procedure, and show that our approach outperforms adversarial attacks.
In addition, we provide techniques to distinguish between reliable and unreliable predictions for unlabeled inputs, to quantify the influence of each feature on a prediction, and to compute a feature ranking.
arXiv Detail & Related papers (2020-07-28T10:58:06Z)
- Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise; a minimal sketch of such an objective follows this list.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
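To make the last entry concrete, here is a hedged PyTorch-style sketch of a consistency objective over Gaussian noise, in the spirit of that paper rather than its exact formulation; `model`, `sigma`, `m`, and `lam` are illustrative assumptions.

```python
# Sketch of consistency regularization over Gaussian noise: a standard
# classification loss on noisy copies plus a penalty on the divergence of
# each copy's prediction from the copies' mean prediction.
import torch
import torch.nn.functional as F

def consistency_loss(model, x, y, sigma=0.25, m=2, lam=5.0):
    # Log-probabilities on m independently noised copies of the batch.
    log_probs = [F.log_softmax(model(x + sigma * torch.randn_like(x)), dim=1)
                 for _ in range(m)]
    # Classification loss, averaged over the noisy copies.
    ce = sum(F.nll_loss(lp, y) for lp in log_probs) / m
    # Mean predictive distribution across copies; penalizing each copy's
    # divergence from it encourages consistent predictions under noise.
    mean_probs = torch.stack([lp.exp() for lp in log_probs]).mean(dim=0)
    kl = sum(F.kl_div(lp, mean_probs, reduction='batchmean')
             for lp in log_probs) / m
    return ce + lam * kl
```

Training the base network to agree with itself across noise draws targets exactly the quantity the smoothed classifier votes over, which is why such a regularizer can control the accuracy-robustness trade-off noted above.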