Intriguing Properties of Input-dependent Randomized Smoothing
- URL: http://arxiv.org/abs/2110.05365v3
- Date: Fri, 8 Mar 2024 18:10:06 GMT
- Title: Intriguing Properties of Input-dependent Randomized Smoothing
- Authors: Peter Súkeník, Aleksei Kuvshinov, Stephan Günnemann
- Abstract summary: We show that input-dependent smoothing suffers from the curse of dimensionality, forcing the variance function to have low semi-elasticity.
We provide a theoretical and practical framework that enables the use of input-dependent smoothing even in the presence of the curse of dimensionality.
We present one concrete design of the smoothing variance function and test it on CIFAR10 and MNIST.
- Score: 6.0887051533533265
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Randomized smoothing is currently considered the state-of-the-art method to
obtain certifiably robust classifiers. Despite its remarkable performance, the
method is associated with various serious problems such as "certified accuracy
waterfalls", the certification vs. accuracy trade-off, and even fairness issues.
Input-dependent smoothing approaches have been proposed with the intention of
overcoming these flaws. However, we demonstrate that these methods lack formal
guarantees, so the resulting certificates are not justified. We show that, in
general, input-dependent smoothing suffers from the curse of dimensionality,
which forces the variance function to have low semi-elasticity. On the other
hand, we provide a theoretical and practical framework that enables the use of
input-dependent smoothing even in the presence of the curse of dimensionality,
under strict restrictions. We present one concrete design of the smoothing
variance function and test it on CIFAR10 and MNIST. Our design mitigates some
of the problems of classical smoothing and is formally grounded, yet further
improvement of the design is still necessary.
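As a minimal, illustrative sketch of the setting (not the paper's method), the following snippet performs Cohen-style Monte Carlo certification with an input-dependent smoothing deviation. The toy classifier, the variance function sigma_fn, and all parameters are hypothetical; the semi-elasticity condition in the comments is one common way to state the restriction the abstract refers to, and the naive radius computed at the end is exactly the kind of certificate the paper shows to be unjustified without its corrections.

```python
import numpy as np
from scipy.stats import norm, binomtest

# Hypothetical stand-in for a trained base classifier.
def base_classifier(x: np.ndarray) -> int:
    return int(x.sum() > 0)

# Hypothetical input-dependent smoothing deviation. A function sigma with
# semi-elasticity r satisfies |log sigma(x) - log sigma(y)| <= r * ||x - y||;
# the abstract's "low semi-elasticity" means r must be small in high dimension.
def sigma_fn(x: np.ndarray, base_sigma: float = 0.5, r: float = 0.01) -> float:
    return base_sigma * np.exp(r * np.tanh(np.linalg.norm(x)))

def certify(x: np.ndarray, n: int = 1000, alpha: float = 0.001,
            rng=np.random.default_rng(0)):
    """Cohen-style Monte Carlo certification, with sigma chosen per input.

    NOTE: with input-dependent sigma, the classical radius
    R = sigma * Phi^{-1}(p_lower) is NOT formally justified on its own;
    the paper derives corrected certificates under semi-elasticity
    restrictions, which this sketch deliberately omits.
    """
    sigma = sigma_fn(x)
    noisy = x[None, :] + sigma * rng.standard_normal((n, x.size))
    votes = np.array([base_classifier(z) for z in noisy])
    top = int(np.bincount(votes, minlength=2).argmax())
    count = int((votes == top).sum())
    # One-sided lower confidence bound on the top-class probability.
    p_lower = binomtest(count, n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    if p_lower <= 0.5:
        return None  # abstain
    return top, sigma * norm.ppf(p_lower)  # naive (uncorrected) radius

print(certify(np.array([0.3, -0.1, 0.4])))
```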
Related papers
- Stabilizing Quantization-Aware Training by Implicit-Regularization on Hessian Matrix [0.7261171488281837]
We find that the sharp landscape of loss, which leads to a dramatic performance drop, is an essential factor that causes instability.
We propose Feature-Perturbed Quantization (FPQ) to generalize and employ the feature distillation method to the quantized model.
arXiv Detail & Related papers (2025-03-14T07:56:20Z)
- Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [93.32594873253534]
Trustworthy machine learning requires meticulous regulation of model reliance on non-robust features.
We propose a framework to delineate and regulate such features by attributing model predictions to the input.
arXiv Detail & Related papers (2024-07-05T09:16:56Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
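The noise-accuracy proxy admits a short sketch; the classifier, sigma, and sample count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_noise_accuracy(classifier, x, label, sigma=0.25, n=100,
                            rng=np.random.default_rng(0)):
    """Fraction of Gaussian-noisy copies of x that keep the true label:
    the cheap per-input robustness proxy described above (illustrative
    parameters, not the paper's exact protocol)."""
    noisy = x[None, :] + sigma * rng.standard_normal((n, x.size))
    preds = np.array([classifier(z) for z in noisy])
    return float((preds == label).mean())

# Usage with a toy classifier:
clf = lambda z: int(z.sum() > 0)
print(gaussian_noise_accuracy(clf, np.array([0.2, 0.1]), label=1))
```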
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
The proposed SCORE (self-consistent robust error) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers (a simplified sketch follows this entry).
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
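A heavily simplified sketch of the self-mixup idea follows; the interpolation path, the uniform-label target, and all names are our own simplified reading of the procedure the summary describes, not the authors' exact recipe.

```python
import numpy as np

def self_mixup_pairs(x, y_onehot, x_adv, lambdas):
    """Form convex combinations between a clean input x and an adversarial
    point x_adv found against the smoothed classifier, pushing the label
    toward uniform as the mix approaches x_adv. A simplified reading of
    self-mixup, not the authors' exact recipe."""
    k = y_onehot.size
    uniform = np.full(k, 1.0 / k)
    xs, ys = [], []
    for lam in lambdas:
        xs.append((1 - lam) * x + lam * x_adv)           # input interpolation
        ys.append((1 - lam) * y_onehot + lam * uniform)  # confidence decay
    return np.stack(xs), np.stack(ys)

x, x_adv = np.array([0.2, 0.8]), np.array([0.6, 0.1])
xs, ys = self_mixup_pairs(x, np.array([1.0, 0.0]), x_adv, lambdas=[0.25, 0.5, 0.75])
print(xs.shape, ys[1])
```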
- Tight Second-Order Certificates for Randomized Smoothing [106.06908242424481]
We show that there also exists a universal curvature-like bound for Gaussian random smoothing.
In addition to proving the correctness of this novel certificate, we show that second-order smoothing (SoS) certificates are realizable and therefore tight.
arXiv Detail & Related papers (2020-10-20T18:03:45Z)
- Certifying Neural Network Robustness to Random Input Noise from Samples [14.191310794366075]
Methods to certify the robustness of neural networks in the presence of input uncertainty are vital in safety-critical settings.
We propose a novel robustness certification method that upper bounds the probability of misclassification when the input noise follows an arbitrary probability distribution (a simplified sketch follows this entry).
arXiv Detail & Related papers (2020-10-15T05:27:21Z)
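The sample-based certificate can be illustrated with a Monte Carlo construction using a one-sided Clopper-Pearson bound; the classifier, the noise source, and the parameters are illustrative, and the paper's actual method may differ.

```python
import numpy as np
from scipy.stats import binomtest

def misclassification_upper_bound(classifier, x, label, sample_noise,
                                  n=2000, alpha=0.01,
                                  rng=np.random.default_rng(0)):
    """High-confidence upper bound on P[classifier(x + noise) != label] for
    an arbitrary noise distribution, estimated from n i.i.d. noise samples.
    Illustrative Monte Carlo construction, not the paper's exact method."""
    errors = sum(classifier(x + sample_noise(rng)) != label for _ in range(n))
    ci = binomtest(errors, n).proportion_ci(confidence_level=1 - 2 * alpha,
                                            method="exact")
    return ci.high  # one-sided bound holding with probability >= 1 - alpha

clf = lambda z: int(z.sum() > 0)
noise = lambda rng: 0.3 * rng.standard_normal(3)  # any distribution works
print(misclassification_upper_bound(clf, np.array([0.4, 0.2, 0.1]), 1, noise))
```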
- Extensions and limitations of randomized smoothing for robustness guarantees [13.37805637358556]
We study how the choice of divergence between smoothing measures affects the final robustness guarantee.
We develop a method to certify robustness against any $\ell_p$ ($p \in \mathbb{N}_{>0}$) minimized adversarial perturbation.
arXiv Detail & Related papers (2020-06-07T17:22:32Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.