Input-Specific Robustness Certification for Randomized Smoothing
- URL: http://arxiv.org/abs/2112.12084v1
- Date: Tue, 21 Dec 2021 12:16:03 GMT
- Title: Input-Specific Robustness Certification for Randomized Smoothing
- Authors: Ruoxin Chen, Jie Li, Junchi Yan, Ping Li, Bin Sheng
- Abstract summary: We propose Input-Specific Sampling (ISS) acceleration to achieve cost-effective robustness certification.
ISS can speed up certification by more than three times at a limited cost of 0.05 in certified radius.
- Score: 76.76115360719837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although randomized smoothing has demonstrated high certified robustness and
superior scalability to other certified defenses, the high computational
overhead of the robustness certification bottlenecks the practical
applicability, as it depends heavily on the large sample approximation for
estimating the confidence interval. In existing works, the sample size for the
confidence interval is universally set and agnostic to the input for
prediction. This Input-Agnostic Sampling (IAS) scheme may yield a poor Average
Certified Radius (ACR)-runtime trade-off which calls for improvement. In this
paper, we propose Input-Specific Sampling (ISS) acceleration to achieve
cost-effective robustness certification by adaptively reducing the sample size
based on the characteristics of each input. Furthermore, our method provides a
universal control on the decline in certified radius caused by the sample-size
reduction. The empirical results on CIFAR-10 and ImageNet show that ISS can
speed up certification by more than three times at a limited cost of 0.05 in
certified radius. Meanwhile, ISS surpasses IAS on the average certified radius
across extensive hyperparameter settings. Specifically, ISS achieves
ACR=0.958 on ImageNet ($\sigma=1.0$) in 250 minutes, compared to ACR=0.917 by
IAS under the same condition. We release our code in
\url{https://github.com/roy-ch/Input-Specific-Certification}.
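The ACR-runtime trade-off above comes from how the certified radius is computed from a finite-sample confidence bound on the top-class probability. A minimal sketch of that dependence (our own illustration: it uses a Hoeffding-style lower bound for simplicity, not the Clopper-Pearson interval used in practice, and is not the paper's ISS algorithm):

```python
import math
from statistics import NormalDist

def certified_radius(n_top, n, sigma, alpha=0.001):
    """Certified l2 radius of a Gaussian-smoothed classifier.

    n_top: samples voting for the top class; n: total samples drawn.
    Uses a Hoeffding lower confidence bound on the top-class probability
    p_A (a conservative stdlib-only stand-in for Clopper-Pearson), then
    applies the standard rule R = sigma * Phi^{-1}(p_A_lower).
    """
    p_lower = n_top / n - math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    if p_lower <= 0.5:
        return 0.0  # abstain: the bound is too weak to certify anything
    return sigma * NormalDist().inv_cdf(p_lower)

# Fewer samples -> looser bound on p_A -> smaller certified radius.
r_large = certified_radius(99_000, 100_000, sigma=1.0)
r_small = certified_radius(9_900, 10_000, sigma=1.0)
```

With 10x fewer samples at the same empirical accuracy, the bound on p_A loosens and the certified radius shrinks; input-specific schemes exploit the fact that for easy inputs this loss is small, so most of the sampling budget can be saved there.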
Related papers
- Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency [6.199300239433395]
This work demonstrates that reducing the number of samples by one or two orders of magnitude can still enable the computation of a slightly smaller robustness radius.
We provide the mathematical foundation for explaining the phenomenon while experimentally showing promising results on the standard CIFAR-10 and ImageNet datasets.
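This is consistent with how the sample size $n$ enters the standard certification rule. As a rough sketch (our own illustration, using a normal-approximation confidence bound as a stand-in for the exact Clopper-Pearson interval):

```latex
\underline{p_A} \;\approx\; \hat{p}_A - z_{1-\alpha}\sqrt{\frac{\hat{p}_A\,(1-\hat{p}_A)}{n}},
\qquad
R \;=\; \sigma\,\Phi^{-1}\!\bigl(\underline{p_A}\bigr),
```

so $n$ affects $R$ only through an $O(1/\sqrt{n})$ widening of the confidence interval: 100$\times$ fewer samples widens the interval by a factor of 10, which typically costs only a modest fraction of the radius when $\hat{p}_A$ is well above the $1/2$ threshold.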
arXiv Detail & Related papers (2024-04-26T12:43:19Z)
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning means proving that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- Incremental Randomized Smoothing Certification [5.971462597321995]
We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples.
We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over certifying the approximated model from scratch with randomized smoothing.
arXiv Detail & Related papers (2023-05-31T03:11:15Z)
- Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity [27.04033198073254]
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
We show how today's "optimal" certificates can be improved by exploiting both the transitivity of certifications, and the geometry of the input space.
Our technique shows even more promising results, with a uniform $4$ percentage point increase in the achieved certified radius.
arXiv Detail & Related papers (2022-10-12T10:42:21Z)
- Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis [74.85187027051879]
We show that it is possible to approximate the optimal certificate with arbitrary precision, by probing the decision boundary with several noise distributions.
This result fosters further research on classifier-specific certification and demonstrates that randomized smoothing is still worth investigating.
arXiv Detail & Related papers (2022-06-03T17:48:54Z)
- Certified Error Control of Candidate Set Pruning for Two-Stage Relevance Ranking [57.42241521034744]
We propose the concept of certified error control of candidate set pruning for relevance ranking.
Our method successfully prunes the first-stage retrieved candidate sets to improve the second-stage reranking speed.
arXiv Detail & Related papers (2022-05-19T16:00:13Z)
- Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders [13.723000245697866]
We investigate the possibility of performing randomized smoothing and establishing the robust certification in the latent space of a network.
We use modules, whose Lipschitz property is known for free by design, to propagate the certified radius estimated in the latent space back to the input space.
Experiments on CIFAR10 and ImageNet show that our method achieves competitive certified robustness with significantly improved efficiency at test time.
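The radius-propagation step described above can be sketched as follows (our own illustration of the Lipschitz argument, not the paper's code):

```python
def input_space_radius(latent_radius: float, encoder_lipschitz: float) -> float:
    """Propagate a latent-space certified radius back to the input space.

    If the encoder E satisfies ||E(x) - E(x')|| <= L * ||x - x'||
    (L-Lipschitz), then every input perturbation of norm at most r / L
    keeps E(x) inside the certified latent ball of radius r, so r / L
    is a valid input-space certified radius.
    """
    return latent_radius / encoder_lipschitz
```

This is why orthogonal encoders are attractive: with L = 1 the latent radius transfers to the input space unchanged, while a loose Lipschitz constant would shrink the certificate by the same factor.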
arXiv Detail & Related papers (2021-08-01T16:48:43Z)
- Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We get 9% and 6% improvement over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet.
arXiv Detail & Related papers (2020-12-08T10:53:11Z)
- Certifying Neural Network Robustness to Random Input Noise from Samples [14.191310794366075]
Methods to certify the robustness of neural networks in the presence of input uncertainty are vital in safety-critical settings.
We propose a novel robustness certification method that upper bounds the probability of misclassification when the input noise follows an arbitrary probability distribution.
arXiv Detail & Related papers (2020-10-15T05:27:21Z)
- Second-Order Provable Defenses against Adversarial Attacks [63.34032156196848]
We show that if the eigenvalues of the Hessian of the network are bounded, we can compute a certificate in the $l_2$ norm efficiently using convex optimization.
We achieve certified accuracies of 69.79%, 57.78%, and 53.19% on 2-, 3-, and 4-layer networks respectively, outperforming IBP-based methods.
arXiv Detail & Related papers (2020-06-01T05:55:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.