Input-Specific Robustness Certification for Randomized Smoothing
- URL: http://arxiv.org/abs/2112.12084v1
- Date: Tue, 21 Dec 2021 12:16:03 GMT
- Title: Input-Specific Robustness Certification for Randomized Smoothing
- Authors: Ruoxin Chen, Jie Li, Junchi Yan, Ping Li, Bin Sheng
- Abstract summary: We propose Input-Specific Sampling (ISS) acceleration to make robustness certification cost-effective.
ISS can speed up certification by more than three times at a limited cost of 0.05 in certified radius.
- Score: 76.76115360719837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although randomized smoothing has demonstrated high certified robustness and
superior scalability compared to other certified defenses, the high computational
overhead of robustness certification bottlenecks its practical applicability, as
certification depends heavily on large-sample approximation for estimating the
confidence interval. In existing works, the sample size for the confidence
interval is set universally, agnostic to the input being predicted. This
Input-Agnostic Sampling (IAS) scheme may yield a poor Average Certified Radius
(ACR)-runtime trade-off, which calls for improvement. In this paper, we propose
Input-Specific Sampling (ISS) acceleration, which makes robustness certification
cost-effective by adaptively reducing the sample size based on the
characteristics of each input. Furthermore, our method controls the decline in
certified radius caused by the sample-size reduction uniformly across inputs.
Empirical results on CIFAR-10 and ImageNet show that ISS can speed up
certification by more than three times at a limited cost of 0.05 in certified
radius. Meanwhile, ISS surpasses IAS in average certified radius across
extensive hyperparameter settings. Specifically, ISS achieves
ACR=0.958 on ImageNet ($\sigma=1.0$) in 250 minutes, compared to ACR=0.917 by
IAS under the same condition. We release our code at
\url{https://github.com/roy-ch/Input-Specific-Certification}.
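For intuition, here is a minimal sketch of how the certified radius in randomized smoothing depends on the Monte-Carlo sample size, and how an input-specific budget can cap the radius decline at roughly 0.05. The radius formula is the standard one from Cohen et al. (2019); the sample-size schedule and the stopping rule below are illustrative assumptions, not the paper's exact ISS algorithm.

```python
# Sketch of sample-size-dependent certification in randomized smoothing.
from scipy.stats import beta, norm

def certified_radius(k: int, n: int, sigma: float, alpha: float = 0.001) -> float:
    """Certified l2 radius from n noisy samples, k of which voted for the
    top class, via a one-sided Clopper-Pearson lower confidence bound."""
    if k == 0:
        return 0.0
    p_lower = beta.ppf(alpha, k, n - k + 1)  # lower bound on top-class probability
    if p_lower <= 0.5:
        return 0.0  # abstain: the smoothed classifier is not provably confident
    return sigma * norm.ppf(p_lower)

def iss_sample_size(p_hat: float, sigma: float, n_max: int = 100_000,
                    eps: float = 0.05, alpha: float = 0.001) -> int:
    """Smallest budget whose radius is within eps of the full-budget radius,
    assuming the empirical top-class frequency p_hat stays stable."""
    r_full = certified_radius(int(p_hat * n_max), n_max, sigma, alpha)
    for n in (1_000, 2_000, 5_000, 10_000, 20_000, 50_000, n_max):
        if r_full - certified_radius(int(p_hat * n), n, sigma, alpha) <= eps:
            return n
    return n_max

# Input-agnostic sampling (IAS) charges every input the full budget n_max;
# an input-specific budget depends on how confident the smoothed model is.
for p_hat in (0.75, 0.95, 0.999):
    n = iss_sample_size(p_hat, sigma=1.0)
    r = certified_radius(int(p_hat * n), n, sigma=1.0)
    print(f"p_hat={p_hat}: budget n={n}, radius={r:.3f}")
```

In this toy setting, inputs with a moderate top-class frequency come within 0.05 of their full-budget radius after a few thousand samples, while near-deterministic inputs keep gaining radius up to the full budget; a universal sample size therefore either over-spends on the former or truncates the latter.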
Related papers
- Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing [12.513566361816684]
Implicit models such as Deep Equilibrium Models (DEQs) have emerged as promising alternatives for building deep neural networks.
Existing certified defenses for DEQs employ deterministic certification methods that cannot certify on large-scale datasets.
We provide the first randomized smoothing certified defense for DEQs to address these limitations.
arXiv Detail & Related papers (2024-11-01T06:14:11Z) - Average Certified Radius is a Poor Metric for Randomized Smoothing [7.960121888896864]
We show that the average certified radius (ACR) is an exceptionally poor metric for evaluating robustness guarantees provided by randomized smoothing.
We show that ACR is much more sensitive to improvements on easy samples than on hard ones (a toy numeric illustration appears after this list).
arXiv Detail & Related papers (2024-10-09T13:58:41Z) - DC-Solver: Improving Predictor-Corrector Diffusion Sampler via Dynamic Compensation [68.55191764622525]
Diffusion models (DPMs) have shown remarkable performance in visual synthesis but are computationally expensive due to the need for multiple evaluations during the sampling.
Recent predictor-corrector diffusion samplers have significantly reduced the required number of evaluations, but inherently suffer from a misalignment issue.
We introduce a new fast DPM sampler called DC-Solver, which leverages dynamic compensation to mitigate the misalignment.
arXiv Detail & Related papers (2024-09-05T17:59:46Z) - Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency [6.199300239433395]
This work demonstrates that reducing the number of samples by one or two orders of magnitude can still enable the computation of a slightly smaller robustness radius.
We provide the mathematical foundation for explaining the phenomenon while experimentally showing promising results on the standard CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2024-04-26T12:43:19Z) - Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning means proving that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z) - Incremental Randomized Smoothing Certification [5.971462597321995]
We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples.
We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over applying randomized smoothing to the approximated model from scratch.
arXiv Detail & Related papers (2023-05-31T03:11:15Z) - Certified Error Control of Candidate Set Pruning for Two-Stage Relevance
Ranking [57.42241521034744]
We propose the concept of certified error control of candidate set pruning for relevance ranking.
Our method successfully prunes the first-stage retrieved candidate sets to improve the second-stage reranking speed.
arXiv Detail & Related papers (2022-05-19T16:00:13Z) - Certified Defense via Latent Space Randomized Smoothing with Orthogonal
Encoders [13.723000245697866]
We investigate the possibility of performing randomized smoothing and establishing robust certification in the latent space of a network.
We use modules whose Lipschitz constants are known by design to propagate the certified radius estimated in the latent space back to the input space (see the sketch after this list).
Experiments on CIFAR10 and ImageNet show that our method achieves competitive certified robustness with significantly improved efficiency at test time.
arXiv Detail & Related papers (2021-08-01T16:48:43Z) - Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We obtain 9% and 6% improvements over the certified accuracy of the strongest baseline at a radius of 0.5 on CIFAR10 and ImageNet, respectively.
arXiv Detail & Related papers (2020-12-08T10:53:11Z) - Second-Order Provable Defenses against Adversarial Attacks [63.34032156196848]
We show that if the eigenvalues of the Hessian of the network are bounded, we can compute a robustness certificate in the $l_2$ norm efficiently using convex optimization.
We achieve certified accuracy of 5.78%, 44.96%, and 43.19% on 2-, 3-, and 4-layer networks respectively, compared with IBP-based methods.
arXiv Detail & Related papers (2020-06-01T05:55:18Z)
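A toy numeric illustration of the ACR critique noted in the entry above (made-up radii, not data from that paper): growing one already-large radius moves the average exactly as much as newly certifying several hard samples.

```python
# Made-up per-sample certified radii; 0.0 means the sample is not certified.
import numpy as np

radii = np.array([3.0, 0.3, 0.0, 0.0])
print(radii.mean())                  # baseline ACR = 0.825

easy = radii.copy(); easy[0] += 0.8  # push one easy sample's radius further out
hard = radii.copy(); hard[2:] = 0.4  # newly certify the two hard samples

print(easy.mean())  # 1.025 -- same ACR gain...
print(hard.mean())  # 1.025 -- ...whether one easy or two hard samples improved
```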
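And for the latent-space smoothing entry, a sketch of the radius propagation it relies on: if the encoder f satisfies ||f(x) - f(x')|| <= L * ||x - x'||, then a certified latent radius r implies an input-space radius of r / L. The orthogonal matrix `W` below is a random stand-in for a 1-Lipschitz encoder layer, not that paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # random orthogonal matrix

# Orthogonal maps are isometries, so the Lipschitz constant is exactly 1.
x, y = rng.standard_normal(64), rng.standard_normal(64)
assert np.isclose(np.linalg.norm(W @ x - W @ y), np.linalg.norm(x - y))

latent_radius, lipschitz = 0.5, 1.0  # certificate obtained in latent space
print(latent_radius / lipschitz)     # 0.5 -- radius carried back to input space
```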
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.