Incremental Randomized Smoothing Certification
- URL: http://arxiv.org/abs/2305.19521v2
- Date: Thu, 11 Apr 2024 00:38:29 GMT
- Title: Incremental Randomized Smoothing Certification
- Authors: Shubham Ugare, Tarun Suresh, Debangshu Banerjee, Gagandeep Singh, Sasa Misailovic
- Abstract summary: We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples.
We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over the certification that applies randomized smoothing of the approximate model from scratch.
- Score: 5.971462597321995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing-based certification is an effective approach for obtaining robustness certificates of deep neural networks (DNNs) against adversarial attacks. This method constructs a smoothed DNN model and certifies its robustness through statistical sampling, but it is computationally expensive, especially when certifying with a large number of samples. Furthermore, when the smoothed model is modified (e.g., quantized or pruned), certification guarantees may not hold for the modified DNN, and recertifying from scratch can be prohibitively expensive. We present the first approach for incremental robustness certification for randomized smoothing, IRS. We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples. IRS significantly reduces the computational cost of certifying modified DNNs while maintaining strong robustness guarantees. We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over the certification that applies randomized smoothing of the approximate model from scratch.
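The abstract describes certifying a smoothed DNN "through statistical sampling": draw many noisy copies of the input, count how often the model predicts the top class, lower-bound that probability with a confidence interval, and convert the bound into a certified radius. A minimal sketch of that base certification step (Cohen-et-al.-style, with an illustrative normal-approximation bound in place of the exact Clopper-Pearson Beta bound; all names here are illustrative, not IRS's actual API) shows why the sample count `n` dominates the cost that IRS amortizes:

```python
# Sketch of randomized-smoothing certification from noisy-sample counts.
# Assumption: a stdlib-only normal approximation stands in for the exact
# Clopper-Pearson lower bound used in practice.
import math
from statistics import NormalDist

def lower_confidence_bound(successes: int, n: int, alpha: float) -> float:
    # One-sided lower bound on the top-class probability from n samples.
    if successes == 0:
        return 0.0
    p_hat = successes / n
    z = NormalDist().inv_cdf(1 - alpha)
    return max(0.0, p_hat - z * math.sqrt(p_hat * (1 - p_hat) / n))

def certify_radius(successes: int, n: int, sigma: float, alpha: float = 0.001):
    """Return a certified L2 radius, or None (abstain) if the bound is too weak."""
    p_lower = lower_confidence_bound(successes, n, alpha)
    if p_lower <= 0.5:
        return None  # cannot certify: the smoothed model may not be confident
    # Radius grows with the noise scale sigma and the confidence margin.
    return sigma * NormalDist().inv_cdf(p_lower)
```

Tightening the bound enough to certify a useful radius typically takes on the order of 10^5 samples per input, each requiring a forward pass; IRS's contribution is reusing the original model's bound so the modified model needs far fewer.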
Related papers
- Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness [21.394217131341932]
We introduce a novel certifying adapters framework (CAF) that enables and enhances the certification of adversarial robustness.
CAF achieves improved certified accuracies when compared to methods based on random or denoised smoothing.
An ensemble of adapters enables a single pre-trained feature extractor to defend against a range of noise perturbation scales.
arXiv Detail & Related papers (2024-05-25T03:18:52Z)
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning proves that no adversarial sample within a certain range can evade a model under given conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity [27.04033198073254]
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
We show how today's "optimal" certificates can be improved by exploiting both the transitivity of certifications, and the geometry of the input space.
Our technique shows even more promising results, with a uniform $4$ percentage point increase in the achieved certified radius.
arXiv Detail & Related papers (2022-10-12T10:42:21Z)
- Smooth-Reduce: Leveraging Patches for Improved Certified Robustness [100.28947222215463]
We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
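The patch-aggregation idea described above — classify overlapping patches and combine their logits before certification — can be sketched as follows. This is an illustrative reconstruction from the summary, not Smooth-Reduce's actual implementation; `extract_patches` and `aggregate_logits` are hypothetical names:

```python
# Sketch of patch-based logit aggregation: overlapping windows are classified
# separately and their logits averaged; the averaged prediction is then fed to
# an unchanged smoothing-based certification step.
from typing import Callable, List, Sequence

def extract_patches(image: Sequence[Sequence[float]], patch: int, stride: int):
    """Yield overlapping patch x patch windows from a 2D image (row-major)."""
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - patch + 1, stride):
        for c in range(0, cols - patch + 1, stride):
            yield [row[c:c + patch] for row in image[r:r + patch]]

def aggregate_logits(classifier: Callable[[list], List[float]],
                     image, patch: int, stride: int) -> List[float]:
    """Average per-patch logits into a single prediction vector."""
    totals: List[float] = []
    count = 0
    for p in extract_patches(image, patch, stride):
        logits = classifier(p)
        if not totals:
            totals = [0.0] * len(logits)
        totals = [t + l for t, l in zip(totals, logits)]
        count += 1
    return [t / count for t in totals]
```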
arXiv Detail & Related papers (2022-05-12T15:26:20Z)
- Input-Specific Robustness Certification for Randomized Smoothing [76.76115360719837]
We propose Input-Specific Sampling (ISS) acceleration to achieve the cost-effectiveness for robustness certification.
ISS can speed up certification by more than three times at a limited cost of 0.05 in certified radius.
arXiv Detail & Related papers (2021-12-21T12:16:03Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We get 9% and 6% improvement over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet.
arXiv Detail & Related papers (2020-12-08T10:53:11Z)
- Certified Distributional Robustness on Smoothed Classifiers [27.006844966157317]
We propose the worst-case adversarial loss over input distributions as a robustness certificate.
By exploiting duality and the smoothness property, we provide an easy-to-compute upper bound as a surrogate for the certificate.
arXiv Detail & Related papers (2020-10-21T13:22:25Z)
- Enhancing Certified Robustness via Smoothed Weighted Ensembling [7.217295098686032]
We employ a Smoothed WEighted ENsembling scheme to improve the performance of randomized smoothed classifiers.
We show that, owing to the generality of ensembling, SWEEN can help achieve optimal certified robustness.
We also develop an adaptive prediction algorithm to reduce the prediction and certification cost of SWEEN models.
arXiv Detail & Related papers (2020-05-19T11:13:43Z)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.