Double Bubble, Toil and Trouble: Enhancing Certified Robustness through
Transitivity
- URL: http://arxiv.org/abs/2210.06077v1
- Date: Wed, 12 Oct 2022 10:42:21 GMT
- Title: Double Bubble, Toil and Trouble: Enhancing Certified Robustness through
Transitivity
- Authors: Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin
I.P. Rubinstein
- Abstract summary: In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
We show how today's "optimal" certificates can be improved by exploiting both the transitivity of certifications, and the geometry of the input space.
When combined with training-time processes that enhance the certified radius, our technique shows even more promising results, with a uniform $4$ percentage point increase in the achieved certified radius.
- Score: 27.04033198073254
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In response to subtle adversarial examples flipping classifications of neural
network models, recent research has promoted certified robustness as a
solution. There, invariance of predictions to all norm-bounded attacks is
achieved through randomised smoothing of network inputs. Today's
state-of-the-art certifications make optimal use of the class output scores at
the input instance under test: no better radius of certification (under the
$L_2$ norm) is possible given only these scores. However, it is an open question
as to whether such lower bounds can be improved using local information around
the instance under test. In this work, we demonstrate how today's "optimal"
certificates can be improved by exploiting both the transitivity of
certifications, and the geometry of the input space, giving rise to what we
term Geometrically-Informed Certified Robustness. By considering the smallest
distance to points on the boundary of a set of certifications, this approach
improves certifications for more than $80\%$ of Tiny-Imagenet instances,
yielding an average $5\%$ increase in the associated certification. When
incorporating training time processes that enhance the certified radius, our
technique shows even more promising results, with a uniform $4$ percentage
point increase in the achieved certified radius.
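The certificates discussed above are built on randomised smoothing. As a point of reference, the following is a minimal sketch, assuming a hypothetical `base_classifier` and illustrative values of `sigma`, `n` and `alpha`, of the standard score-only certificate together with a toy, one-directional illustration of the transitivity idea; it is not the paper's Geometrically-Informed Certified Robustness construction.

```python
# Minimal sketch, NOT the authors' implementation: the standard Cohen et al.
# (2019) randomized-smoothing certificate that the abstract calls "optimal"
# given only the class scores, plus a toy one-directional illustration of
# chaining (transitivity). `base_classifier`, `sigma`, `n` and `alpha` are
# illustrative assumptions; `x` is a flattened input vector.
import numpy as np
from scipy.stats import beta, norm


def smoothed_certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
    """Monte-Carlo prediction of the smoothed classifier and its L2 certified radius.

    Uses the single-probability bound R = sigma * Phi^{-1}(p_A_lower), with a
    one-sided Clopper-Pearson lower confidence bound on the top-class probability.
    """
    noisy = x[None, :] + sigma * np.random.randn(n, x.size)
    preds = np.array([base_classifier(z) for z in noisy])
    classes, counts = np.unique(preds, return_counts=True)
    top = np.argmax(counts)
    c_hat, k = classes[top], counts[top]
    p_lower = beta.ppf(alpha, k, n - k + 1)  # lower confidence bound on p_A
    if p_lower <= 0.5:
        return c_hat, 0.0  # abstain: no certification possible
    return c_hat, sigma * norm.ppf(p_lower)


def chained_directional_guarantee(base_classifier, x, u, sigma=0.25):
    """Toy transitivity example along a single unit direction u.

    Certify x, step to just inside the boundary of its certified ball along u,
    certify again, and chain the two radii. The chained value is only a valid
    guarantee along the ray x + t*u; the paper's geometrically-informed
    construction turns sets of overlapping certifications into an improved
    radius in all directions, which this sketch does not attempt.
    """
    c0, r0 = smoothed_certify(base_classifier, x, sigma)
    if r0 == 0.0:
        return c0, 0.0
    x1 = x + 0.99 * r0 * u
    c1, r1 = smoothed_certify(base_classifier, x1, sigma)
    if c1 == c0 and r1 > 0.0:
        return c0, 0.99 * r0 + r1  # prediction is constant this far along u
    return c0, r0
```

The abstract's "smallest distance to points on the boundary of a set of certifications" corresponds to the largest ball around the test point that fits inside the union of overlapping certified regions; the sketch above only chains two certifications along a single ray.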
Related papers
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning is proving that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- Incremental Randomized Smoothing Certification [5.971462597321995]
We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples.
We experimentally demonstrate the effectiveness of our approach, showing up to a 3x certification speedup over applying randomized smoothing to the approximated model from scratch.
arXiv Detail & Related papers (2023-05-31T03:11:15Z)
- Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis [74.85187027051879]
We show that it is possible to approximate the optimal certificate with arbitrary precision, by probing the decision boundary with several noise distributions.
This result fosters further research on classifier-specific certification and demonstrates that randomized smoothing is still worth investigating.
arXiv Detail & Related papers (2022-06-03T17:48:54Z)
- Smooth-Reduce: Leveraging Patches for Improved Certified Robustness [100.28947222215463]
We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image and aggregates the predicted logits to certify a larger radius around the input; a minimal sketch of this aggregation step appears after this list.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
arXiv Detail & Related papers (2022-05-12T15:26:20Z)
- Input-Specific Robustness Certification for Randomized Smoothing [76.76115360719837]
We propose Input-Specific Sampling (ISS) acceleration to achieve the cost-effectiveness for robustness certification.
ISS can speed up certification by more than three times at a limited cost of 0.05 in certified radius.
arXiv Detail & Related papers (2021-12-21T12:16:03Z)
- ANCER: Anisotropic Certification via Sample-wise Volume Maximization [134.7866967491167]
We introduce ANCER, a framework for obtaining anisotropic certificates for a given test set sample via volume maximization.
Results demonstrate that ANCER achieves improved certified accuracy on both CIFAR-10 and ImageNet at multiple radii, while certifying substantially larger regions in terms of volume.
arXiv Detail & Related papers (2021-07-09T17:42:38Z)
- Certified Distributional Robustness on Smoothed Classifiers [27.006844966157317]
We propose the worst-case adversarial loss over input distributions as a robustness certificate.
By exploiting duality and the smoothness property, we provide an easy-to-compute upper bound as a surrogate for the certificate.
arXiv Detail & Related papers (2020-10-21T13:22:25Z)
- Second-Order Provable Defenses against Adversarial Attacks [63.34032156196848]
We show that if the eigenvalues of the Hessian of the network are bounded, we can compute a certificate in the $l_2$ norm efficiently using convex optimization.
Certified accuracies are reported for 2-, 3- and 4-layer networks, with comparisons against IBP-based methods.
arXiv Detail & Related papers (2020-06-01T05:55:18Z)
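Of the related approaches above, Smooth-Reduce describes the most self-contained mechanism. Below is a minimal sketch of its patch-and-aggregate step, referenced in its entry above; the patch size, stride and `patch_classifier` callable are illustrative assumptions, and the certified radius itself would still come from a randomised-smoothing certifier fed with the aggregated predictions. This is not the authors' implementation.

```python
# Minimal sketch of patch-level classification with logit aggregation, as
# summarised in the Smooth-Reduce entry above. Patch size, stride, and the
# `patch_classifier` callable are illustrative assumptions.
import numpy as np


def extract_patches(image, patch_size=32, stride=16):
    """Yield overlapping square patches from an H x W x C image."""
    h, w = image.shape[:2]
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            yield image[top:top + patch_size, left:left + patch_size]


def aggregated_logits(patch_classifier, image, patch_size=32, stride=16):
    """Average per-patch logits into a single prediction for the full image."""
    logits = [patch_classifier(p) for p in extract_patches(image, patch_size, stride)]
    return np.mean(np.stack(logits, axis=0), axis=0)
```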