ANCER: Anisotropic Certification via Sample-wise Volume Maximization
- URL: http://arxiv.org/abs/2107.04570v1
- Date: Fri, 9 Jul 2021 17:42:38 GMT
- Title: ANCER: Anisotropic Certification via Sample-wise Volume Maximization
- Authors: Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr,
Puneet K. Dokania, Bernard Ghanem, Adel Bibi
- Abstract summary: We introduce ANCER, a framework for obtaining anisotropic certificates for a given test set sample via volume maximization.
Results demonstrate that ANCER achieves state-of-the-art certified accuracy on both CIFAR-10 and ImageNet at multiple radii, while certifying substantially larger regions in terms of volume.
- Score: 134.7866967491167
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized smoothing has recently emerged as an effective tool that enables
certification of deep neural network classifiers at scale. All prior art on
randomized smoothing has focused on isotropic $\ell_p$ certification, which has
the advantage of yielding certificates that can be easily compared among
isotropic methods via $\ell_p$-norm radius. However, isotropic certification
limits the region that can be certified around an input to worst-case
adversaries, i.e., it cannot reason about other "close", potentially large,
constant prediction safe regions. To alleviate this issue, (i) we theoretically
extend the isotropic randomized smoothing $\ell_1$ and $\ell_2$ certificates to
their generalized anisotropic counterparts following a simplified analysis.
Moreover, (ii) we propose evaluation metrics allowing for the comparison of
general certificates - a certificate is superior to another if it certifies a
superset region - with the quantification of each certificate through the
volume of the certified region. We introduce ANCER, a practical framework for
obtaining anisotropic certificates for a given test set sample via volume
maximization. Our empirical results demonstrate that ANCER achieves
state-of-the-art $\ell_1$ and $\ell_2$ certified accuracy on both CIFAR-10 and
ImageNet at multiple radii, while certifying substantially larger regions in
terms of volume, thus highlighting the benefits of moving away from isotropic
analysis. Code used in our experiments is available at
https://github.com/MotasemAlfarra/ANCER.
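The abstract's contrast between isotropic and anisotropic certificates can be made concrete with a small numerical sketch. This is an illustrative, hypothetical implementation (function names are ours, not from the ANCER codebase): it assumes the standard Gaussian-smoothing ℓ2 radius R = σ·Φ⁻¹(p_A), and that a diagonal anisotropic certificate is an axis-aligned ellipsoid with semi-axes σ_i·Φ⁻¹(p_A), compared by log-volume (the dimension-dependent unit-ball constant cancels when comparing regions of the same dimension).

```python
import math
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf  # standard normal inverse CDF, Phi^{-1}

def isotropic_l2_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius R = sigma * Phi^{-1}(p_a) of a Gaussian-smoothed
    classifier, where p_a lower-bounds the top-class probability under noise."""
    return sigma * PHI_INV(p_a)

def log_ellipsoid_volume(p_a: float, sigmas: list[float]) -> float:
    """Log-volume (up to the unit-ball constant, which cancels when comparing
    same-dimension certificates) of an axis-aligned certified ellipsoid with
    semi-axes sigma_i * Phi^{-1}(p_a). Hypothetical proxy for the paper's
    volume-based comparison of certificates."""
    r = PHI_INV(p_a)
    return len(sigmas) * math.log(r) + sum(math.log(s) for s in sigmas)

# An anisotropic certificate can cover more volume than an isotropic one
# with the same worst-case scale: widen sigma_i where the classifier is
# flatter, shrink it where the decision boundary is close.
p_a = 0.9
iso = [0.5, 0.5, 0.5, 0.5]
aniso = [0.8, 0.7, 0.4, 0.3]
print(isotropic_l2_radius(p_a, 0.5))      # isotropic certified l2 radius
print(log_ellipsoid_volume(p_a, iso))     # log-volume, isotropic ball
print(log_ellipsoid_volume(p_a, aniso))   # log-volume, anisotropic ellipsoid
```

Since the product of the anisotropic σ_i (0.0672) exceeds 0.5⁴ (0.0625), the ellipsoid's log-volume is larger here even though two axes are shorter than the isotropic radius, which is the kind of trade-off the abstract's volume maximization exploits.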
Related papers
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning proves that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- Projected Randomized Smoothing for Certified Adversarial Robustness [9.771011198361865]
Randomized smoothing is the current state-of-the-art method for producing provably robust classifiers.
Recent research has generalized provable robustness to different norm balls as well as anisotropic regions.
We show that our method improves on the state-of-the-art by many orders of magnitude.
arXiv Detail & Related papers (2023-09-25T01:12:55Z)
- Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity [27.04033198073254]
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
We show how today's "optimal" certificates can be improved by exploiting both the transitivity of certifications, and the geometry of the input space.
Our technique shows even more promising results, with a uniform $4$ percentage point increase in the achieved certified radius.
arXiv Detail & Related papers (2022-10-12T10:42:21Z)
- Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis [74.85187027051879]
We show that it is possible to approximate the optimal certificate with arbitrary precision, by probing the decision boundary with several noise distributions.
This result fosters further research on classifier-specific certification and demonstrates that randomized smoothing is still worth investigating.
arXiv Detail & Related papers (2022-06-03T17:48:54Z)
- Smooth-Reduce: Leveraging Patches for Improved Certified Robustness [100.28947222215463]
We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
arXiv Detail & Related papers (2022-05-12T15:26:20Z)
- Tight Second-Order Certificates for Randomized Smoothing [106.06908242424481]
We show that there also exists a universal curvature-like bound for Gaussian random smoothing.
In addition to proving the correctness of this novel certificate, we show that SoS certificates are realizable and therefore tight.
arXiv Detail & Related papers (2020-10-20T18:03:45Z)
- Higher-Order Certification for Randomized Smoothing [78.00394805536317]
We propose a framework to improve the certified safety region for smoothed classifiers.
We provide a method to calculate the certified safety region using $0^{th}$-order and $1^{st}$-order information.
We also provide a framework that generalizes the calculation for certification using higher-order information.
arXiv Detail & Related papers (2020-10-13T19:35:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.