Towards Evading the Limits of Randomized Smoothing: A Theoretical
Analysis
- URL: http://arxiv.org/abs/2206.01715v1
- Date: Fri, 3 Jun 2022 17:48:54 GMT
- Title: Towards Evading the Limits of Randomized Smoothing: A Theoretical
Analysis
- Authors: Raphael Ettedgui, Alexandre Araujo, Rafael Pinot, Yann Chevaleyre,
Jamal Atif
- Abstract summary: We show that it is possible to approximate the optimal certificate with arbitrary precision, by probing the decision boundary with several noise distributions.
This result fosters further research on classifier-specific certification and demonstrates that randomized smoothing is still worth investigating.
- Score: 74.85187027051879
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized smoothing is the dominant standard for provable defenses against
adversarial examples. Nevertheless, this method has recently been shown to
suffer from significant information-theoretic limitations. In this paper, we
argue that these limitations are not intrinsic, but merely a byproduct of
current certification methods. We first show that these certificates use too
little information about the classifier, and are in particular blind to the
local curvature of the decision boundary. This leads to severely sub-optimal
robustness guarantees as the dimension of the problem increases. We then show
that it is theoretically possible to bypass this issue by collecting more
information about the classifier. More precisely, we show that it is possible
to approximate the optimal certificate with arbitrary precision, by probing the
decision boundary with several noise distributions. Since this process is
executed at certification time rather than at test time, it entails no loss in
natural accuracy while enhancing the quality of the certificates. This result
fosters further research on classifier-specific certification and demonstrates
that randomized smoothing is still worth investigating. Although
classifier-specific certification may incur additional computational cost, we
also provide theoretical insight on how to mitigate it.
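To make the mechanism concrete, here is a minimal sketch, assuming a hypothetical `classifier` interface that returns integer labels for a batch of inputs: the standard Gaussian randomized-smoothing certificate (Cohen et al., 2019), probed under several noise scales. This is only a toy stand-in for the paper's multi-distribution construction, whose optimal-certificate approximation is considerably more involved.

```python
# A minimal sketch, not the authors' algorithm: the standard Gaussian
# randomized-smoothing certificate (Cohen et al., 2019), evaluated under
# several noise scales.
import numpy as np
from scipy.stats import beta, norm


def clopper_pearson_lower(successes, trials, alpha):
    """One-sided lower confidence bound on a Bernoulli parameter."""
    if successes == 0:
        return 0.0
    return float(beta.ppf(alpha, successes, trials - successes + 1))


def certified_radius(classifier, x, sigma, n=1000, alpha=0.001, seed=0):
    """Certified l2 radius of the sigma-smoothed classifier at x; returns
    0.0 when the smoothed classifier abstains (p_A <= 1/2). `classifier`
    maps a batch of inputs to integer labels (hypothetical API)."""
    rng = np.random.default_rng(seed)
    labels = classifier(x[None, ...] + rng.normal(scale=sigma,
                                                  size=(n,) + x.shape))
    top = np.bincount(labels).argmax()
    p_a = clopper_pearson_lower(int((labels == top).sum()), n, alpha)
    return sigma * float(norm.ppf(p_a)) if p_a > 0.5 else 0.0


def best_certificate(classifier, x, sigmas=(0.25, 0.5, 1.0)):
    """Probe the decision boundary under several noise distributions and
    keep the largest certified radius."""
    return max(certified_radius(classifier, x, s) for s in sigmas)
```

In this toy version, taking the maximum over independently certified scales ignores the multiple-testing correction a rigorous certificate would need; the point is only that different noise distributions reveal different information about the decision boundary.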
Related papers
- Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing [12.513566361816684]
Implicit models such as Deep Equilibrium Models (DEQs) have emerged as promising alternative approaches for building deep neural networks.
Existing certified defenses for DEQs, which employ deterministic certification methods, cannot certify on large-scale datasets.
To address this limitation, we provide the first randomized-smoothing certified defense for DEQs.
arXiv Detail & Related papers (2024-11-01T06:14:11Z)
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning means proving that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unstable predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
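For context, the kind of margin-based radius this line of work enhances can be sketched generically; the per-logit Lipschitz constant is an assumption of the sketch, and this is not the paper's procedure.

```python
# A generic Lipschitz-margin certificate in the spirit of Tsuzuku et al.
# (2018), shown only to illustrate the kind of radius enhanced above.
def lipschitz_margin_radius(logits, lipschitz_const: float) -> float:
    """Certified l2 radius, assuming each individual logit is
    `lipschitz_const`-Lipschitz in l2: the top-two margin can then shrink
    by at most 2 * lipschitz_const * ||delta||, giving the bound below."""
    top1, top2 = sorted(logits, reverse=True)[:2]
    return max(0.0, (top1 - top2) / (2.0 * lipschitz_const))
```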
- Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity [27.04033198073254]
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
We show how today's "optimal" certificates can be improved by exploiting both the transitivity of certifications, and the geometry of the input space.
Our technique shows even more promising results, with a uniform $4$ percentage point increase in the achieved certified radius.
arXiv Detail & Related papers (2022-10-12T10:42:21Z)
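A minimal sketch of the transitivity observation, assuming a hypothetical (center, radius) layout for previously certified points; this is not the paper's algorithm.

```python
# By the triangle inequality, if every point of B(c, r) shares y's label,
# then B(y, r - ||y - c||) lies inside B(c, r), so y inherits that
# certificate. Overlapping certified balls can therefore enlarge y's radius.
import numpy as np


def transitive_radius(y, own_radius, certified_balls):
    """`certified_balls`: iterable of (center, radius) pairs whose certified
    label matches y's prediction (hypothetical data layout)."""
    best = own_radius
    for center, radius in certified_balls:
        best = max(best, radius - float(np.linalg.norm(y - center)))
    return best
```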
- Smooth-Reduce: Leveraging Patches for Improved Certified Robustness [100.28947222215463]
We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
arXiv Detail & Related papers (2022-05-12T15:26:20Z)
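A toy sketch of the patch-and-aggregate step just described, with the certification on top of the aggregated logits omitted; the `model` API, patch size, and stride are illustrative assumptions.

```python
import numpy as np


def aggregate_patch_logits(model, image, patch=96, stride=32):
    """Average the logits that `model` predicts on overlapping crops of
    `image` (hypothetical interfaces; certification step omitted)."""
    h, w = image.shape[:2]
    return np.mean(
        [model(image[top:top + patch, left:left + patch])
         for top in range(0, h - patch + 1, stride)
         for left in range(0, w - patch + 1, stride)],
        axis=0,
    )
```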
- ANCER: Anisotropic Certification via Sample-wise Volume Maximization [134.7866967491167]
We introduce ANCER, a framework for obtaining anisotropic certificates for a given test-set sample via volume maximization.
Results demonstrate that ANCER improves certified accuracy on both CIFAR-10 and ImageNet at multiple radii, while certifying substantially larger regions in terms of volume.
arXiv Detail & Related papers (2021-07-09T17:42:38Z)
- Certifying Neural Network Robustness to Random Input Noise from Samples [14.191310794366075]
Methods to certify the robustness of neural networks in the presence of input uncertainty are vital in safety-critical settings.
We propose a novel robustness certification method that upper bounds the probability of misclassification when the input noise follows an arbitrary probability distribution.
arXiv Detail & Related papers (2020-10-15T05:27:21Z)
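The sample-based flavor of such a bound can be sketched with a one-sided Clopper-Pearson interval; `classifier` and `sample_noise` are hypothetical interfaces, and the paper's actual certificate is more refined.

```python
import numpy as np
from scipy.stats import beta


def misclassification_upper_bound(classifier, x, label, sample_noise,
                                  n=10000, alpha=0.01):
    """Upper bound, at confidence 1 - alpha, on the probability that
    classifier(x + noise) != label, where `sample_noise(n)` draws n noise
    vectors from an arbitrary input-noise distribution."""
    errors = int((classifier(x[None, ...] + sample_noise(n)) != label).sum())
    if errors == n:
        return 1.0
    # One-sided Clopper-Pearson upper bound on the Bernoulli error rate.
    return float(beta.ppf(1.0 - alpha, errors + 1, n - errors))
```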
- Extensions and limitations of randomized smoothing for robustness guarantees [13.37805637358556]
We study how the choice of divergence between smoothing measures affects the final robustness guarantee.
We develop a method to certify robustness against any $\ell_p$ ($p \in \mathbb{N}_{>0}$) minimized adversarial perturbation.
arXiv Detail & Related papers (2020-06-07T17:22:32Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)