Extensions and limitations of randomized smoothing for robustness
guarantees
- URL: http://arxiv.org/abs/2006.04208v1
- Date: Sun, 7 Jun 2020 17:22:32 GMT
- Title: Extensions and limitations of randomized smoothing for robustness
guarantees
- Authors: Jamie Hayes
- Abstract summary: We study how the choice of divergence between smoothing measures affects the final robustness guarantee.
We develop a method to certify robustness against any $\ell_p$ ($p\in\mathbb{N}_{>0}$) minimized adversarial perturbation.
- Score: 13.37805637358556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing, a method to certify that a classifier's decision on an
input is invariant under adversarial noise, offers attractive advantages over other
certification methods. It operates in a black-box manner, so certification is not
constrained by the size of the classifier's architecture. Here, we extend the
work of Li et al. \cite{li2018second}, studying how the choice of divergence
between smoothing measures affects the final robustness guarantee, and how the
choice of smoothing measure itself can lead to guarantees in differing threat
models. To this end, we develop a method to certify robustness against any
$\ell_p$ ($p\in\mathbb{N}_{>0}$) minimized adversarial perturbation. We then
demonstrate a negative result, that randomized smoothing suffers from the curse
of dimensionality; as $p$ increases, the effective radius around an input one
can certify vanishes.
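The standard Gaussian instantiation of randomized smoothing underlying the abstract can be sketched as follows: the smoothed classifier predicts the class most probable under Gaussian noise, and the certified $\ell_2$ radius is $\sigma\,\Phi^{-1}(p_A)$, where $p_A$ is the top-class probability. This is a minimal illustration, not the paper's method; the toy base classifier is a hypothetical stand-in.

```python
import numpy as np
from statistics import NormalDist

def base_classifier(x):
    # Hypothetical toy base classifier: class 1 if the coordinate sum is positive.
    return int(x.sum() > 0)

def certify(x, sigma=0.5, n=10_000, seed=0):
    """Monte Carlo sketch of Gaussian randomized smoothing: predict the
    majority class under noise and return the l2 radius sigma * Phi^{-1}(p_A).
    A real certificate would use a lower confidence bound on p_A rather than
    the point estimate used here."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n, x.shape[0]))
    preds = np.array([base_classifier(x + d) for d in noise])
    counts = np.bincount(preds, minlength=2)
    top = int(counts.argmax())
    p_a = min(counts[top] / n, 1.0 - 1.0 / n)  # clamp away from 1 for inv_cdf
    radius = sigma * NormalDist().inv_cdf(p_a) if p_a > 0.5 else 0.0
    return top, radius

label, radius = certify(np.array([1.0, 2.0]))
```

The curse-of-dimensionality result in the abstract says that for $\ell_p$ threat models with large $p$, the radius this style of procedure can certify shrinks toward zero.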
Related papers
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z) - Confidence-aware Training of Smoothed Classifiers for Certified
Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
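The proxy named above can be illustrated with a short sketch (an assumption of ours, not the paper's code; the toy classifier is hypothetical): "accuracy under Gaussian noise" simply measures how often the base classifier keeps the true label on noisy copies of an input.

```python
import numpy as np

def noise_accuracy(classifier, x, label, sigma=0.25, n=1_000, seed=0):
    """Fraction of Gaussian-noised copies of x that keep the true label --
    an easy-to-compute proxy for the input's adversarial robustness."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([classifier(x + d) for d in noise])
    return float((preds == label).mean())

# Hypothetical toy classifier: class 1 if the first coordinate is positive.
clf = lambda v: int(v[0] > 0)
acc = noise_accuracy(clf, np.array([1.0, 0.0]), label=1)
```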
arXiv Detail & Related papers (2022-12-18T03:57:12Z) - Certified Adversarial Robustness via Anisotropic Randomized Smoothing [10.0631242687419]
We propose the first anisotropic randomized smoothing method which ensures provable robustness guarantee based on pixel-wise noise distributions.
Also, we design a novel CNN-based noise generator to efficiently fine-tune the pixel-wise noise distributions for all the pixels in each input.
arXiv Detail & Related papers (2022-07-12T05:50:07Z) - Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [19.380453459873298]
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations.
We show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors.
We show that the attack can be carried out against linear classifiers that have exact certifiable guarantees and against neural networks that have conservative certifications.
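The rounding issue is easy to reproduce in miniature (a minimal illustration, not the paper's attack): the floating-point margin of a linear classifier can differ from the exact rational value of the same dot product, so a certificate derived from the float value can misstate the true distance to the decision boundary.

```python
from fractions import Fraction

# Linear classifier margin w . x + b, computed two ways.
w, x, b = [0.1, 0.2], [1.0, 1.0], -0.3

# Floating-point evaluation: each addition rounds to the nearest double.
float_margin = w[0] * x[0] + w[1] * x[1] + b

# Exact rational evaluation of the very same binary floats -- no rounding.
exact_margin = (Fraction(w[0]) * Fraction(x[0])
                + Fraction(w[1]) * Fraction(x[1])
                + Fraction(b))

# The two margins disagree, so a bound computed in floating point may not
# hold for the exact classifier it claims to certify.
```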
arXiv Detail & Related papers (2022-05-20T13:07:36Z) - SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z) - Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z) - Deterministic Certification to Adversarial Attacks via Bernstein
Polynomial Approximation [5.392822954974537]
Randomized smoothing has established state-of-the-art provable robustness against $\ell_2$ norm adversarial attacks with high probability.
We come up with a question, "Is it possible to construct a smoothed classifier without randomization while maintaining natural accuracy?"
Our method provides a deterministic algorithm for decision boundary smoothing.
We also introduce a distinctive approach of norm-independent certified robustness via numerical solutions of nonlinear systems of equations.
arXiv Detail & Related papers (2020-11-28T08:27:42Z) - Tight Second-Order Certificates for Randomized Smoothing [106.06908242424481]
We show that there also exists a universal curvature-like bound for Gaussian random smoothing.
In addition to proving the correctness of this novel certificate, we show that SoS certificates are realizable and therefore tight.
arXiv Detail & Related papers (2020-10-20T18:03:45Z) - Black-Box Certification with Randomized Smoothing: A Functional
Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.