Improved, Deterministic Smoothing for L1 Certified Robustness
- URL: http://arxiv.org/abs/2103.10834v1
- Date: Wed, 17 Mar 2021 21:49:53 GMT
- Title: Improved, Deterministic Smoothing for L1 Certified Robustness
- Authors: Alexander Levine, Soheil Feizi
- Abstract summary: We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
- Score: 119.86676998327864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing is a general technique for computing sample-dependent
robustness guarantees against adversarial attacks for deep classifiers. Prior
works on randomized smoothing against L_1 adversarial attacks use additive
smoothing noise and provide probabilistic robustness guarantees. In this work,
we propose a non-additive and deterministic smoothing method, Deterministic
Smoothing with Splitting Noise (DSSN). To develop DSSN, we first develop SSN, a
randomized method which involves generating each noisy smoothing sample by
first randomly splitting the input space and then returning a representation of
the center of the subdivision occupied by the input sample. In contrast to
uniform additive smoothing, the SSN certification does not require the random
noise components used to be independent. Thus, smoothing can be done
effectively in just one dimension and can therefore be efficiently derandomized
for quantized data (e.g., images). To the best of our knowledge, this is the
first work to provide deterministic "randomized smoothing" for a norm-based
adversarial threat model while allowing for an arbitrary classifier (i.e., a
deep model) to be used as a base classifier and without requiring an
exponential number of smoothing samples. On CIFAR-10 and ImageNet datasets, we
provide substantially larger L_1 robustness certificates compared to prior
works, establishing a new state-of-the-art. The determinism of our method also
leads to significantly faster certificate computation.
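The splitting-noise mechanism and its derandomization can be made concrete with a short sketch. The following is a simplified, hedged illustration, not the paper's exact algorithm or certificate: `ssn_sample`, `dssn_predict`, `lam` (the interval width), and the offset grid are illustrative names and assumptions.

```python
import numpy as np

def ssn_sample(x, lam, offset):
    """One splitting-noise sample: partition each coordinate axis into
    intervals of width `lam`, shifted by `offset`, and replace each
    coordinate of x (assumed to lie in [0, 1]) with the center of the
    interval containing it. The shared offset is the only randomness."""
    k = np.floor((x - offset) / lam)   # index of the occupied interval
    return offset + (k + 0.5) * lam    # center of that interval

def dssn_predict(x, base_classifier, lam, num_offsets=256):
    """Derandomized smoothing sketch: because the offset is shared across
    coordinates, the noise is effectively one-dimensional, so we sweep the
    offset over a fixed grid instead of sampling and take a majority vote."""
    votes = {}
    for i in range(num_offsets):
        offset = (i + 0.5) / num_offsets * lam
        label = base_classifier(ssn_sample(x, lam, offset))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)   # deterministic smoothed prediction
```

For quantized inputs such as 8-bit images, only finitely many offsets yield distinct smoothing samples, which is why a fixed sweep can stand in for Monte Carlo sampling; the paper makes this enumeration exact.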
Related papers
- Multi-scale Diffusion Denoised Smoothing [79.95360025953931]
Randomized smoothing has become one of the few tangible approaches that offer adversarial robustness to models at scale.
We present scalable methods to address the current trade-off between certified robustness and accuracy in denoised smoothing.
Our experiments show that the proposed multi-scale smoothing scheme, combined with diffusion fine-tuning, enables strong certified robustness at high noise levels.
arXiv Detail & Related papers (2023-10-25T17:11:21Z)
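For context, denoised smoothing prepends a denoiser to an off-the-shelf classifier before the usual noisy majority vote. A minimal sketch, with `denoiser`, `classifier`, `sigma`, and the plain Gaussian/majority-vote setup as assumptions (the paper's multi-scale scheme and diffusion fine-tuning are not reproduced here):

```python
import numpy as np

def denoised_smoothing_predict(x, denoiser, classifier, sigma, n_samples, rng):
    # Denoised smoothing pipeline: perturb the input with Gaussian noise,
    # denoise it, classify the result, and majority-vote over samples.
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = classifier(denoiser(noisy))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```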
- Hierarchical Randomized Smoothing [61.593806731814794]
Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs.
We introduce hierarchical randomized smoothing: We partially smooth objects by adding random noise only on a randomly selected subset of their entities.
We experimentally demonstrate the importance of hierarchical smoothing in image and node classification, where it yields superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-10-24T22:24:44Z)
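A minimal sketch of the "noise only on a randomly selected subset of entities" idea above; the entity-by-feature layout, selection probability `p_select`, and Gaussian noise are illustrative assumptions rather than the paper's exact construction:

```python
import numpy as np

def hierarchical_smoothing_sample(x, p_select, sigma, rng):
    # x: (num_entities, num_features), e.g. nodes or pixels with features.
    # First randomly select which entities to smooth, then add Gaussian
    # noise only to the selected rows; unselected entities stay intact.
    mask = rng.random(x.shape[0]) < p_select
    noise = rng.normal(0.0, sigma, size=x.shape)
    return x + mask[:, None] * noise
```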
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Certified Adversarial Robustness via Anisotropic Randomized Smoothing [10.0631242687419]
We propose the first anisotropic randomized smoothing method which ensures provable robustness guarantee based on pixel-wise noise distributions.
Also, we design a novel CNN-based noise generator to efficiently fine-tune the pixel-wise noise distributions for all the pixels in each input.
arXiv Detail & Related papers (2022-07-12T05:50:07Z)
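To make "pixel-wise noise distributions" concrete: instead of one global noise scale, each pixel gets its own. A hedged sketch, where `sigma_map` would come from something like the paper's CNN-based noise generator (the generator itself is not reproduced here):

```python
import numpy as np

def anisotropic_sample(x, sigma_map, rng):
    # sigma_map has the same shape as x: every pixel is perturbed with its
    # own standard deviation, unlike isotropic smoothing's single sigma.
    return x + rng.normal(0.0, 1.0, size=x.shape) * sigma_map
```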
- Double Sampling Randomized Smoothing [19.85592163703077]
We propose a Double Sampling Randomized Smoothing (DSRS) framework.
It exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier.
We show that DSRS consistently certifies larger robust radii than existing baselines under different settings.
arXiv Detail & Related papers (2022-06-16T04:34:28Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
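The self-mixup idea can be sketched as interpolating between a clean sample and an adversarial counterpart while softening the label. Everything here is an assumption for illustration (in particular the mixing of the label toward a uniform distribution); the paper's actual training procedure is more involved:

```python
import numpy as np

def smoothmix_pair(x, x_adv, y_onehot, t):
    # Convex combination of a clean input and an adversarial counterpart,
    # with the one-hot label softened as the mix moves off-class.
    x_mix = (1.0 - t) * x + t * x_adv
    n_classes = y_onehot.shape[-1]
    y_mix = (1.0 - t) * y_onehot + t * np.full(n_classes, 1.0 / n_classes)
    return x_mix, y_mix
```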
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)