Double Sampling Randomized Smoothing
- URL: http://arxiv.org/abs/2206.07912v1
- Date: Thu, 16 Jun 2022 04:34:28 GMT
- Title: Double Sampling Randomized Smoothing
- Authors: Linyi Li and Jiawei Zhang and Tao Xie and Bo Li
- Abstract summary: We propose a Double Sampling Randomized Smoothing framework.
It exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier.
We show that DSRS certifies larger robust radii than existing baselines consistently under different settings.
- Score: 19.85592163703077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks (NNs) are known to be vulnerable against adversarial
perturbations, and thus there is a line of work aiming to provide robustness
certification for NNs, such as randomized smoothing, which samples smoothing
noises from a certain distribution to certify the robustness for a smoothed
classifier. However, as previous work shows, the certified robust radius in
randomized smoothing suffers from scaling to large datasets ("curse of
dimensionality"). To overcome this hurdle, we propose a Double Sampling
Randomized Smoothing (DSRS) framework, which exploits the sampled probability
from an additional smoothing distribution to tighten the robustness
certification of the previous smoothed classifier. Theoretically, under mild
assumptions, we prove that DSRS can certify $\Theta(\sqrt d)$ robust radius
under $\ell_2$ norm where $d$ is the input dimension, which implies that DSRS
may be able to break the curse of dimensionality of randomized smoothing. We
instantiate DSRS for a generalized family of Gaussian smoothing and propose an
efficient and sound computing method based on customized dual optimization
considering sampling error. Extensive experiments on MNIST, CIFAR-10, and
ImageNet verify our theory and show that DSRS certifies larger robust radii
than existing baselines consistently under different settings. Code is
available at https://github.com/llylly/DSRS.
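For background, the sketch below illustrates the standard single-distribution Gaussian randomized smoothing certification (in the style of Cohen et al., 2019) that DSRS tightens by exploiting a second smoothing distribution. It is a minimal illustration under stated assumptions, not the authors' released DSRS code: `base_classifier`, the sample count, and the confidence level `alpha` are placeholders.

```python
# Minimal sketch of single-distribution Gaussian randomized smoothing certification
# (Cohen et al., 2019 style), for context only; not the DSRS implementation.
import numpy as np
from scipy.stats import beta, norm


def clopper_pearson_lower(k: int, n: int, alpha: float) -> float:
    """One-sided (1 - alpha) lower confidence bound on a binomial proportion k/n."""
    if k == 0:
        return 0.0
    return float(beta.ppf(alpha, k, n - k + 1))


def certify_l2_radius(base_classifier, x, sigma: float,
                      n_samples: int = 100_000, alpha: float = 0.001):
    """Certify an l2 radius for the Gaussian-smoothed version of base_classifier at x.

    base_classifier maps a batch of inputs to integer class labels.
    Returns (predicted_class, certified_radius); a radius of 0.0 means abstain.
    """
    # Monte Carlo estimate of the smoothed classifier's class probabilities.
    noisy = x[None] + sigma * np.random.randn(n_samples, *x.shape)
    votes = np.bincount(base_classifier(noisy))
    top_class = int(votes.argmax())

    # High-confidence lower bound on p_A, the top-class probability under noise.
    p_lower = clopper_pearson_lower(int(votes[top_class]), n_samples, alpha)
    if p_lower <= 0.5:
        return top_class, 0.0  # cannot certify at this confidence level
    # Certified l2 radius for Gaussian smoothing: R = sigma * Phi^{-1}(p_lower).
    return top_class, float(sigma * norm.ppf(p_lower))
```

With a one-sided Clopper-Pearson bound at level `alpha`, the returned radius holds with probability at least `1 - alpha`. Per the abstract above, DSRS additionally uses the sampled probability under a second smoothing distribution and a customized dual optimization (accounting for sampling error) to tighten this certification.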
Related papers
- Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing [21.618349628349115]
We study the effect of two families of distributions, named Exponential Standard Gaussian (ESG) and Exponential General Gaussian (EGG) distributions, on Randomized Smoothing (RS) and Double Sampling Randomized Smoothing (DSRS).
Our experiments on real-world datasets confirm our theoretical analysis of the ESG distributions: they provide almost the same certification under different exponents $\eta$ for both RS and DSRS.
Compared to the original DSRS, the increase in certified accuracy provided by EGG is prominent, up to 6.4% on ImageNet.
arXiv Detail & Related papers (2024-06-04T13:41:00Z)
- Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency [6.199300239433395]
This work demonstrates that reducing the number of samples by one or two orders of magnitude can still enable the computation of a slightly smaller robustness radius.
We provide the mathematical foundation for explaining the phenomenon while experimentally showing promising results on the standard CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2024-04-26T12:43:19Z)
- Mitigating the Curse of Dimensionality for Certified Robustness via Dual Randomized Smoothing [48.219725131912355]
This paper explores the feasibility of providing $\ell_2$ certified robustness for high-dimensional inputs by using dual smoothing.
The proposed Dual Randomized Smoothing (DRS) down-samples the input image into two sub-images and smooths the two sub-images in lower dimensions.
Extensive experiments demonstrate the generalizability and effectiveness of DRS, which exhibits a notable capability to integrate with established methodologies.
arXiv Detail & Related papers (2024-04-15T08:54:33Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and Accuracy [9.50143683501477]
Insta-RS is a multiple-start search algorithm that assigns customized Gaussian variances to test examples.
Insta-RS Train is a novel two-stage training algorithm that adaptively adjusts and customizes the noise level of each training example.
We show that our method significantly enhances the average certified radius (ACR) as well as the clean data accuracy.
arXiv Detail & Related papers (2021-03-07T19:46:07Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise (a minimal sketch of such a consistency term appears after this list).
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
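As referenced in the "Consistency Regularization for Certified Robustness of Smoothed Classifiers" entry above, the sketch below shows one plausible form of a consistency term over Gaussian noise (the KL divergence between each noisy copy's prediction and the copies' average prediction). It is an illustrative assumption, not necessarily the exact loss of that paper; `model`, `sigma`, `m`, and `lam` are placeholder names.

```python
# Illustrative consistency regularization term over Gaussian noise; a sketch under
# stated assumptions, not the cited paper's exact training objective.
import torch
import torch.nn.functional as F


def consistency_loss(model, x, sigma: float = 0.25, m: int = 2, lam: float = 10.0):
    """Penalize disagreement of predictions across Gaussian-perturbed copies of x."""
    b = x.shape[0]
    # Draw m noisy copies of the batch: shape (m * b, ...).
    noisy = x.repeat(m, *([1] * (x.dim() - 1)))
    noisy = noisy + sigma * torch.randn_like(noisy)
    log_probs = F.log_softmax(model(noisy), dim=1).view(m, b, -1)
    avg_probs = log_probs.exp().mean(dim=0)  # average prediction over the m copies
    # KL(avg || p_i), averaged over copies: small when all copies agree under noise.
    kl = sum(F.kl_div(log_probs[i], avg_probs, reduction="batchmean")
             for i in range(m)) / m
    return lam * kl
```

In training, such a term would typically be added to the cross-entropy loss computed on the noisy copies, trading off clean accuracy against certified robustness via `lam`.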
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.