Certified Adversarial Robustness via Anisotropic Randomized Smoothing
- URL: http://arxiv.org/abs/2207.05327v1
- Date: Tue, 12 Jul 2022 05:50:07 GMT
- Title: Certified Adversarial Robustness via Anisotropic Randomized Smoothing
- Authors: Hanbin Hong, and Yuan Hong
- Abstract summary: We propose the first anisotropic randomized smoothing method which ensures provable robustness guarantee based on pixel-wise noise distributions.
Also, we design a novel CNN-based noise generator to efficiently fine-tune the pixel-wise noise distributions for all the pixels in each input.
- Score: 10.0631242687419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing has achieved great success for certified robustness
against adversarial perturbations. Given any arbitrary classifier, randomized
smoothing can guarantee the classifier's prediction over the perturbed input
with provable robustness bound by injecting noise into the classifier. However,
all of the existing methods rely on a fixed i.i.d. probability distribution to
generate noise for all dimensions of the data (e.g., all the pixels in an
image), which ignores the heterogeneity of inputs and data dimensions. Thus,
existing randomized smoothing methods cannot provide optimal protection for all
the inputs. To address this limitation, we propose the first anisotropic
randomized smoothing method which ensures provable robustness guarantee based
on pixel-wise noise distributions. Also, we design a novel CNN-based noise
generator to efficiently fine-tune the pixel-wise noise distributions for all
the pixels in each input. Experimental results demonstrate that our method
significantly outperforms the state-of-the-art randomized smoothing methods.
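The smoothing procedure the abstract describes can be sketched in a few lines: classify many noisy copies of the input and return the majority vote. Below is a minimal, hedged illustration (not the authors' implementation); the function and parameter names are hypothetical, and `sigma` may be a scalar (standard isotropic smoothing) or a per-pixel array as in the anisotropic setting, though the actual certification rule for the anisotropic case is derived in the paper.

```python
import numpy as np

def smoothed_predict(classifier, x, sigma, n_samples=100, rng=None):
    """Majority-vote prediction of a smoothed classifier (sketch).

    `classifier` maps a batch of inputs to integer class labels.
    `sigma` is either a scalar noise scale (isotropic smoothing) or an
    array broadcastable to `x`'s shape, giving one noise scale per
    pixel (the anisotropic setting proposed in the paper).
    """
    rng = np.random.default_rng(rng)
    # Draw n_samples Gaussian noise tensors; per-pixel sigma scales each
    # coordinate independently, so the noise need not be identically
    # distributed across pixels.
    noise = rng.standard_normal((n_samples,) + x.shape) * sigma
    votes = classifier(x[None, ...] + noise)  # shape: (n_samples,)
    counts = np.bincount(votes)
    return int(np.argmax(counts))
```

In the isotropic Gaussian case, Cohen et al.'s analysis certifies a radius proportional to the noise scale times the normal quantile of the top-class probability; the anisotropic method generalizes the noise distribution per pixel, which is what the CNN-based noise generator tunes.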
Related papers
- Variational Randomized Smoothing for Sample-Wise Adversarial Robustness [12.455543308060196]
This paper proposes a new variational framework that uses a per-sample noise level suitable for each input by introducing a noise level selector.
Our experimental results demonstrate enhancement of empirical robustness against adversarial attacks.
arXiv Detail & Related papers (2024-07-16T15:25:13Z) - Multi-scale Diffusion Denoised Smoothing [79.95360025953931]
Randomized smoothing has become one of the few tangible approaches that offer adversarial robustness to models at scale.
We present scalable methods to address the current trade-off between certified robustness and accuracy in denoised smoothing.
Our experiments show that the proposed multi-scale smoothing scheme combined with diffusion fine-tuning enables strong certified robustness available with high noise level.
arXiv Detail & Related papers (2023-10-25T17:11:21Z) - Hierarchical Randomized Smoothing [61.593806731814794]
Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs.
We introduce hierarchical randomized smoothing: We partially smooth objects by adding random noise only on a randomly selected subset of their entities.
We experimentally demonstrate the importance of hierarchical smoothing in image and node classification, where it yields superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-10-24T22:24:44Z) - Understanding Noise-Augmented Training for Randomized Smoothing [14.061680807550722]
Randomized smoothing is a technique for providing provable robustness guarantees against adversarial attacks.
We show that, without making stronger distributional assumptions, no benefit can be expected from predictors trained with noise-augmentation.
Our analysis has direct implications to the practical deployment of randomized smoothing.
arXiv Detail & Related papers (2023-05-08T14:46:34Z) - Confidence-aware Training of Smoothed Classifiers for Certified
Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
arXiv Detail & Related papers (2022-12-18T03:57:12Z) - SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z) - Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN)
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z) - Extensions and limitations of randomized smoothing for robustness
guarantees [13.37805637358556]
We study how the choice of divergence between smoothing measures affects the final robustness guarantee.
We develop a method to certify robustness against any $\ell_p$ ($p \in \mathbb{N}_{>0}$) minimized adversarial perturbation.
arXiv Detail & Related papers (2020-06-07T17:22:32Z) - Black-Box Certification with Randomized Smoothing: A Functional
Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.