Certified Adversarial Robustness Within Multiple Perturbation Bounds
- URL: http://arxiv.org/abs/2304.10446v1
- Date: Thu, 20 Apr 2023 16:42:44 GMT
- Title: Certified Adversarial Robustness Within Multiple Perturbation Bounds
- Authors: Soumalya Nandi, Sravanti Addepalli, Harsh Rangwani and R. Venkatesh Babu
- Abstract summary: Randomized smoothing (RS) is a well-known certified defense against adversarial attacks.
In this work, we aim to improve the certified adversarial robustness against multiple perturbation bounds simultaneously.
- Score: 38.3813286696956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing (RS) is a well-known certified defense against
adversarial attacks, which creates a smoothed classifier by predicting the most
likely class under random noise perturbations of inputs during inference. While
initial work focused on robustness to $\ell_2$ norm perturbations using noise
sampled from a Gaussian distribution, subsequent works have shown that
different noise distributions can result in robustness to other $\ell_p$ norm
bounds as well. In general, a specific noise distribution is optimal for
defending against a given $\ell_p$ norm-based attack. In this work, we aim to
improve the certified adversarial robustness against multiple perturbation
bounds simultaneously. To this end, we first present a novel
\textit{certification scheme} that effectively combines the certificates
obtained using different noise distributions to obtain optimal results against
multiple perturbation bounds. We further propose a novel \textit{training noise
distribution} along with a \textit{regularized training scheme} to improve the
certification within both $\ell_1$ and $\ell_2$ perturbation norms
simultaneously. In contrast to prior works, we compare the certified robustness of
different training algorithms across the same natural (clean) accuracy, rather
than across fixed noise levels used for training and certification. We also
empirically invalidate the argument that training and certifying the classifier
with the same amount of noise gives the best results. The proposed approach
achieves improvements on the ACR (Average Certified Radius) metric across both
$\ell_1$ and $\ell_2$ perturbation bounds.
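For concreteness, below is a minimal Python sketch of the quantities the abstract refers to: the standard Gaussian-smoothing $\ell_2$ certificate of Cohen et al. (2019), $R = \sigma \, \Phi^{-1}(p_A)$; a naive per-norm combination of certificates obtained under different noise distributions (a simplified reading of the paper's certification scheme, not its exact construction); and the ACR metric. The helper names are illustrative, not from the paper.

```python
from statistics import NormalDist

def l2_certified_radius(p_a: float, sigma: float) -> float:
    """Cohen et al. (2019) certificate for a Gaussian-smoothed classifier:
    R = sigma * Phi^{-1}(p_a), where p_a lower-bounds the probability of
    the top class under N(0, sigma^2 I) input noise."""
    if p_a <= 0.5:
        return 0.0  # no certificate when the top class is not a majority
    p_a = min(p_a, 1.0 - 1e-12)  # keep inv_cdf in its valid open interval
    return sigma * NormalDist().inv_cdf(p_a)

def combine_certificates(certs_by_noise: dict) -> dict:
    """Hypothetical combination rule: for each l_p norm, keep the largest
    radius certified under any noise distribution."""
    combined = {}
    for certs in certs_by_noise.values():
        for norm_name, radius in certs.items():
            combined[norm_name] = max(combined.get(norm_name, 0.0), radius)
    return combined

def average_certified_radius(radii, correct) -> float:
    """ACR: mean certified radius over a test set, with misclassified
    points contributing radius 0 (the usual convention)."""
    return sum(r if ok else 0.0 for r, ok in zip(radii, correct)) / len(radii)
```

For example, combine_certificates({"gaussian": {"l2": 0.8, "l1": 0.8}, "laplace": {"l1": 1.2}}) keeps the larger radius per norm and returns {"l2": 0.8, "l1": 1.2}.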
Related papers
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness over state-of-the-art training methods.
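The "accuracy under Gaussian noise" proxy above is cheap to estimate by Monte Carlo sampling; a minimal sketch, assuming a PyTorch classifier that maps a batch of inputs to logits (the helper is hypothetical, not from the paper):

```python
import torch

@torch.no_grad()
def noise_accuracy(model, x, y, sigma: float, n: int = 64) -> float:
    """Fraction of n Gaussian-noise copies of a single input x that the
    model classifies as label y: a proxy for adversarial robustness."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
    return (model(noisy).argmax(dim=1) == y).float().mean().item()
```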
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Adversarially Robust Classifier with Covariate Shift Adaptation [25.39995678746662]
Existing adversarially trained models typically perform inference on test examples independently of one another.
We show that a simple adaptive batch normalization (BN) technique can significantly improve the robustness of these models to random perturbations.
We further demonstrate that the adaptive BN technique significantly improves robustness against common corruptions, while often enhancing performance against adversarial attacks.
arXiv Detail & Related papers (2021-02-09T19:51:56Z)
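Adaptive BN here means re-estimating BatchNorm statistics from the (possibly shifted) test data rather than the training set. A minimal PyTorch sketch of one common variant, assuming standard nn.BatchNorm layers (the paper's exact recipe may differ):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn(model: nn.Module, test_batch: torch.Tensor) -> None:
    """Replace BatchNorm running statistics with statistics of an
    unlabeled test batch; all other layers stay in eval mode."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()  # drop training-set statistics
            m.momentum = None        # cumulative average over new batches
            m.train()                # BN updates its stats in train mode
    model(test_batch)                # one forward pass collects new stats
    model.eval()
```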
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
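The consistency idea can be expressed as a small auxiliary loss; a hedged sketch in the spirit of Jeong & Shin (2020), not their exact objective (which, as published, also includes an entropy term on the mean prediction):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, sigma: float, m: int = 2) -> torch.Tensor:
    """Penalize disagreement among predictions on m Gaussian-noise draws
    of the same inputs."""
    probs = [F.softmax(model(x + sigma * torch.randn_like(x)), dim=1)
             for _ in range(m)]
    mean_p = torch.stack(probs).mean(dim=0)
    # F.kl_div(log_q, p) computes KL(p || q); average KL(mean_p || p_i).
    return sum(F.kl_div(p.clamp_min(1e-12).log(), mean_p,
                        reduction="batchmean") for p in probs) / m
```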
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)