Black-Box Certification with Randomized Smoothing: A Functional
Optimization Based Framework
- URL: http://arxiv.org/abs/2002.09169v2
- Date: Tue, 20 Oct 2020 08:27:06 GMT
- Title: Black-Box Certification with Randomized Smoothing: A Functional
Optimization Based Framework
- Authors: Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
- Abstract summary: We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
- Score: 60.981406394238434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized classifiers have been shown to provide a promising approach for
achieving certified robustness against adversarial attacks in deep learning.
However, most existing methods only leverage Gaussian smoothing noise and only
work for $\ell_2$ perturbation. We propose a general framework of adversarial
certification with non-Gaussian noise and for more general types of attacks,
from a unified functional optimization perspective. Our new framework allows us
to identify a key trade-off between accuracy and robustness via designing
smoothing distributions, helping to design new families of non-Gaussian
smoothing distributions that work more efficiently for different $\ell_p$
settings, including $\ell_1$, $\ell_2$ and $\ell_\infty$ attacks. Our proposed
methods achieve better certification results than previous works and provide a
new perspective on randomized smoothing certification.
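The standard Gaussian baseline that this framework generalizes can be sketched as follows. This is a minimal illustration of the usual $\ell_2$ certificate (certified radius $R = \sigma \Phi^{-1}(\underline{p_A})$ from a lower confidence bound on the top-class probability); the normal-approximation confidence bound and the function name are simplifying assumptions, not the paper's exact procedure:

```python
import math
from statistics import NormalDist
from collections import Counter

def certified_radius(counts, sigma, alpha=0.001):
    """Certified L2 radius for a Gaussian-smoothed classifier
    (standard baseline sketch, not this paper's generalized method).

    counts: Counter of class predictions under Gaussian input noise
    sigma:  standard deviation of the smoothing noise
    alpha:  failure probability of the lower confidence bound
    """
    top_class, n_top = counts.most_common(1)[0]
    n = sum(counts.values())
    p_hat = n_top / n
    # Normal-approximation lower confidence bound on p_A
    # (a simplified stand-in for the exact Clopper-Pearson bound).
    z = NormalDist().inv_cdf(1 - alpha)
    p_lower = p_hat - z * math.sqrt(p_hat * (1 - p_hat) / n)
    if p_lower <= 0.5:
        return top_class, 0.0  # abstain: no certificate possible
    # R = sigma * Phi^{-1}(p_lower): larger noise or a more confident
    # top class yields a larger certified radius.
    return top_class, sigma * NormalDist().inv_cdf(p_lower)
```

For example, 990 of 1000 noisy samples voting for one class at $\sigma = 0.5$ yields a positive certified radius, while a near-tie vote forces an abstention.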
Related papers
- Promoting Robustness of Randomized Smoothing: Two Cost-Effective
Approaches [28.87505826018613]
We propose two cost-effective approaches to boost robustness of randomized smoothing while preserving its clean performance.
The first approach introduces a new robust training method AdvMacer which combines adversarial training and certification for randomized smoothing.
The second approach introduces a post-processing method EsbRS which greatly improves the robustness certificate based on building model ensembles.
arXiv Detail & Related papers (2023-10-11T18:06:05Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
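The margin-based idea behind such Lipschitz certificates can be illustrated with a minimal sketch. It uses the classic Lipschitz-margin bound (radius $= \text{margin}/(\sqrt{2}L)$ when each logit is $L$-Lipschitz in $\ell_2$) as an assumed stand-in, not this paper's exact procedure:

```python
import math

def lipschitz_certified_radius(logits, lipschitz_const):
    """Certified L2 radius from the prediction margin of a classifier
    whose logits are each lipschitz_const-Lipschitz (illustrative
    Lipschitz-margin bound, assumed here for the sketch).
    """
    top, runner_up = sorted(logits, reverse=True)[:2]
    margin = top - runner_up
    # The top class cannot be overtaken within this L2 radius.
    return margin / (math.sqrt(2.0) * lipschitz_const)
```

A tighter Lipschitz bound or a larger margin directly enlarges the certified radius, which is the trade-off the tradeoff-style analyses above study.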
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Certified Adversarial Robustness Within Multiple Perturbation Bounds [38.3813286696956]
Randomized smoothing (RS) is a well known certified defense against adversarial attacks.
In this work, we aim to improve the certified adversarial robustness against multiple perturbation bounds simultaneously.
arXiv Detail & Related papers (2023-04-20T16:42:44Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Higher-Order Certification for Randomized Smoothing [78.00394805536317]
We propose a framework to improve the certified safety region for smoothed classifiers.
We provide a method to calculate the certified safety region using $0^{th}$-order and $1^{st}$-order information.
We also provide a framework that generalizes the calculation for certification using higher-order information.
arXiv Detail & Related papers (2020-10-13T19:35:48Z)
- Extensions and limitations of randomized smoothing for robustness guarantees [13.37805637358556]
We study how the choice of divergence between smoothing measures affects the final robustness guarantee.
We develop a method to certify robustness against any $\ell_p$ ($p \in \mathbb{N}_{>0}$) minimized adversarial perturbation.
arXiv Detail & Related papers (2020-06-07T17:22:32Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
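The consistency-regularization idea above can be sketched as a simple penalty over noisy copies of one input: predictions that disagree across noise draws incur a larger loss. This is an illustrative stand-in (average KL divergence from the mean prediction), not necessarily the paper's exact formulation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def consistency_loss(noisy_logits):
    """Average KL(mean prediction || prediction on each noisy copy).

    noisy_logits: list of logit vectors, one per noisy copy of an input.
    Returns 0 when all copies agree; grows as predictions diverge.
    """
    probs = [softmax(l) for l in noisy_logits]
    k = len(probs[0])
    mean = [sum(p[i] for p in probs) / len(probs) for i in range(k)]
    total = 0.0
    for p in probs:
        total += sum(
            mean[i] * math.log(mean[i] / max(p[i], 1e-12))
            for i in range(k) if mean[i] > 0.0
        )
    return total / len(probs)
```

Adding such a term to the training objective pushes the base classifier toward stable predictions under noise, which is what tightens the smoothed classifier's certificate.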
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.