Promoting Robustness of Randomized Smoothing: Two Cost-Effective
Approaches
- URL: http://arxiv.org/abs/2310.07780v1
- Date: Wed, 11 Oct 2023 18:06:05 GMT
- Title: Promoting Robustness of Randomized Smoothing: Two Cost-Effective
Approaches
- Authors: Linbo Liu, Trong Nghia Hoang, Lam M. Nguyen, Tsui-Wei Weng
- Abstract summary: We propose two cost-effective approaches to boost robustness of randomized smoothing while preserving its clean performance.
The first approach introduces a new robust training method AdvMacer which combines adversarial training and robustness certification maximization for randomized smoothing.
The second approach introduces a post-processing method EsbRS which greatly improves the robustness certificate by building model ensembles.
- Score: 28.87505826018613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract: Randomized smoothing has recently attracted attention in the field of
adversarial robustness to provide provable robustness guarantees on smoothed
neural network classifiers. However, existing works show that vanilla
randomized smoothing usually does not provide good robustness performance and
often requires (re)training techniques on the base classifier in order to boost
the robustness of the resulting smoothed classifier. In this work, we propose
two cost-effective approaches to boost the robustness of randomized smoothing
while preserving its clean performance. The first approach introduces a new
robust training method AdvMacer which combines adversarial training and
robustness certification maximization for randomized smoothing. We show that
AdvMacer can improve the robustness performance of randomized smoothing
classifiers compared to SOTA baselines, while being 3x faster to train than
the MACER baseline. The second approach introduces a post-processing method EsbRS
which greatly improves the robustness certificate by building model
ensembles. We explore different aspects of model ensembles that have not been
studied by prior works and propose a novel design methodology to further
improve the robustness of the ensemble based on our theoretical analysis.
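
For readers unfamiliar with the terminology, the sketch below illustrates how a smoothed classifier's prediction and certified L2 radius are typically obtained (in the spirit of Cohen et al., 2019), and how an ensemble of base classifiers could plug into that procedure. The ensemble rule (averaging softmax outputs), function names, and sampling budgets are illustrative assumptions; this is not the paper's AdvMacer or EsbRS algorithm.

```python
# Minimal, illustrative sketch of randomized smoothing certification in the
# spirit of Cohen et al. (2019). It is NOT the AdvMacer training objective or
# the exact EsbRS certificate; the ensemble rule below (averaging softmax
# outputs of `base_models`) is an assumption made only for illustration.
import torch
from scipy.stats import norm as std_normal
from statsmodels.stats.proportion import proportion_confint


def noisy_vote_counts(base_models, x, sigma, n, num_classes, batch=100):
    """Count how often each class wins under Gaussian input noise."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    remaining = n
    with torch.no_grad():
        while remaining > 0:
            b = min(batch, remaining)
            noisy = x.unsqueeze(0) + sigma * torch.randn(b, *x.shape)
            # Illustrative ensemble rule: average softmax outputs of all members.
            probs = torch.stack([m(noisy).softmax(dim=-1) for m in base_models]).mean(dim=0)
            counts += torch.bincount(probs.argmax(dim=-1), minlength=num_classes)
            remaining -= b
    return counts


def certify(base_models, x, sigma, num_classes, n0=100, n=10_000, alpha=0.001):
    """Return (predicted class, certified L2 radius), or (None, 0.0) to abstain."""
    # Guess the top class from a small sample, then lower-bound its probability
    # with a larger, independent sample.
    guess = noisy_vote_counts(base_models, x, sigma, n0, num_classes).argmax().item()
    counts = noisy_vote_counts(base_models, x, sigma, n, num_classes)
    p_lower, _ = proportion_confint(counts[guess].item(), n, alpha=2 * alpha, method="beta")
    if p_lower <= 0.5:
        return None, 0.0                           # certificate not statistically supported
    return guess, sigma * std_normal.ppf(p_lower)  # certified L2 radius
```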
Related papers
- Confidence-aware Training of Smoothed Classifiers for Certified
Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain robust models against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
- Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness [15.38718018477333]
We derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart.
We also design a new certification algorithm, which can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability.
arXiv Detail & Related papers (2020-02-17T20:54:34Z)
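
As referenced in the Confidence-aware Training entry above, the "accuracy under Gaussian noise" proxy amounts to checking how often a model keeps its label under noisy copies of an input. The sketch below is illustrative only; the function name, sample budget, and the assumption that the model maps a batch of inputs to logits are not taken from that paper.

```python
# Illustrative sketch of an "accuracy under Gaussian noise" proxy.
# The function name and sampling budget are assumptions, not the authors' code.
import torch


def accuracy_under_gaussian_noise(model, x, label, sigma, n=64):
    """Fraction of Gaussian-perturbed copies of x that the model still classifies as `label`."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
        preds = model(noisy).argmax(dim=-1)
    return (preds == label).float().mean().item()
```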