SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness
- URL: http://arxiv.org/abs/2111.09277v1
- Date: Wed, 17 Nov 2021 18:20:59 GMT
- Title: SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness
- Authors: Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Doguk Kim,
Jinwoo Shin
- Abstract summary: We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental summary: the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
- Score: 61.212486108346695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing is currently a state-of-the-art method to construct a
certifiably robust classifier from neural networks against $\ell_2$-adversarial
perturbations. Under the paradigm, the robustness of a classifier is aligned
with the prediction confidence, i.e., the higher confidence from a smoothed
classifier implies the better robustness. This motivates us to rethink the
fundamental trade-off between accuracy and robustness in terms of calibrating
confidences of a smoothed classifier. In this paper, we propose a simple
training scheme, coined SmoothMix, to control the robustness of smoothed
classifiers via self-mixup: it trains on convex combinations of samples along
the direction of adversarial perturbation for each input. The proposed
procedure effectively identifies over-confident, near off-class samples as a
cause of limited robustness in case of smoothed classifiers, and offers an
intuitive way to adaptively set a new decision boundary between these samples
for better robustness. Our experimental results demonstrate that the proposed
method can significantly improve the certified $\ell_2$-robustness of smoothed
classifiers compared to existing state-of-the-art robust training methods.
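The self-mixup step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it assumes an adversarial point `x_adv` for the smoothed classifier has already been found (SmoothMix searches for it under Gaussian noise), and the mixed label is interpolated toward the uniform distribution to discourage over-confidence on near off-class points; `smoothmix_pair` and `lam` are hypothetical names.

```python
def smoothmix_pair(x, x_adv, y_onehot, num_classes, lam):
    """Form one self-mixup training pair along the adversarial direction.

    x, x_adv: feature vectors (lists of floats); x_adv is an adversarial
    point found for the smoothed classifier starting from x.
    lam in [0, 1] controls how far the mixed sample moves toward x_adv.
    """
    # Convex combination of the clean input and its adversarial counterpart.
    x_mix = [(1 - lam) * a + lam * b for a, b in zip(x, x_adv)]
    # Mixed label: interpolate from the one-hot label toward uniform,
    # so confidence is calibrated down near the decision boundary.
    uniform = [1.0 / num_classes] * num_classes
    y_mix = [(1 - lam) * yo + lam * u for yo, u in zip(y_onehot, uniform)]
    return x_mix, y_mix
```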
Related papers
- Mixing Classifiers to Alleviate the Accuracy-Robustness Trade-Off [8.169499497403102]
We propose a theoretically motivated formulation that mixes the output probabilities of a standard neural network and a robust neural network.
Our numerical experiments verify that the mixed classifier noticeably improves the accuracy-robustness trade-off.
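Mixing the output probabilities of a standard and a robust classifier, as this entry describes, amounts to a convex combination; a minimal sketch in which `mix_probabilities` and `alpha` are illustrative names, not the paper's notation:

```python
def mix_probabilities(p_std, p_rob, alpha):
    """Convex mix of the output probabilities of a standard (accurate)
    network and a robust network; alpha trades clean accuracy for
    robustness (alpha=0 -> standard only, alpha=1 -> robust only)."""
    assert len(p_std) == len(p_rob) and 0.0 <= alpha <= 1.0
    return [(1 - alpha) * ps + alpha * pr for ps, pr in zip(p_std, p_rob)]
```

Because both inputs are probability vectors, the mix is again a valid probability vector for any `alpha` in [0, 1].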
arXiv Detail & Related papers (2023-11-26T02:25:30Z)
- Multi-scale Diffusion Denoised Smoothing [79.95360025953931]
Randomized smoothing has become one of the few tangible approaches that offer adversarial robustness to models at scale.
We present scalable methods to address the current trade-off between certified robustness and accuracy in denoised smoothing.
Our experiments show that the proposed multi-scale smoothing scheme, combined with diffusion fine-tuning, enables strong certified robustness even at high noise levels.
arXiv Detail & Related papers (2023-10-25T17:11:21Z)
- Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches [28.87505826018613]
We propose two cost-effective approaches to boost robustness of randomized smoothing while preserving its clean performance.
The first approach introduces a new robust training method AdvMacer which combines adversarial training and certification for randomized smoothing.
The second approach introduces a post-processing method EsbRS which greatly improves the robustness certificate based on building model ensembles.
arXiv Detail & Related papers (2023-10-11T18:06:05Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
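The certified radius mentioned here is typically computed Cohen-et-al-style from the smoothed classifier's top-class probability under Gaussian noise. A stdlib-only sketch under the common simplification $p_B = 1 - p_A$, giving $R = \sigma \cdot \Phi^{-1}(p_A)$; `certified_radius` is a hypothetical helper, not this paper's procedure:

```python
from statistics import NormalDist

def certified_radius(p_a, sigma):
    """Certified l2 radius of a smoothed classifier.

    p_a: lower bound on the probability of the top class under
         Gaussian noise of standard deviation sigma.
    Returns 0 when the top class is not a strict majority, since no
    certificate can be issued in that case.
    """
    if p_a <= 0.5:
        return 0.0
    # Phi^{-1} is the standard normal quantile function.
    return sigma * NormalDist().inv_cdf(p_a)
```

Note how the radius grows with confidence, which is exactly the confidence-robustness alignment the SmoothMix abstract builds on.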
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
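The "accuracy under Gaussian noise" proxy can be estimated by plain Monte-Carlo sampling. A stdlib-only sketch; the `classify` callback and helper name are illustrative, not the paper's implementation:

```python
import random

def accuracy_under_noise(classify, x, label, sigma, n=1000, seed=0):
    """Monte-Carlo proxy for robustness: the fraction of Gaussian-noised
    copies of x that the base classifier still assigns to `label`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        hits += (classify(noisy) == label)
    return hits / n
```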
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
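Regularizing prediction consistency over noise can be sketched as an averaged KL term between the mean prediction over noisy copies and each individual prediction. This is an assumption-laden illustration; the paper's exact loss may differ:

```python
import math

def consistency_loss(prob_list):
    """Average KL(mean || p_i) over the predictive distributions of
    several Gaussian-noised copies of one input; zero iff all copies
    predict identically, so minimizing it enforces consistency."""
    k = len(prob_list[0])
    mean = [sum(p[j] for p in prob_list) / len(prob_list) for j in range(k)]
    eps = 1e-12  # guard against log(0) for zero-probability classes
    total = 0.0
    for p in prob_list:
        total += sum(mean[j] * math.log((mean[j] + eps) / (p[j] + eps))
                     for j in range(k))
    return total / len(prob_list)
```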
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
- Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness [15.38718018477333]
We derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart.
We also design a new certification algorithm, which can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability.
arXiv Detail & Related papers (2020-02-17T20:54:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.