SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness
- URL: http://arxiv.org/abs/2111.09277v1
- Date: Wed, 17 Nov 2021 18:20:59 GMT
- Title: SmoothMix: Training Confidence-calibrated Smoothed Classifiers for
Certified Robustness
- Authors: Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Doguk Kim,
Jinwoo Shin
- Abstract summary: We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
- Score: 61.212486108346695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing is currently a state-of-the-art method to construct a
certifiably robust classifier from neural networks against $\ell_2$-adversarial
perturbations. Under this paradigm, the robustness of a classifier is aligned
with its prediction confidence, i.e., higher confidence from a smoothed
classifier implies better robustness. This motivates us to rethink the
fundamental trade-off between accuracy and robustness in terms of calibrating
confidences of a smoothed classifier. In this paper, we propose a simple
training scheme, coined SmoothMix, to control the robustness of smoothed
classifiers via self-mixup: it trains on convex combinations of samples along
the direction of adversarial perturbation for each input. The proposed
procedure effectively identifies over-confident, near off-class samples as a
cause of limited robustness in the case of smoothed classifiers, and offers an
intuitive way to adaptively set a new decision boundary between these samples
for better robustness. Our experimental results demonstrate that the proposed
method can significantly improve the certified $\ell_2$-robustness of smoothed
classifiers compared to existing state-of-the-art robust training methods.
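For context on why the abstract ties confidence to robustness: in randomized smoothing with Gaussian noise $\mathcal{N}(0, \sigma^2 I)$, the standard certificate of Cohen et al. (2019) gives a certified $\ell_2$ radius of
$$ R = \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right), $$
where $p_A$ is (a lower bound on) the probability that the base classifier returns the top class under noise, $p_B$ is (an upper bound on) the runner-up probability, and $\Phi^{-1}$ is the inverse standard Gaussian CDF. Higher smoothed confidence $p_A$ therefore directly enlarges the certified radius, which is the sense in which calibrating confidences controls robustness.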
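The sketch below illustrates the self-mixup idea described in the abstract: find an adversarial direction for a Monte Carlo estimate of the smoothed prediction, then train on a convex combination of the clean input and the adversarial endpoint. It is a minimal illustration only; the attack schedule, the softened target for the mixed point, and hyperparameter names (`sigma`, `n_noise`, `attack_steps`, `attack_lr`, `mix_lambda`) are assumptions, not the authors' exact procedure.

```python
# Illustrative SmoothMix-style training step (PyTorch); not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def smoothmix_style_step(model, x, y, sigma=0.25, n_noise=4,
                         attack_steps=4, attack_lr=0.5, mix_lambda=0.5):
    """One sketch of a training step: attack the (Monte Carlo) smoothed
    prediction, then train on a convex combination of the clean input and
    the adversarial endpoint."""
    # Monte Carlo estimate of the smoothed logits under Gaussian noise.
    def smoothed_logits(inp):
        noisy = inp.unsqueeze(1) + sigma * torch.randn(
            inp.size(0), n_noise, *inp.shape[1:], device=inp.device)
        logits = model(noisy.flatten(0, 1))
        return logits.view(inp.size(0), n_noise, -1).mean(dim=1)

    # PGD-style search for a perturbation that lowers smoothed confidence.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(attack_steps):
        loss = F.cross_entropy(smoothed_logits(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta += attack_lr * grad / norm.view(-1, *([1] * (x.dim() - 1)))
    x_adv = (x + delta).detach()

    # Self-mixup: a convex combination along the adversarial direction.
    x_mix = (1 - mix_lambda) * x + mix_lambda * x_adv

    # Train on clean and mixed points; the mixed point's target is softened
    # toward the uniform distribution (one possible labeling choice).
    logits_clean = smoothed_logits(x)
    logits_mix = smoothed_logits(x_mix)
    num_classes = logits_clean.size(-1)
    soft_target = ((1 - mix_lambda) * F.one_hot(y, num_classes).float()
                   + mix_lambda / num_classes)
    return (F.cross_entropy(logits_clean, y)
            + F.kl_div(F.log_softmax(logits_mix, dim=-1), soft_target,
                       reduction='batchmean'))
```

The returned loss would be backpropagated through the base classifier as usual; the key design point from the abstract is that the mixed sample sits between the clean input and a near off-class, over-confident point, so penalizing confidence there adaptively moves the decision boundary.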