Robust Models are less Over-Confident
- URL: http://arxiv.org/abs/2210.05938v1
- Date: Wed, 12 Oct 2022 06:14:55 GMT
- Title: Robust Models are less Over-Confident
- Authors: Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper
- Abstract summary: Adversarial training (AT) aims to achieve robustness against such attacks.
We empirically analyze a variety of adversarially trained models that achieve high robust accuracies.
AT has an interesting side-effect: it leads to models that are significantly less overconfident with their decisions.
- Score: 10.42820615166362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of convolutional neural networks (CNNs) in many academic
benchmarks for computer vision tasks, their real-world application still
faces fundamental challenges. One of these open problems is the inherent
lack of robustness, unveiled by the striking effectiveness of adversarial
attacks. Current attack methods are able to manipulate the network's prediction
by adding specific but small amounts of noise to the input. In turn,
adversarial training (AT) aims to achieve robustness against such attacks and
ideally a better model generalization ability by including adversarial samples
in the training set. However, an in-depth analysis of the resulting robust
models beyond adversarial robustness is still pending. In this paper, we
empirically analyze a variety of adversarially trained models that achieve high
robust accuracies when facing state-of-the-art attacks and we show that AT has
an interesting side-effect: it leads to models that are significantly less
overconfident in their decisions than non-robust models, even on clean data.
Further, our analysis of robust models shows that not only AT but also the
model's building blocks (like activation functions and pooling) have a strong
influence on the models' prediction confidences. Data & Project website:
https://github.com/GeJulia/robustness_confidences_evaluation
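The abstract compares how overconfident robust and non-robust models are in their predictions. A common proxy for a model's prediction confidence is its maximum softmax probability; the sketch below (an assumed metric, not necessarily the exact one used in the paper) computes the mean max-softmax confidence over a batch of logits, illustrating how a sharply peaked model scores higher than a flatter one:

```python
import math

def mean_max_confidence(logit_batch):
    """Mean maximum softmax probability over a batch of logit vectors.

    Max-softmax is a common proxy for prediction confidence; it is
    used here only to illustrate the over-confidence comparison
    described in the abstract.
    """
    confidences = []
    for logits in logit_batch:
        m = max(logits)  # shift logits for numerical stability
        exps = [math.exp(z - m) for z in logits]
        confidences.append(max(exps) / sum(exps))
    return sum(confidences) / len(confidences)

# Toy logits: a sharply peaked (over-confident) model vs. a flatter one.
sharp = [[8.0, 0.0, 0.0], [0.0, 9.0, 1.0]]
soft = [[1.0, 0.5, 0.0], [0.2, 1.1, 0.9]]
print(mean_max_confidence(sharp) > mean_max_confidence(soft))  # True
```

A less overconfident model, as the paper attributes to adversarially trained networks, would show a lower mean max-softmax confidence at comparable accuracy.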