Adaptive Temperature Scaling for Robust Calibration of Deep Neural
Networks
- URL: http://arxiv.org/abs/2208.00461v1
- Date: Sun, 31 Jul 2022 16:20:06 GMT
- Title: Adaptive Temperature Scaling for Robust Calibration of Deep Neural
Networks
- Authors: Sergio A. Balanya, Juan Maroñas and Daniel Ramos
- Abstract summary: We focus on the task of confidence scaling, specifically on post-hoc methods that generalize Temperature Scaling.
We show that when there is plenty of data, complex models like neural networks yield better performance, but are prone to fail when the amount of data is limited.
We propose Entropy-based Temperature Scaling, a simple method that scales the confidence of a prediction according to its entropy.
- Score: 0.7219077740523682
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we study the post-hoc calibration of modern neural networks, a
problem that has drawn a lot of attention in recent years. Many calibration
methods of varying complexity have been proposed for the task, but there is no
consensus about how expressive these should be. We focus on the task of
confidence scaling, specifically on post-hoc methods that generalize
Temperature Scaling; we call these the Adaptive Temperature Scaling family. We
analyse expressive functions that improve calibration and propose interpretable
methods. We show that when there is plenty of data complex models like neural
networks yield better performance, but are prone to fail when the amount of
data is limited, a common situation in certain post-hoc calibration
applications like medical diagnosis. We study the functions that expressive
methods learn under ideal conditions and design simpler methods but with a
strong inductive bias towards these well-performing functions. Concretely, we
propose Entropy-based Temperature Scaling, a simple method that scales the
confidence of a prediction according to its entropy. Results show that our
method obtains state-of-the-art performance when compared to others and, unlike
complex models, it is robust against data scarcity. Moreover, our proposed
model enables a deeper interpretation of the calibration process.
Related papers
- Improving Predictor Reliability with Selective Recalibration [15.319277333431318]
Recalibration is one of the most effective ways to produce reliable confidence estimates with a pre-trained model.
We propose selective recalibration, where a selection model learns to reject some user-chosen proportion of the data.
Our results show that selective recalibration consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines.
arXiv Detail & Related papers (2024-10-07T18:17:31Z)
- On the Limitations of Temperature Scaling for Distributions with Overlaps [8.486166869140929]
We show that for empirical risk minimizers for a general set of distributions, the performance of temperature scaling degrades with the amount of overlap between classes.
We prove that optimizing a modified form of the empirical risk induced by the Mixup data augmentation technique can in fact lead to reasonably good calibration performance.
arXiv Detail & Related papers (2023-06-01T14:35:28Z)
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- Sample-dependent Adaptive Temperature Scaling for Improved Calibration [95.7477042886242]
A common post-hoc approach to compensating for miscalibrated neural networks is to perform temperature scaling.
We propose to predict a different temperature value for each input, allowing us to adjust the mismatch between confidence and accuracy.
We test our method on the ResNet50 and WideResNet28-10 architectures using the CIFAR10/100 and Tiny-ImageNet datasets.
arXiv Detail & Related papers (2022-07-13T14:13:49Z)
- Deep learning: a statistical viewpoint [120.94133818355645]
Deep learning has revealed some major surprises from a theoretical perspective.
In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems.
We conjecture that specific principles underlie these phenomena.
arXiv Detail & Related papers (2021-03-16T16:26:36Z)
- Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parametrized Temperature Scaling (PTS)
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
arXiv Detail & Related papers (2021-02-24T10:18:30Z)
- Uncertainty Quantification and Deep Ensembles [79.4957965474334]
We show that deep ensembles do not necessarily lead to improved calibration properties.
We show that standard ensembling methods, when used in conjunction with modern techniques such as mixup regularization, can lead to less calibrated models.
This text examines the interplay between three of the simplest and most commonly used approaches to leverage deep learning when data is scarce.
arXiv Detail & Related papers (2020-07-17T07:32:24Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
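The focal loss referenced in the last entry down-weights already well-classified examples, which the paper above links to better calibration. A minimal sketch of the standard multi-class form, FL(p_t) = -(1 - p_t)^γ · log(p_t), where p_t is the probability assigned to the true class (γ = 0 recovers ordinary cross-entropy):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def focal_loss(logits, labels, gamma=2.0):
    """Mean multi-class focal loss, FL = -(1 - p_t)^gamma * log(p_t).
    The (1 - p_t)^gamma factor shrinks the loss on confident, correct
    predictions; gamma = 0 gives plain cross-entropy."""
    p = softmax(logits)
    pt = p[np.arange(len(labels)), labels]  # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt + 1e-12)))
```

Since (1 - p_t)^γ ≤ 1 for γ ≥ 0, the focal loss never exceeds the cross-entropy on the same predictions, and the gap grows with model confidence.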
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.