On the Dark Side of Calibration for Modern Neural Networks
- URL: http://arxiv.org/abs/2106.09385v1
- Date: Thu, 17 Jun 2021 11:04:14 GMT
- Title: On the Dark Side of Calibration for Modern Neural Networks
- Authors: Aditya Singh, Alessandro Bay, Biswa Sengupta, Andrea Mirabile
- Abstract summary: We show the breakdown of expected calibration error (ECE) into predicted confidence and refinement.
We highlight that regularisation-based calibration focuses only on naively reducing a model's confidence.
We find that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement.
- Score: 65.83956184145477
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Modern neural networks are highly uncalibrated. This poses a
significant challenge for safety-critical systems to utilise deep neural
networks (DNNs) reliably. Many recently proposed approaches have demonstrated
substantial progress in improving DNN calibration. However, they hardly touch
upon refinement, which historically has been an essential aspect of
calibration. Refinement indicates the separability of a network's correct and
incorrect predictions. This paper presents a theoretically and empirically
supported exposition for reviewing a model's calibration and refinement.
Firstly, we show the breakdown of expected calibration error (ECE) into
predicted confidence and refinement. Building on this result, we highlight
that regularisation-based calibration focuses only on naively reducing a
model's confidence, which in turn severely degrades the model's refinement. We
support our claims through rigorous empirical evaluations of many
state-of-the-art calibration approaches on standard datasets. We find that
many calibration approaches, such as label smoothing and mixup, lower the
utility of a DNN by degrading its refinement. Even under natural data shift,
this calibration-refinement trade-off holds for the majority of calibration
methods. These findings call for an urgent retrospective into some popular
pathways taken for modern DNN calibration.
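The abstract's central contrast, ECE versus refinement, can be made concrete with a short sketch. The functions below assume confidences are maximum softmax probabilities and correctness is a 0/1 vector; the AUROC-style separability measure is one common proxy for refinement, not necessarily the paper's exact definition.

```python
import numpy as np

def ece(confidences, correct, n_bins=15):
    """Expected calibration error: weighted mean |accuracy - confidence| per bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            total += mask.sum() / n * abs(correct[mask].mean() - confidences[mask].mean())
    return total

def refinement_auroc(confidences, correct):
    """Refinement proxy: how well confidence separates correct from incorrect
    predictions, measured as the AUROC of confidence w.r.t. correctness."""
    pos = confidences[correct.astype(bool)]
    neg = confidences[~correct.astype(bool)]
    # Probability that a correct prediction receives higher confidence
    # than an incorrect one (ties count as 0.5).
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

A model can lower its ECE by uniformly shrinking all confidences while leaving the AUROC unchanged or worse, which is exactly the trade-off the paper highlights: confidence reduction alone does not improve, and can degrade, separability.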
Related papers
- Feature Clipping for Uncertainty Calibration [24.465567005078135]
Modern deep neural networks (DNNs) often suffer from overconfidence, leading to miscalibration.
We propose a novel post-hoc calibration method called feature clipping (FC) to address this issue.
FC involves clipping feature values to a specified threshold, effectively increasing entropy in high calibration error samples.
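As a sanity check on the mechanism this summary describes, here is a hypothetical standalone sketch: clipping penultimate-layer features bounds the logits they can produce, which softens the softmax and raises predictive entropy. The threshold value and the toy linear head are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def feature_clip(features, threshold):
    # Clip penultimate-layer features into [-threshold, threshold]
    # before the final linear classifier (the core of the FC idea).
    return np.clip(features, -threshold, threshold)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Toy example: one sample, a hypothetical identity linear head W.
feats = np.array([6.0, -4.0, 0.5])
W = np.eye(3)
before = entropy(softmax(feats @ W))
after = entropy(softmax(feature_clip(feats, 1.0) @ W))
# after > before: clipped features give a softer, higher-entropy prediction
```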
arXiv Detail & Related papers (2024-10-16T06:44:35Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- DOMINO: Domain-aware Model Calibration in Medical Image Segmentation [51.346121016559024]
Modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability.
We propose DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels.
Our results show that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation.
arXiv Detail & Related papers (2022-09-13T15:31:52Z)
- Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error [46.12703434199988]
We introduce a new differentiable surrogate for expected calibration error (DECE) that allows calibration quality to be directly optimised.
We also propose a meta-learning framework that uses DECE to optimise for validation set calibration.
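The key obstacle to optimising ECE directly is that hard binning is non-differentiable. A soft-binned surrogate illustrates the general idea; the bin centres, the squared-distance kernel, and its width below are illustrative assumptions, not the exact DECE formulation.

```python
import numpy as np

def soft_binned_ece(confidences, correct, n_bins=15, temperature=0.01):
    """Soft-binned ECE surrogate: hard bin membership is replaced by a
    differentiable soft assignment, so the metric can be optimised directly.
    As temperature -> 0 the soft assignment approaches hard binning."""
    centres = (np.arange(n_bins) + 0.5) / n_bins
    # Soft membership of every sample in every bin (rows sum to 1).
    logits = -((confidences[:, None] - centres[None, :]) ** 2) / temperature
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    bin_mass = w.sum(axis=0)                       # soft sample count per bin
    bin_conf = (w * confidences[:, None]).sum(axis=0) / np.maximum(bin_mass, 1e-12)
    bin_acc = (w * correct[:, None]).sum(axis=0) / np.maximum(bin_mass, 1e-12)
    return (bin_mass / len(confidences) * np.abs(bin_acc - bin_conf)).sum()
```

Every operation here is differentiable in the confidences, so the same quantity written in an autodiff framework could serve as a training or meta-learning objective.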
arXiv Detail & Related papers (2021-06-17T15:47:50Z)
- Revisiting the Calibration of Modern Neural Networks [44.26439222399464]
Many instances of miscalibration in modern neural networks have been reported, suggesting a trend that newer, more accurate models produce poorly calibrated predictions.
We systematically relate model calibration and accuracy, and find that the most recent models, notably those not using convolutions, are among the best calibrated.
We also show that model size and amount of pretraining do not fully explain these differences, suggesting that architecture is a major determinant of calibration properties.
arXiv Detail & Related papers (2021-06-15T09:24:43Z)
- Uncertainty Quantification and Deep Ensembles [79.4957965474334]
We show that deep-ensembles do not necessarily lead to improved calibration properties.
We show that standard ensembling methods, when used in conjunction with modern techniques such as mixup regularization, can lead to less calibrated models.
This text examines the interplay between three of the simplest and most commonly used approaches to leveraging deep learning when data is scarce.
arXiv Detail & Related papers (2020-07-17T07:32:24Z)
- Post-hoc Calibration of Neural Networks by g-Layers [51.42640515410253]
In recent years, there has been a surge of research on neural network calibration.
It is known that minimizing Negative Log-Likelihood (NLL) will lead to a calibrated network on the training set if the global optimum is attained.
We prove that even though the base network ($f$) does not lead to the global optimum of NLL, by adding additional layers ($g$) and minimizing NLL by optimizing the parameters of $g$ one can obtain a calibrated network.
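The simplest instance of such an added map $g$ is temperature scaling: a single scalar fitted by minimising NLL on held-out data while the base network $f$ stays frozen. The sketch below uses grid search instead of gradient-based optimisation for brevity; the grid range is an illustrative assumption.

```python
import numpy as np

def nll(logits, labels):
    # Negative log-likelihood of integer labels under softmax(logits).
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=None):
    # Fit one scalar T minimising NLL of logits / T on a validation set.
    # This is the one-parameter special case of learning a g on top of f.
    if grid is None:
        grid = np.linspace(0.5, 5.0, 91)
    return min(grid, key=lambda t: nll(logits / t, labels))
```

Because dividing logits by a positive scalar never changes the argmax, this particular $g$ leaves accuracy untouched while adjusting confidence.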
arXiv Detail & Related papers (2020-06-23T07:55:10Z)
- On Calibration of Mixup Training for Deep Neural Networks [1.6242924916178283]
We argue and provide empirical evidence that, due to its fundamentals, Mixup does not necessarily improve calibration.
Our loss is inspired by Bayes decision theory and introduces a new training framework for designing losses for probabilistic modelling.
We provide state-of-the-art accuracy with consistent improvements in calibration performance.
arXiv Detail & Related papers (2020-03-22T16:54:31Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
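The mechanism behind this result is visible in the loss itself: focal loss multiplies the cross-entropy term by a factor that shrinks as the predicted probability of the true class grows, penalising overconfidence on easy examples. A minimal sketch, assuming `probs` are softmax outputs and `labels` are integer class indices:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    # Multi-class focal loss: (1 - p_t)^gamma * (-log p_t), batch mean.
    # gamma = 0 recovers plain cross-entropy; gamma > 0 down-weights
    # confident, easy examples, which discourages overconfidence.
    p_t = probs[np.arange(len(labels)), labels]
    return float(((1.0 - p_t) ** gamma * -np.log(p_t + 1e-12)).mean())
```

The choice gamma = 2 is a common default; larger values dampen the gradient from well-classified examples more aggressively.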
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.