Calibrating Where It Matters: Constrained Temperature Scaling
- URL: http://arxiv.org/abs/2406.11456v1
- Date: Mon, 17 Jun 2024 12:14:31 GMT
- Title: Calibrating Where It Matters: Constrained Temperature Scaling
- Authors: Stephen McKenna, Jacob Carse
- Abstract summary: Clinical decision makers can use calibrated classifiers to minimise expected costs given their own cost function.
We demonstrate improved calibration where it matters using convnets trained to classify dermoscopy images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider calibration of convolutional classifiers for diagnostic decision making. Clinical decision makers can use calibrated classifiers to minimise expected costs given their own cost function. Such functions are usually unknown at training time. If minimising expected costs is the primary aim, algorithms should focus on tuning calibration in regions of the probability simplex likely to affect decisions. We give an example, modifying temperature scaling calibration, and demonstrate improved calibration where it matters using convnets trained to classify dermoscopy images.
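As background for the method the abstract modifies: standard temperature scaling fits a single scalar T > 0 by minimising negative log-likelihood on held-out logits. A constrained variant in the spirit of "calibrating where it matters" could weight that objective toward samples whose predicted probabilities fall near a clinical decision threshold. The sketch below is illustrative only; `decision_region_weights`, its threshold, and its window width are assumptions, not the paper's actual constraint.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, t, weights=None):
    """(Weighted) negative log-likelihood of temperature-scaled logits."""
    p = softmax(logits / t)
    ll = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    return ll.mean() if weights is None else (weights * ll).sum() / (weights.sum() + 1e-12)

def fit_temperature(logits, labels, weights=None):
    """Fit scalar temperature T by minimising (weighted) NLL on held-out data."""
    res = minimize_scalar(lambda t: nll(logits, labels, t, weights),
                          bounds=(0.05, 20.0), method="bounded")
    return res.x

def decision_region_weights(logits, threshold=0.5, width=0.2):
    """Hypothetical weighting: emphasise samples whose predicted probability
    of the positive class (e.g. malignancy) lies near a decision threshold."""
    p_pos = softmax(logits)[:, 1]   # assumes binary logits, column 1 = positive
    return (np.abs(p_pos - threshold) < width).astype(float)
```

Once calibrated, a decision maker with a cost matrix `C` (indexed `C[action, true_class]`) picks `np.argmin(C @ p)` for each calibrated probability vector `p`; this is the expected-cost-minimising action the abstract refers to.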
Related papers
- Towards Certification of Uncertainty Calibration under Adversarial Attacks [96.48317453951418]
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
We also propose novel calibration attacks and demonstrate how they can be used to improve model calibration through adversarial calibration training.
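For context, a minimal sketch of the kind of attack such certificates bound: a one-step, FGSM-style perturbation that inflates predicted confidence, degrading calibration even when predictions do not flip. The model and budget `eps` are placeholders; this is not the paper's attack.

```python
import torch
import torch.nn.functional as F

def confidence_attack(model, x, eps=0.03):
    """One-step sign-gradient perturbation that increases the model's
    top-class confidence, pushing it toward overconfidence."""
    x = x.clone().requires_grad_(True)
    conf = F.softmax(model(x), dim=1).max(dim=1).values
    conf.sum().backward()           # gradient of total confidence w.r.t. inputs
    return (x + eps * x.grad.sign()).detach()
```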
arXiv Detail & Related papers (2024-05-22T18:52:09Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and to enforce accurate loss estimation and no-regret decisions.
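As a concrete instance of this family (a sketch of the squared kernel calibration error in the style of Widmann et al.; whether it matches this paper's metrics exactly is an assumption), the plug-in estimator is a simple double sum over calibration residuals, and is a differentiable function of the predicted probabilities:

```python
import numpy as np

def kernel_calibration_error(probs, labels, bandwidth=0.1):
    """Biased plug-in estimate of KCE^2 = E[(y - p)^T k(p, p') (y' - p')]
    with an RBF kernel on predicted probability vectors."""
    n, k = probs.shape
    resid = np.eye(k)[labels] - probs                 # calibration residuals y - p
    sq = ((probs[:, None, :] - probs[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * bandwidth ** 2))            # (n, n) RBF Gram matrix
    return (K * (resid @ resid.T)).sum() / n ** 2
```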
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
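A hedged guess at what selective scaling looks like operationally (the confidence gate below stands in for whatever selector the paper actually uses): apply a stronger temperature only to predictions flagged as likely incorrect.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def selective_scale(logits, t_correct=1.0, t_wrong=2.0, conf_gate=0.8):
    """Scale logits with a larger temperature where the prediction is
    flagged as likely wrong (here: low raw confidence), else leave them
    nearly untouched."""
    likely_wrong = softmax(logits).max(axis=1) < conf_gate
    t = np.where(likely_wrong, t_wrong, t_correct)[:, None]
    return softmax(logits / t)
```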
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- Hidden Heterogeneity: When to Choose Similarity-Based Calibration [12.788224825185633]
Black-box calibration methods are unable to detect subpopulations where calibration could improve prediction accuracy.
The paper proposes a quantitative measure for hidden heterogeneity (HH).
Experiments show that the improvements in calibration achieved by similarity-based calibration methods correlate with the amount of HH present and, given sufficient calibration data, generally exceed calibration achieved by global methods.
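A minimal sketch of what similarity-based calibration means in practice (a plain k-nearest-neighbour calibrator; the paper's estimators and its HH measure are not reproduced here): the calibrated probability for a point is the empirical positive rate among its neighbours in the calibration set, which can adapt to subpopulations a single global map cannot.

```python
import numpy as np

def knn_calibrate(test_feats, calib_feats, calib_labels, k=50):
    """Estimate P(y = 1 | x) for each test point as the positive rate
    among its k nearest calibration points (squared Euclidean distance)."""
    d = ((test_feats[:, None, :] - calib_feats[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]       # indices of k nearest neighbours
    return calib_labels[nn].mean(axis=1)    # local empirical positive rate
```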
arXiv Detail & Related papers (2022-02-03T20:43:25Z)
- Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration [118.26862029820447]
We introduce a new notion, "decision calibration", that requires the predicted distribution and the true distribution to be indistinguishable to a set of downstream decision-makers.
Decision calibration improves decision-making on skin lesions and ImageNet classification with modern neural networks.
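To make the notion concrete, here is a hedged sketch of checking decision calibration for cost-minimising decision-makers: over a family of cost matrices (random ones below, as a stand-in for the paper's decision-maker set), the expected cost the predictions promise should match the cost actually realised.

```python
import numpy as np

def decision_calibration_gap(probs, labels, n_costs=100, seed=0):
    """Largest gap, over random cost matrices C[action, true_class],
    between the expected cost promised by `probs` and the cost realised
    on `labels` when acting greedily on `probs`."""
    rng = np.random.default_rng(seed)
    n, k = probs.shape
    worst = 0.0
    for _ in range(n_costs):
        C = rng.uniform(0.0, 1.0, size=(k, k))
        exp_cost = probs @ C.T                  # (n, actions) expected costs
        act = exp_cost.argmin(axis=1)           # greedy expected-cost actions
        promised = exp_cost[np.arange(n), act].mean()
        realised = C[act, labels].mean()
        worst = max(worst, abs(promised - realised))
    return worst
```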
arXiv Detail & Related papers (2021-07-12T20:17:28Z)
- Meta-Cal: Well-controlled Post-hoc Calibration by Ranking [23.253020991581963]
Post-hoc calibration is a technique to recalibrate a model, and its goal is to learn a calibration map.
Existing approaches mostly focus on constructing calibration maps with low calibration errors.
We study post-hoc calibration for multi-class classification under constraints, since a calibrator with low calibration error is not necessarily useful in practice.
arXiv Detail & Related papers (2021-05-10T12:00:54Z)
- Localized Calibration: Metrics and Recalibration [133.07044916594361]
We propose a fine-grained calibration metric that spans the gap between fully global and fully individualized calibration.
We then introduce a localized recalibration method, LoRe, that improves the localized calibration error (LCE) more than existing recalibration methods.
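A hedged sketch of a localized calibration error (the k-NN neighbourhood below is an assumption; the paper defines its own LCE): compare each point's confidence to the accuracy of similar points, rather than pooling everything into global bins.

```python
import numpy as np

def local_calibration_error(feats, confs, correct, k=50):
    """Neighbourhood analogue of ECE: average |confidence - local accuracy|,
    where local accuracy is measured over each point's k nearest neighbours."""
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]      # skip self at index 0
    return np.abs(confs - correct[nn].mean(axis=1)).mean()
```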
arXiv Detail & Related papers (2021-02-22T07:22:12Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
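The importance-sampling idea can be sketched as reweighting source-domain calibration data by an externally estimated density ratio w(x) ≈ p_target(x) / p_source(x) when fitting the calibrator; using temperature scaling as the calibrator here is an illustrative assumption, not necessarily the paper's choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature_under_shift(logits, labels, density_ratios):
    """Fit a temperature on labelled source data, weighting each sample by
    w(x) so the objective approximates target-domain negative log-likelihood."""
    def weighted_nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        nll = -logp[np.arange(len(labels)), labels]
        return float((density_ratios * nll).sum() / density_ratios.sum())
    return minimize_scalar(weighted_nll, bounds=(0.05, 20.0), method="bounded").x
```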
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
- Better Classifier Calibration for Small Data Sets [0.0]
We show that generating additional data for calibration can improve the performance of calibration algorithms.
The proposed approach adds computational cost, but since the main use case involves small data sets, this extra cost remains insignificant.
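A hedged sketch of the general idea (the jittered resampling below is an illustrative assumption, not the paper's generation scheme): enlarge a small calibration set with synthetic score/label pairs before fitting a data-hungry calibrator such as isotonic regression.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def calibrate_with_generated_data(scores, labels, n_extra=1000, noise=0.02, seed=0):
    """Augment (score, label) calibration pairs by resampling with small
    Gaussian jitter on the scores, then fit isotonic regression."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(scores), size=n_extra)
    extra = np.clip(scores[idx] + rng.normal(0.0, noise, n_extra), 0.0, 1.0)
    s = np.concatenate([scores, extra])
    y = np.concatenate([labels, labels[idx]])
    return IsotonicRegression(out_of_bounds="clip").fit(s, y)
```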
arXiv Detail & Related papers (2020-02-24T12:27:21Z)