Improved Trainable Calibration Method for Neural Networks on Medical
Imaging Classification
- URL: http://arxiv.org/abs/2009.04057v1
- Date: Wed, 9 Sep 2020 01:25:53 GMT
- Title: Improved Trainable Calibration Method for Neural Networks on Medical
Imaging Classification
- Authors: Gongbo Liang, Yu Zhang, Xiaoqin Wang, Nathan Jacobs
- Abstract summary: Empirically, neural networks are often miscalibrated and overconfident in their predictions.
We propose a novel calibration approach that maintains the overall classification accuracy while significantly improving model calibration.
- Score: 17.941506832422192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have shown that deep neural networks can achieve super-human
performance in a wide range of image classification tasks in the medical
imaging domain. However, these works have primarily focused on classification
accuracy, ignoring the important role of uncertainty quantification.
Empirically, neural networks are often miscalibrated and overconfident in their
predictions. This miscalibration could be problematic in any automatic
decision-making system, but we focus on the medical field in which neural
network miscalibration has the potential to lead to significant treatment
errors. We propose a novel calibration approach that maintains the overall
classification accuracy while significantly improving model calibration. The
proposed approach is based on expected calibration error, which is a common
metric for quantifying miscalibration. Our approach can be easily integrated
into any classification task as an auxiliary loss term, thus not requiring an
explicit training round for calibration. We show that our approach reduces
calibration error significantly across various architectures and datasets.
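To make the recipe concrete, the sketch below shows one way an ECE-motivated penalty can be attached to cross-entropy as an auxiliary term. It is a single-bin simplification for illustration, not necessarily the authors' exact objective; the function names and the weight `beta` are assumptions.

    import torch
    import torch.nn.functional as F

    def confidence_accuracy_gap(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Single-bin simplification of expected calibration error: for a
        # well-calibrated model, mean confidence should match accuracy.
        probs = logits.softmax(dim=1)
        conf, pred = probs.max(dim=1)
        accuracy = pred.eq(labels).float().mean()  # no gradient flows through this term
        return (conf.mean() - accuracy).abs()      # gradient flows through the confidences

    def calibrated_loss(logits: torch.Tensor, labels: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
        # Cross-entropy drives accuracy; the auxiliary term penalizes the
        # batch-level gap between confidence and accuracy. `beta` is an
        # assumed weighting hyperparameter.
        return F.cross_entropy(logits, labels) + beta * confidence_accuracy_gap(logits, labels)

Because the accuracy term is piecewise constant, gradients reach the network only through the mean confidence, so the penalty discourages over- and under-confidence without altering the classification objective.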
Related papers
- Calibrating Deep Neural Network using Euclidean Distance [5.675312975435121]
In machine learning, Focal Loss is commonly used to reduce misclassification rates by emphasizing hard-to-classify samples.
High calibration error indicates a misalignment between predicted probabilities and actual outcomes, affecting model reliability.
This research introduces a novel loss function called Focal Calibration Loss (FCL), designed to improve probability calibration while retaining the advantages of Focal Loss in handling difficult samples.
arXiv Detail & Related papers (2024-10-23T23:06:50Z)
- Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks [3.5284544394841117]
We show that decoupling the training of feature extraction layers and classification layers in over-parametrized DNN architectures significantly improves model calibration.
We illustrate that these methods improve calibration across ViT and WRN architectures on several image classification benchmark datasets.
arXiv Detail & Related papers (2024-05-02T11:36:17Z)
- On the calibration of neural networks for histological slide-level classification [47.99822253865054]
We compare the classification performance of three neural network architectures that aggregate patch-level feature representations into a slide-level prediction.
We observe that Transformers lead to good results in terms of classification performance and calibration.
arXiv Detail & Related papers (2023-12-15T11:46:29Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze the problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- Multi-Head Multi-Loss Model Calibration [13.841172927454204]
We introduce a form of simplified ensembling that bypasses the costly training and inference of deep ensembles.
Specifically, each head is trained to minimize a weighted Cross-Entropy loss, but the weights are different among the different branches.
We show that the resulting averaged predictions can achieve excellent calibration without sacrificing accuracy on two challenging datasets.
arXiv Detail & Related papers (2023-03-02T09:32:32Z)
- DOMINO: Domain-aware Model Calibration in Medical Image Segmentation [51.346121016559024]
Modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability.
We propose DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels.
Our results show that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation.
arXiv Detail & Related papers (2022-09-13T15:31:52Z)
- Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error [46.12703434199988]
We introduce a new differentiable surrogate for expected calibration error (DECE) that allows calibration quality to be directly optimised (a soft-binning sketch of this idea appears after the list below).
We also propose a meta-learning framework that uses DECE to optimise for validation set calibration.
arXiv Detail & Related papers (2021-06-17T15:47:50Z)
- Post-hoc Calibration of Neural Networks by g-Layers [51.42640515410253]
In recent years, there has been a surge of research on neural network calibration.
It is known that minimizing Negative Log-Likelihood (NLL) will lead to a calibrated network on the training set if the global optimum is attained.
We prove that even though the base network ($f$) does not reach the global optimum of NLL, adding extra layers ($g$) and minimizing NLL over the parameters of $g$ alone yields a calibrated network (a minimal single-temperature example appears after the list below).
arXiv Detail & Related papers (2020-06-23T07:55:10Z)
- Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks [54.23874144090228]
A common approach is to learn a post-hoc calibration function that transforms the output of the original network into calibrated confidence scores.
Previous post-hoc calibration techniques work only with simple calibration functions.
We propose a new neural network architecture that represents a class of intra order-preserving functions.
arXiv Detail & Related papers (2020-03-15T12:57:21Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
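Regarding the Meta-Calibration entry above: standard ECE is non-differentiable because hard bin assignments are piecewise constant, so a differentiable surrogate typically replaces them with soft membership weights. The sketch below is one plausible soft-binning construction under assumed choices (Gaussian-kernel memberships, a `temperature` hyperparameter, equal-width bin centers); the DECE paper's exact surrogate may differ.

    import torch

    def soft_binned_ece(logits, labels, n_bins=10, temperature=0.01):
        probs = logits.softmax(dim=1)
        conf, pred = probs.max(dim=1)
        correct = pred.eq(labels).float()
        centers = torch.linspace(0.5 / n_bins, 1.0 - 0.5 / n_bins, n_bins, device=logits.device)
        # Soft membership of each sample in each bin: shape (N, n_bins), rows sum to 1.
        weights = torch.softmax(-(conf.unsqueeze(1) - centers).pow(2) / temperature, dim=1)
        bin_mass = weights.sum(dim=0) + 1e-8                            # expected samples per bin
        bin_conf = (weights * conf.unsqueeze(1)).sum(dim=0) / bin_mass  # per-bin mean confidence
        bin_acc = (weights * correct.unsqueeze(1)).sum(dim=0) / bin_mass  # per-bin accuracy
        # Mass-weighted |accuracy - confidence| gap, mirroring hard-binned ECE.
        return (bin_mass / conf.numel() * (bin_acc - bin_conf).abs()).sum()

With soft memberships, every bin statistic is a smooth function of the predicted confidences, so the surrogate can be minimized by gradient descent or used inside a meta-learning loop.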
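Regarding the Post-hoc Calibration by g-Layers entry above: the result motivates freezing the base network $f$ and training an appended module $g$ by minimizing NLL on held-out data. Below is a minimal sketch using the simplest possible $g$, a single learned temperature; the paper's $g$-layers can be richer modules, and all names and hyperparameters here are assumptions.

    import torch
    import torch.nn.functional as F

    class TemperatureG(torch.nn.Module):
        # The simplest g-layer: rescale logits by one learned temperature.
        def __init__(self):
            super().__init__()
            self.log_t = torch.nn.Parameter(torch.zeros(()))

        def forward(self, logits):
            return logits / self.log_t.exp()

    def fit_g(logits_val, labels_val, steps=200, lr=0.05):
        # Post-hoc: logits_val come from the frozen base network f (no grad);
        # only the parameters of g are optimized by minimizing NLL.
        g = TemperatureG()
        opt = torch.optim.Adam(g.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(g(logits_val), labels_val)
            loss.backward()
            opt.step()
        return g

Usage: collect validation-set logits from the frozen base network, call `fit_g(logits_val, labels_val)`, and apply the returned module to test-time logits before the softmax.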
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.