DOMINO: Domain-aware Loss for Deep Learning Calibration
- URL: http://arxiv.org/abs/2302.05142v1
- Date: Fri, 10 Feb 2023 09:47:46 GMT
- Title: DOMINO: Domain-aware Loss for Deep Learning Calibration
- Authors: Skylar E. Stolte, Kyle Volle, Aprinda Indahlastari, Alejandro Albizu,
Adam J. Woods, Kevin Brink, Matthew Hale, and Ruogu Fang
- Abstract summary: This paper proposes a novel domain-aware loss function to calibrate deep learning models.
The proposed loss function applies a class-wise penalty based on the similarity between classes within a given target domain.
- Score: 49.485186880996125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved state-of-the-art performance across medical
imaging tasks; however, model calibration is often not considered. Uncalibrated
models are potentially dangerous in high-risk applications since the user does
not know when they will fail. Therefore, this paper proposes a novel
domain-aware loss function to calibrate deep learning models. The proposed loss
function applies a class-wise penalty based on the similarity between classes
within a given target domain. Thus, the approach improves the calibration while
also ensuring that the model makes less risky errors even when incorrect. The
code for this software is available at https://github.com/lab-smile/DOMINO.
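As a rough illustration of the idea, a class-wise penalty of this kind can be sketched as cross-entropy plus a term that charges probability mass placed on classes dissimilar to the true one. The dissimilarity matrix `W`, the weight `beta`, and the exact combination below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def domain_aware_loss(logits, labels, W, beta=1.0):
    """Cross-entropy plus an assumed class-wise penalty.

    W[i, j] is a dissimilarity score between true class i and predicted
    class j (W[i, i] = 0), derived from the target domain, e.g. from a
    label hierarchy or inter-class confusability.
    """
    probs = softmax(logits)
    n = labels.shape[0]
    # Standard cross-entropy on the true-class probabilities.
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    # Penalize probability mass assigned to classes dissimilar to the truth.
    penalty = (probs * W[labels]).sum(axis=-1).mean()
    return ce + beta * penalty
```

Here `W[i, j]` would encode how severe it is to confuse true class `i` with class `j`, so a wrong but semantically close prediction is penalized less than a wildly wrong one, matching the "less risky errors" behaviour described above.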
Related papers
- Calibrating Deep Neural Network using Euclidean Distance [5.675312975435121]
In machine learning, Focal Loss is commonly used to reduce misclassification rates by emphasizing hard-to-classify samples.
High calibration error indicates a misalignment between predicted probabilities and actual outcomes, affecting model reliability.
This research introduces a novel loss function called Focal Calibration Loss (FCL), designed to improve probability calibration while retaining the advantages of Focal Loss in handling difficult samples.
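Calibration error of the kind discussed here is commonly measured with the Expected Calibration Error (ECE), which bins predictions by confidence and compares each bin's average confidence to its accuracy. A minimal sketch (the equal-width binning and bin count are conventional choices, not specific to this paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error (ECE).

    Bins predictions by confidence and averages the gap between mean
    confidence and accuracy within each bin, weighted by bin size.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

A model that is 90% confident and right 90% of the time contributes zero; an overconfident model (90% confident, 50% accurate) scores 0.4.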
arXiv Detail & Related papers (2024-10-23T23:06:50Z)
- The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret [64.04721528586747]
In reinforcement learning, specifying reward functions that capture the intended task can be very challenging.
In this paper, we mathematically show that a sufficiently low expected test error of the reward model guarantees low worst-case regret, but that for any fixed expected test error there exist realistic data distributions under which high regret can still occur.
We then show that similar problems persist even when using policy regularization techniques, commonly employed in methods such as RLHF.
arXiv Detail & Related papers (2024-06-22T06:43:51Z)
- Applying Deep Learning to Calibrate Stochastic Volatility Models [0.0]
We develop a Differential Machine Learning (DML) approach to price vanilla European options.
The trained neural network dramatically reduces the calibration time of the Heston model.
We compare the performance of several regularisation techniques in reducing overfitting and improving the generalisation error.
arXiv Detail & Related papers (2023-09-14T16:38:39Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of their predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z) - Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z) - DOMINO: Domain-aware Model Calibration in Medical Image Segmentation [51.346121016559024]
Modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability.
We propose DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels.
Our results show that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation.
arXiv Detail & Related papers (2022-09-13T15:31:52Z) - Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
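The focal loss referenced above is the standard formulation of Lin et al., FL(p_t) = -(1 - p_t)^γ log(p_t), which down-weights well-classified (high-confidence) examples. A minimal multi-class sketch over predicted probabilities (an illustration, not the authors' code):

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    """Multi-class focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    With gamma = 0 this reduces to ordinary cross-entropy; larger gamma
    down-weights easy examples, which the paper shows also yields
    better-calibrated models.
    """
    # p_t: predicted probability of each sample's true class.
    pt = probs[np.arange(labels.shape[0]), labels]
    return (-((1.0 - pt) ** gamma) * np.log(pt + 1e-12)).mean()
```

Because the modulating factor (1 - p_t)^γ is below one, the focal loss on confident correct predictions is strictly smaller than plain cross-entropy, shifting training effort toward hard samples.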
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.