Can Calibration Improve Sample Prioritization?
- URL: http://arxiv.org/abs/2210.06592v1
- Date: Wed, 12 Oct 2022 21:11:08 GMT
- Title: Can Calibration Improve Sample Prioritization?
- Authors: Ganesh Tata, Gautham Krishna Gudur, Gopinath Chennupati, Mohammad
Emtiyaz Khan
- Abstract summary: We study the effect of popular calibration techniques in selecting better subsets of samples during training.
We observe that calibration can improve the quality of subsets, reduce the number of examples per epoch, and thereby speed up the overall training process.
- Score: 15.17599622490369
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Calibration can reduce overconfident predictions of deep neural networks, but
can calibration also accelerate training by selecting the right samples? In
this paper, we show that it can. We study the effect of popular calibration
techniques in selecting better subsets of samples during training (also called
sample prioritization) and observe that calibration can improve the quality of
subsets, reduce the number of examples per epoch (by at least 70%), and can
thereby speed up the overall training process. We further study the effect of
using calibrated pre-trained models coupled with calibration during training to
guide sample prioritization, which again seems to improve the quality of
samples selected.
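To make the prioritization loop concrete, below is a minimal sketch, not the authors' implementation: a classifier is temperature-scaled on a held-out set, and only the least-confident fraction of training samples is kept for the next epoch. The PyTorch framing, the `fit_temperature`/`prioritize` helpers, least-confidence scoring, and the 30% retention rate are all illustrative assumptions.

```python
# Hedged sketch of calibration-guided sample prioritization. Assumptions:
# a PyTorch classifier, held-out logits/labels for temperature scaling, and
# least-confidence as the uncertainty score; none of this is taken verbatim
# from the paper.
import torch
import torch.nn.functional as F


def fit_temperature(logits, labels, lr=0.01, steps=200):
    """Temperature scaling: fit a single scalar T on held-out logits/labels."""
    log_t = torch.zeros(1, requires_grad=True)            # T = exp(log_t), starts at 1
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()


@torch.no_grad()
def prioritize(model, data, temperature, keep_frac=0.3):
    """Indices of the `keep_frac` least-confident samples under the calibrated model."""
    probs = F.softmax(model(data) / temperature, dim=1)
    confidence = probs.max(dim=1).values                   # calibrated top-1 confidence
    k = max(1, int(keep_frac * len(data)))
    return torch.topk(-confidence, k).indices              # least confident first
```

In a training loop, each epoch would then run only on `data[prioritize(model, data, temperature)]`, which is how a 70%+ reduction in examples per epoch could translate into faster training; the paper's exact scoring rule, calibration method, and schedule may differ.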
Related papers
- Calibrating Where It Matters: Constrained Temperature Scaling [0.0]
Clinical decision makers can use calibrated classifiers to minimise expected costs given their own cost function.
We demonstrate improved calibration where it matters using convnets trained to classify dermoscopy images.
arXiv Detail & Related papers (2024-06-17T12:14:31Z)
- C-TPT: Calibrated Test-Time Prompt Tuning for Vision-Language Models via Text Feature Dispersion [54.81141583427542]
In deep learning, test-time adaptation has gained attention as a method for model fine-tuning without the need for labeled data.
This paper explores calibration during test-time prompt tuning by leveraging the inherent properties of CLIP.
We present a novel method, Calibrated Test-time Prompt Tuning (C-TPT), for optimizing prompts during test-time with enhanced calibration.
arXiv Detail & Related papers (2024-03-21T04:08:29Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions (a kernel-penalty sketch appears after this list).
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze the problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- Post-hoc Uncertainty Calibration for Domain Drift Scenarios [46.88826364244423]
We show that existing post-hoc calibration methods yield highly over-confident predictions under domain shift.
We introduce a simple strategy where perturbations are applied to samples in the validation set before performing the post-hoc calibration step (a minimal sketch appears after this list).
arXiv Detail & Related papers (2020-12-20T18:21:13Z)
- Diverse Ensembles Improve Calibration [14.678791405731486]
We propose a simple technique to improve calibration, using a different data augmentation for each ensemble member.
We additionally use the idea of 'mixing' un-augmented and augmented inputs to improve calibration when test and training distributions are the same (see the sketch after this list).
arXiv Detail & Related papers (2020-07-08T15:48:12Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
- Multi-Class Uncertainty Calibration via Mutual Information Maximization-based Binning [8.780958735684958]
Post-hoc multi-class calibration is a common approach for providing confidence estimates of deep neural network predictions.
Recent work has shown that widely used scaling methods underestimate their calibration error.
We propose a shared class-wise (sCW) calibration strategy, sharing one calibrator among similar classes.
arXiv Detail & Related papers (2020-06-23T15:31:59Z)
- Robust Sampling in Deep Learning [62.997667081978825]
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization.
We address this problem with a new regularization method based on distributionally robust optimization.
During training, samples are selected according to their accuracy so that the worst-performing samples contribute the most to the optimization (see the sketch after this list).
arXiv Detail & Related papers (2020-06-04T09:46:52Z)
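For "Calibration by Distribution Matching: Trainable Kernel Calibration Metrics" above, a minimal sketch of what a differentiable, kernel-based calibration objective inside empirical risk minimization can look like. It uses an MMCE-style penalty with a Laplacian kernel on confidences as a stand-in, not necessarily the paper's own metric; `lam` is an assumed penalty weight.

```python
# Sketch: MMCE-style kernel calibration penalty added to the ERM loss
# (illustrative, not the paper's proposed metric).
import torch
import torch.nn.functional as F


def kernel_calibration_penalty(logits, labels, bandwidth=0.4):
    """Differentiable pairwise kernel estimate of miscalibration on a batch."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    correct = (pred == labels).float()
    err = conf - correct                                            # calibration residuals
    k = torch.exp(-torch.abs(conf[:, None] - conf[None, :]) / bandwidth)
    return (err[:, None] * err[None, :] * k).mean()

# total_loss = F.cross_entropy(logits, labels) + lam * kernel_calibration_penalty(logits, labels)
```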
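For "Post-hoc Uncertainty Calibration for Domain Drift Scenarios" above, a sketch of the perturb-then-calibrate idea: the validation set is perturbed before the post-hoc calibrator is fit, so the calibrator also sees shifted inputs. Gaussian noise as the perturbation and temperature scaling (reusing `fit_temperature` from the first sketch) are illustrative assumptions.

```python
# Sketch: compute validation logits on perturbed inputs so the post-hoc
# calibrator is fit under a proxy for domain shift (Gaussian noise is an
# assumed, illustrative perturbation).
import torch


@torch.no_grad()
def logits_on_perturbed(model, val_data, noise_std=0.1):
    """Logits on a noise-perturbed copy of the validation set."""
    noisy = val_data + noise_std * torch.randn_like(val_data)
    return model(noisy)

# Fit the calibrator on perturbed validation data instead of the clean set:
# temperature = fit_temperature(logits_on_perturbed(model, val_data), val_labels)
```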
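For "Diverse Ensembles Improve Calibration" above, a sketch of training each ensemble member with its own augmentation while mixing un-augmented and augmented batches; the 50% mixing probability and the generic `augment` callable are assumptions.

```python
# Sketch: per-member augmentation with clean/augmented batch mixing
# (illustrative training loop, not the paper's exact procedure).
import torch
import torch.nn.functional as F


def train_member(model, augment, loader, optimizer, mix_prob=0.5):
    """Train one ensemble member with its own augmentation, mixing in clean batches."""
    model.train()
    for x, y in loader:
        inputs = x if torch.rand(1).item() < mix_prob else augment(x)
        loss = F.cross_entropy(model(inputs), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At test time the members' softmax outputs would be averaged; the diversity across members' augmentations is what the entry credits for improved calibration.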
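For "Robust Sampling in Deep Learning" above, a sketch in the spirit of distributionally robust optimization: per-sample losses are computed and only the worst-performing fraction drives the update. The 30% fraction and the top-k formulation are illustrative assumptions.

```python
# Sketch: let the hardest (worst-performing) samples dominate the update
# (illustrative DRO-style objective, not the paper's exact formulation).
import torch
import torch.nn.functional as F


def worst_case_loss(model, x, y, hard_frac=0.3):
    """Average loss over the hardest `hard_frac` of the batch."""
    losses = F.cross_entropy(model(x), y, reduction="none")   # per-sample losses
    k = max(1, int(hard_frac * len(losses)))
    return torch.topk(losses, k).values.mean()
```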