Scaling of Class-wise Training Losses for Post-hoc Calibration
- URL: http://arxiv.org/abs/2306.10989v1
- Date: Mon, 19 Jun 2023 14:59:37 GMT
- Title: Scaling of Class-wise Training Losses for Post-hoc Calibration
- Authors: Seungjin Jung, Seungmo Seo, Yonghyun Jeong, Jongwon Choi
- Abstract summary: We propose a new calibration method to synchronize the class-wise training losses.
We design a new training loss to alleviate the variance of class-wise training losses by using multiple class-wise scaling factors.
We validate the proposed framework by employing it in various post-hoc calibration methods.
- Score: 6.0632746602205865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-wise training losses often diverge because classes exhibit different
levels of intra-class and inter-class appearance variation, and we find that these
diverging class-wise training losses lead to predictions whose confidence is not
calibrated with their reliability. To resolve this issue, we propose a new calibration
method that synchronizes the class-wise training losses. We design a new training loss
that reduces the variance of the class-wise training losses by using multiple
class-wise scaling factors. Because our framework compensates the training losses of
overfitted classes with those of under-fitted classes, the integrated training loss is
preserved, preventing a performance drop even after model calibration. Furthermore, our
method can easily be employed within post-hoc calibration methods, allowing us to use a
pre-trained model as the initial model and to reduce the additional computation needed
for calibration. We validate the proposed framework by employing it in various post-hoc
calibration methods, where it generally improves calibration performance while
preserving accuracy, and our investigation shows that the approach also performs well
with unbalanced datasets and untuned hyperparameters.
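To make the mechanism concrete, below is a minimal PyTorch-style sketch of class-wise loss scaling under stated assumptions: the function name and the specific scaling rule (pulling each class-wise loss toward their mean, with the factors treated as constants) are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def class_wise_scaled_loss(logits: torch.Tensor, targets: torch.Tensor,
                           eps: float = 1e-8) -> torch.Tensor:
    """Sketch: rescale per-class cross-entropy losses so every class contributes
    roughly the mean class-wise loss. This reduces the variance of the class-wise
    losses while keeping their sum numerically unchanged (the scaling rule here
    is an assumption, not the paper's exact method)."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")

    # Mean training loss of each class present in the batch.
    classes = targets.unique()
    class_losses = torch.stack([per_sample[targets == c].mean() for c in classes])

    # Class-wise scaling factors, treated as constants: over-fitted classes
    # (small loss) are scaled up, under-fitted classes (large loss) are scaled
    # down, so the summed class-wise loss is preserved.
    target_level = class_losses.detach().mean()
    scales = target_level / (class_losses.detach() + eps)

    return (scales * class_losses).mean()
```

Because the scaled class-wise losses sum to approximately the same value as the original ones, such a loss could in principle be minimized for a few epochs starting from pre-trained weights, in line with the post-hoc usage described in the abstract.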
Related papers
- Scaling Laws for Precision [73.24325358259753]
We devise "precision-aware" scaling laws for both training and inference.
For inference, we find that the degradation introduced by post-training quantization increases as models are trained on more data.
For training, our scaling laws allow us to predict the loss of a model with different parts in different precisions.
arXiv Detail & Related papers (2024-11-07T00:10:10Z)
- Optimizing Estimators of Squared Calibration Errors in Classification [2.3020018305241337]
We propose a mean-squared error-based risk that enables the comparison and optimization of estimators of squared calibration errors.
Our approach advocates for a training-validation-testing pipeline when estimating a calibration error.
arXiv Detail & Related papers (2024-10-09T15:58:06Z)
- Probabilistic Calibration by Design for Neural Network Regression [2.3020018305241337]
We introduce a novel end-to-end model training procedure called Quantile Recalibration Training.
We demonstrate the performance of our method in a large-scale experiment involving 57 regression datasets.
arXiv Detail & Related papers (2024-03-18T17:04:33Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Multi-Head Multi-Loss Model Calibration [13.841172927454204]
We introduce a form of simplified ensembling that bypasses the costly training and inference of deep ensembles.
Specifically, each head is trained to minimize a weighted Cross-Entropy loss, but the weights differ among the branches.
We show that the resulting averaged predictions can achieve excellent calibration without sacrificing accuracy on two challenging datasets; a minimal sketch of this multi-head, weighted-loss idea appears after this list.
arXiv Detail & Related papers (2023-03-02T09:32:32Z)
- Bag of Tricks for In-Distribution Calibration of Pretrained Transformers [8.876196316390493]
We present an empirical study on confidence calibration for pre-trained language models (PLMs).
We find that the ensemble model overfitted to the training set shows sub-par calibration performance.
We propose the Calibrated PLM (CALL), a combination of calibration techniques.
arXiv Detail & Related papers (2023-02-13T21:11:52Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- Modular Conformal Calibration [80.33410096908872]
We introduce a versatile class of algorithms for recalibration in regression.
This framework allows one to transform any regression model into a calibrated probabilistic model.
We conduct an empirical study of MCC on 17 regression datasets.
arXiv Detail & Related papers (2022-06-23T03:25:23Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
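For the Multi-Head Multi-Loss entry above, the following sketch illustrates the described mechanism: one shared backbone with several classification heads, each trained with a differently weighted cross-entropy loss, and the averaged softmax used at inference. The head count, the way the class weights are drawn, and all names are assumptions rather than the cited paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadClassifier(nn.Module):
    """Illustrative sketch of multi-head, multi-loss calibration (assumptions
    throughout): K linear heads on a shared backbone, each minimizing a
    differently weighted cross-entropy; predictions are the averaged softmax."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int,
                 num_heads: int = 3):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(nn.Linear(feat_dim, num_classes)
                                   for _ in range(num_heads))
        # One fixed, randomly drawn class-weight vector per head (assumption).
        self.register_buffer("class_weights",
                             torch.rand(num_heads, num_classes) + 0.5)

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]  # one logit tensor per head

    def loss(self, head_logits, targets):
        # Each head minimizes its own weighted cross-entropy.
        return sum(
            F.cross_entropy(logits, targets, weight=w)
            for logits, w in zip(head_logits, self.class_weights)
        ) / len(head_logits)

    @torch.no_grad()
    def predict_proba(self, x):
        # Averaged softmax over the heads serves as the (better calibrated) prediction.
        return torch.stack([F.softmax(l, dim=-1) for l in self.forward(x)]).mean(dim=0)
```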