Conformalized Survival Distributions: A Generic Post-Process to Increase Calibration
- URL: http://arxiv.org/abs/2405.07374v2
- Date: Mon, 3 Jun 2024 03:32:56 GMT
- Title: Conformalized Survival Distributions: A Generic Post-Process to Increase Calibration
- Authors: Shi-ang Qi, Yakun Yu, Russell Greiner
- Abstract summary: Discrimination and calibration represent two important properties of survival analysis.
Because the two properties are distinct in nature, it is hard for survival models to optimize both simultaneously.
This paper introduces a novel approach utilizing conformal regression that can improve a model's calibration without degrading discrimination.
- Score: 6.868842871753991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discrimination and calibration represent two important properties of survival analysis, with the former assessing the model's ability to accurately rank subjects and the latter evaluating the alignment of predicted outcomes with actual events. Because the two properties are distinct in nature, it is hard for survival models to optimize both simultaneously, especially as many previous results have found that improving calibration tends to diminish discrimination performance. This paper introduces a novel approach utilizing conformal regression that can improve a model's calibration without degrading discrimination. We provide theoretical guarantees for the above claim, and rigorously validate the effectiveness of our approach across 11 real-world datasets, showcasing its practical applicability and robustness in diverse scenarios.
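The split conformal regression machinery the abstract builds on can be illustrated with a minimal sketch. This is generic regression, not the paper's survival-specific post-process, and all function and variable names here are illustrative:

```python
import numpy as np

def conformal_quantile(cal_pred, cal_true, alpha=0.1):
    """Finite-sample-corrected quantile of absolute residuals on a
    held-out calibration set (the nonconformity scores)."""
    scores = np.abs(cal_true - cal_pred)
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def conformal_interval(test_pred, q_hat):
    """Marginal (1 - alpha)-coverage interval around a point prediction."""
    return test_pred - q_hat, test_pred + q_hat

# Toy usage: residuals ~ N(0, 0.5), so q_hat approximates the 90%
# quantile of |N(0, 0.5)| (about 0.82)
rng = np.random.default_rng(0)
cal_pred = rng.normal(size=2000)
cal_true = cal_pred + rng.normal(scale=0.5, size=2000)
q_hat = conformal_quantile(cal_pred, cal_true, alpha=0.1)
lo, hi = conformal_interval(0.0, q_hat)
```

The `(n + 1)` correction in the quantile level is what yields the finite-sample coverage guarantee that plain empirical quantiles lack.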
Related papers
- Toward Conditional Distribution Calibration in Survival Prediction [6.868842871753991]
We propose a method based on conformal prediction that uses the model's predicted individual survival probability at that instance's observed time.
We provide theoretical guarantees for both marginal and conditional calibration and test it extensively across 15 diverse real-world datasets.
arXiv Detail & Related papers (2024-10-27T20:19:46Z) - Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning [6.963971634605796]
We propose a novel contrastive learning approach to enhance discrimination without sacrificing calibration.
Our method employs weighted sampling within a contrastive learning framework, assigning lower penalties to samples with similar survival outcomes.
Experiments on multiple real-world clinical datasets demonstrate that our method outperforms state-of-the-art deep survival models in both discrimination and calibration.
arXiv Detail & Related papers (2024-10-15T07:12:57Z) - MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose MITA, a Meet-In-The-Middle approach that introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z) - Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision-language models.
We show that both OOD classification and OOD calibration errors have a shared upper bound consisting of two terms of ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
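The general mechanism of feeding an auxiliary error estimate into conformal prediction is a normalized (locally adaptive) nonconformity score; a minimal sketch, with all names illustrative rather than the paper's actual API:

```python
import numpy as np

def normalized_scores(cal_pred, cal_true, cal_difficulty):
    """Residuals scaled by a per-example difficulty estimate (> 0),
    e.g. an auxiliary model's predicted error."""
    return np.abs(cal_true - cal_pred) / cal_difficulty

def adaptive_interval(test_pred, test_difficulty, q_hat):
    """Interval widens on examples the auxiliary model flags as hard."""
    half = q_hat * test_difficulty
    return test_pred - half, test_pred + half

# Toy usage: same quantile step as plain conformal, but on normalized scores
cal_pred = np.array([1.0, 2.0, 3.0, 4.0])
cal_true = np.array([1.2, 1.7, 3.4, 3.9])
difficulty = np.array([0.5, 1.0, 2.0, 1.0])
q_hat = np.quantile(normalized_scores(cal_pred, cal_true, difficulty), 0.9)
easy = adaptive_interval(5.0, 0.5, q_hat)  # narrow interval
hard = adaptive_interval(5.0, 2.0, q_hat)  # wide interval
```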
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z) - Adaptive Dimension Reduction and Variational Inference for Transductive Few-Shot Classification [2.922007656878633]
We propose a new clustering method based on Variational Bayesian inference, further improved by Adaptive Dimension Reduction.
Our proposed method significantly improves accuracy in the realistic unbalanced transductive setting on various Few-Shot benchmarks.
arXiv Detail & Related papers (2022-09-18T10:29:02Z) - Deep Learning-Based Discrete Calibrated Survival Prediction [0.0]
We present Discrete Calibrated Survival (DCS), a novel deep neural network for discriminative and calibrated survival prediction.
The enhanced performance of DCS can be attributed to two novel features, the variable temporal output node spacing and the novel loss term.
We believe DCS is an important step towards clinical application of deep-learning-based survival prediction with state-of-the-art discrimination and good calibration.
arXiv Detail & Related papers (2022-08-17T09:40:07Z) - Learning Prediction Intervals for Regression: Generalization and Calibration [12.576284277353606]
We study the generation of prediction intervals in regression for uncertainty quantification.
We use a general learning theory to characterize the optimality-feasibility tradeoff that encompasses Lipschitz continuity and VC-subgraph classes.
We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of testing performances compared to existing benchmarks.
arXiv Detail & Related papers (2021-02-26T17:55:30Z) - Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
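The paper's accuracy temperature scaling algorithm is not spelled out in the abstract, but plain temperature scaling, the standard baseline it builds on, can be sketched as follows (grid-search variant; all names are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Grid search for the temperature minimizing validation NLL."""
    losses = [nll(logits, labels, T) for T in grid]
    return grid[int(np.argmin(losses))]

# Toy usage: a sharply overconfident model that is right only 70% of the
# time should be assigned a temperature well above 1
rng = np.random.default_rng(1)
n, k = 1000, 3
labels_true = rng.integers(0, k, n)
logits = np.zeros((n, k))
logits[np.arange(n), labels_true] = 5.0   # ~99% confidence per example
labels = labels_true.copy()
flip = rng.random(n) < 0.3                # 30% of labels disagree
labels[flip] = (labels_true[flip] + 1) % k
T = fit_temperature(logits, labels)
```

Scaling logits by `1/T` with `T > 1` softens the predicted distribution without changing the argmax, which is why recalibration of this form cannot hurt accuracy.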
arXiv Detail & Related papers (2020-08-21T18:43:37Z) - Decomposed Adversarial Learned Inference [118.27187231452852]
We propose a novel approach, Decomposed Adversarial Learned Inference (DALI).
DALI explicitly matches prior and conditional distributions in both data and code spaces.
We validate the effectiveness of DALI on the MNIST, CIFAR-10, and CelebA datasets.
arXiv Detail & Related papers (2020-04-21T20:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.