Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning
- URL: http://arxiv.org/abs/2410.11340v2
- Date: Thu, 21 Nov 2024 06:16:16 GMT
- Title: Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning
- Authors: Dongjoon Lee, Hyeryn Park, Changhee Lee
- Abstract summary: We propose a novel contrastive learning approach to enhance discrimination without sacrificing calibration.
Our method employs weighted sampling within a contrastive learning framework, assigning lower penalties to samples with similar survival outcomes.
Experiments on multiple real-world clinical datasets demonstrate that our method outperforms state-of-the-art deep survival models in both discrimination and calibration.
- Score: 6.963971634605796
- License:
- Abstract: Previous deep learning approaches for survival analysis have primarily relied on ranking losses to improve discrimination performance, which often comes at the expense of calibration performance. To address this issue, we propose a novel contrastive learning approach specifically designed to enhance discrimination without sacrificing calibration. Our method employs weighted sampling within a contrastive learning framework, assigning lower penalties to samples with similar survival outcomes. This aligns well with the assumption that patients with similar event times share similar clinical statuses. Consequently, when augmented with the commonly used negative log-likelihood loss, our approach significantly improves discrimination performance without directly manipulating the model outputs, thereby achieving better calibration. Experiments on multiple real-world clinical datasets demonstrate that our method outperforms state-of-the-art deep survival models in both discrimination and calibration. Comprehensive ablation studies further validate the effectiveness of our approach through quantitative and qualitative analyses.
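The weighting idea described above lends itself to a short sketch: scale down each pair's contribution to the contrastive denominator when the two survival times are close, and add the result to the usual likelihood loss. The snippet below is a minimal illustration under assumed choices (a Gaussian kernel on time differences, a nearest-outcome positive, and the names `outcome_weighted_contrastive_loss`, `temperature`, and `bandwidth`); it is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def outcome_weighted_contrastive_loss(z, event_times, temperature=0.1, bandwidth=1.0):
    """Sketch of an outcome-aware contrastive objective (assumed form).

    z           : (N, d) embeddings from a survival encoder
    event_times : (N,) observed event/censoring times
    Pairs with similar survival times receive a small weight in the
    denominator, so they are not pushed apart as hard as dissimilar pairs.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                                # (N, N) similarities
    dt = event_times.unsqueeze(0) - event_times.unsqueeze(1)     # pairwise time gaps
    weight = 1.0 - torch.exp(-(dt ** 2) / (2 * bandwidth ** 2))  # ~0 if outcomes match
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    weight = weight.masked_fill(eye, 0.0)

    exp_sim = torch.exp(sim).masked_fill(eye, 0.0)
    denom = (weight * exp_sim).sum(dim=1) + 1e-8                 # weighted negatives
    # Use the non-self sample with the closest outcome as the positive.
    pos_idx = (dt.abs() + eye.float() * 1e9).argmin(dim=1)
    pos_sim = sim.gather(1, pos_idx.unsqueeze(1)).squeeze(1)
    return -(pos_sim - torch.log(torch.exp(pos_sim) + denom)).mean()
```

In training, a term like this would be added to the negative log-likelihood loss with a trade-off coefficient, so the likelihood term, not the contrastive term, continues to shape the predicted distributions and calibration is preserved.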
Related papers
- A Large-Scale Neutral Comparison Study of Survival Models on Low-Dimensional Data [7.199059106376138]
This work presents the first large-scale neutral benchmark experiment focused on single-event, right-censored, low-dimensional survival data.
We benchmark 18 models, ranging from classical statistical approaches to many common machine learning methods, on 32 publicly available datasets.
arXiv Detail & Related papers (2024-06-06T14:13:38Z)
- Conformalized Survival Distributions: A Generic Post-Process to Increase Calibration [6.868842871753991]
Discrimination and calibration represent two important properties of survival analysis.
Because of their distinct nature, it is hard for survival models to optimize both simultaneously.
This paper introduces a novel approach utilizing conformal regression that can improve a model's calibration without degrading discrimination.
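As a rough illustration of such a post-process (not the paper's exact procedure, and ignoring censoring for brevity), a split-conformal correction can be computed on a calibration set and added to future predicted survival-time quantiles; `conformal_shift` and its arguments below are hypothetical names.

```python
import numpy as np

def conformal_shift(pred_q_cal, obs_times_cal, alpha=0.1):
    """Split-conformal correction for predicted (1 - alpha) survival-time
    quantiles, sketched without handling censoring (the paper's method does).

    pred_q_cal    : predicted quantiles on a held-out calibration set
    obs_times_cal : observed event times on the same set
    Returns a value to add to new predictions so that roughly (1 - alpha)
    of true event times fall below the adjusted quantile.
    """
    scores = obs_times_cal - pred_q_cal                  # conformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level)

# Usage (hypothetical): adjusted = model_quantile_new + conformal_shift(pq, t)
```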
arXiv Detail & Related papers (2024-05-12T20:27:34Z)
- Deep Learning-Based Discrete Calibrated Survival Prediction [0.0]
We present Discrete Calibrated Survival (DCS), a novel deep neural network for discriminative and well-calibrated survival prediction.
The enhanced performance of DCS can be attributed to two novel features: variable temporal output node spacing and a new loss term.
We believe DCS is an important step towards clinical application of deep-learning-based survival prediction with state-of-the-art discrimination and good calibration.
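The two ingredients named here can be illustrated generically: time bins whose edges follow the event-time distribution, and a discrete-time likelihood over per-bin hazards. The sketch below assumes a plain feed-forward network and quantile-based bin edges; it is not the DCS architecture or its exact loss.

```python
import numpy as np
import torch
import torch.nn as nn

def quantile_bin_edges(event_times, n_bins=10):
    """Variable bin spacing: denser bins where events are denser (assumed scheme)."""
    return np.quantile(event_times, np.linspace(0.0, 1.0, n_bins + 1))

class DiscreteSurvivalNet(nn.Module):
    """Feed-forward network with one conditional-hazard output per time bin."""
    def __init__(self, n_features, n_bins=10, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_bins))

    def forward(self, x):
        return torch.sigmoid(self.net(x))       # hazards in (0, 1) per bin

def discrete_nll(hazards, bin_idx, event):
    """Discrete-time survival negative log-likelihood.

    hazards : (N, n_bins) conditional hazards
    bin_idx : (N,) long tensor, bin containing each event/censoring time
    event   : (N,) float tensor, 1 if event observed, 0 if censored
    """
    n_bins = hazards.shape[1]
    idx = torch.arange(n_bins, device=hazards.device).unsqueeze(0)
    survived = (idx < bin_idx.unsqueeze(1)).float()      # bins survived before the event bin
    surv_ll = (torch.log(1 - hazards + 1e-8) * survived).sum(dim=1)
    h = hazards.gather(1, bin_idx.unsqueeze(1)).squeeze(1)
    event_ll = event * torch.log(h + 1e-8) + (1 - event) * torch.log(1 - h + 1e-8)
    return -(surv_ll + event_ll).mean()
```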
arXiv Detail & Related papers (2022-08-17T09:40:07Z)
- Practical Insights of Repairing Model Problems on Image Classification [3.2932371462787513]
Additional training of a deep learning model can negatively affect its results, turning an initially positive sample into a negative one (degradation).
In this talk, we will present implications derived from a comparison of methods for reducing degradation.
The results imply that practitioners should continually reconsider which method is preferable, taking into account dataset availability and the life cycle of the AI system.
arXiv Detail & Related papers (2022-05-14T19:28:55Z)
- Stabilizing Adversarially Learned One-Class Novelty Detection Using Pseudo Anomalies [22.48845887819345]
Anomaly scores have been formulated using the reconstruction loss of adversarially learned generators and/or the classification loss of discriminators.
Unavailability of anomaly examples in the training data makes optimization of such networks challenging.
We propose a robust anomaly detection framework that overcomes such instability by transforming the fundamental role of the discriminator from identifying real vs. fake data to distinguishing good vs. bad quality reconstructions.
arXiv Detail & Related papers (2022-03-25T15:37:52Z)
- SurvITE: Learning Heterogeneous Treatment Effects from Time-to-Event Data [83.50281440043241]
We study the problem of inferring heterogeneous treatment effects from time-to-event data.
We propose a novel deep learning method for treatment-specific hazard estimation based on balancing representations.
arXiv Detail & Related papers (2021-10-26T20:13:17Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
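A generic stand-in for such positive sampling (not the paper's two EHR-specific strategies) pairs each record with another that shares a clinical attribute, for example a diagnosis label; `label_based_positive_pairs` below is a hypothetical helper.

```python
import torch

def label_based_positive_pairs(labels):
    """For each sample, pick a random other sample with the same label as its
    contrastive positive; falls back to itself if no such sample exists."""
    n = labels.shape[0]
    pos = torch.arange(n)
    for i in range(n):
        same = torch.nonzero(labels == labels[i], as_tuple=False).squeeze(1)
        same = same[same != i]
        if len(same) > 0:
            pos[i] = same[torch.randint(len(same), (1,)).item()]
    return pos
```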
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Bayesian prognostic covariate adjustment [59.75318183140857]
Historical data about disease outcomes can be integrated into the analysis of clinical trials in many ways.
We build on existing literature that uses prognostic scores from a predictive model to increase the efficiency of treatment effect estimates.
arXiv Detail & Related papers (2020-12-24T05:19:03Z)
- Increasing the efficiency of randomized trial estimates via linear adjustment for a prognostic score [59.75318183140857]
Estimating causal effects from randomized experiments is central to clinical research.
Most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control.
arXiv Detail & Related papers (2020-12-17T21:10:10Z)
- Compressing Large Sample Data for Discriminant Analysis [78.12073412066698]
We consider the computational issues due to large sample size within the discriminant analysis framework.
We propose a new compression approach for reducing the number of training samples for linear and quadratic discriminant analysis.
arXiv Detail & Related papers (2020-05-08T05:09:08Z)
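As a point of reference only, the simplest way to cut the training-set size before discriminant analysis is random subsampling; the paper's compression approach is more principled, but this hedged sketch shows where such a reduction slots into the pipeline (scikit-learn's `LinearDiscriminantAnalysis` is assumed available).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def subsample_then_lda(X, y, keep=0.1, seed=0):
    """Naive baseline: fit LDA on a random fraction of the training samples.

    A compression method would replace this random selection with a
    principled reduction; only the pipeline shape is illustrated here.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=max(1, int(keep * len(X))), replace=False)
    return LinearDiscriminantAnalysis().fit(X[idx], y[idx])
```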
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.