A copula-based boosting model for time-to-event prediction with
dependent censoring
- URL: http://arxiv.org/abs/2210.04869v1
- Date: Mon, 10 Oct 2022 17:38:00 GMT
- Title: A copula-based boosting model for time-to-event prediction with
dependent censoring
- Authors: Alise Danielle Midtfjord and Riccardo De Bin and Arne Bang Huseby
- Abstract summary: This paper introduces Clayton-boost, a boosting approach built upon the accelerated failure time model.
It uses a Clayton copula to handle the dependency between the event and censoring distributions.
It shows a strong ability to remove prediction bias in the presence of dependent censoring.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A characteristic feature of time-to-event data analysis is the possible censoring
of the event time. Most statistical learning methods for handling censored data
are limited by the assumption of independent censoring, even though this can lead to
biased predictions when the assumption does not hold. This paper introduces
Clayton-boost, a boosting approach built upon the accelerated failure time model,
which uses a Clayton copula to handle the dependency between the event and
censoring distributions. By taking advantage of a copula, the independent
censoring assumption is no longer needed. In comparisons with commonly used
methods, Clayton-boost shows a strong ability to remove prediction bias in the
presence of dependent censoring and outperforms the competing methods whenever
the dependency strength or the censoring percentage is considerable. The
encouraging performance of Clayton-boost shows that there are indeed reasons to be
critical of the independent censoring assumption, and that real-world data could
benefit greatly from modelling the potential dependency.
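To make the mechanics concrete, the following is a minimal sketch, not the authors' implementation, of the kind of copula-based likelihood such a model maximises. It assumes log-normal accelerated failure time margins for the event and censoring times on the log-time scale and a Clayton copula with parameter theta; all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def clayton_partial(u, v, theta):
    """dC/du for the Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.

    By exchangeability of the Clayton copula, dC/dv(u, v) = clayton_partial(v, u, theta).
    """
    return u ** (-theta - 1.0) * (u ** (-theta) + v ** (-theta) - 1.0) ** (-(1.0 + theta) / theta)

def neg_log_likelihood(log_y, delta, mu_T, mu_C, sigma_T, sigma_C, theta):
    """Copula-based AFT negative log-likelihood on the log-time scale (illustrative only).

    log_y : observed log times, log(min(T, C))
    delta : 1 if the event was observed, 0 if the observation was censored
    mu_T, sigma_T (mu_C, sigma_C) : location/scale of the log-normal event (censoring) margin
    """
    z_T = (log_y - mu_T) / sigma_T
    z_C = (log_y - mu_C) / sigma_C
    S_T, S_C = norm.sf(z_T), norm.sf(z_C)   # marginal survival functions at the observed time
    f_T = norm.pdf(z_T) / sigma_T           # marginal densities of log T and log C
    f_C = norm.pdf(z_C) / sigma_C
    # event observed:  f_T(y) * dC/du evaluated at (S_T(y), S_C(y))
    # censored:        f_C(y) * dC/dv evaluated at (S_T(y), S_C(y))
    ll_event = np.log(f_T) + np.log(clayton_partial(S_T, S_C, theta))
    ll_cens = np.log(f_C) + np.log(clayton_partial(S_C, S_T, theta))
    return -np.sum(delta * ll_event + (1.0 - delta) * ll_cens)
```

For the Clayton copula the dependence parameter relates to Kendall's tau through tau = theta / (theta + 2), so the "dependency strength" in the abstract can be read on that scale; as theta tends to zero the copula reduces to the independence copula and the expression above collapses to the usual AFT likelihood under independent censoring. In Clayton-boost itself, the boosted regression function would enter through the AFT model, i.e. as the covariate-dependent location of the event-time margin, rather than as the constant mu_T used here.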
Related papers
- Practical Evaluation of Copula-based Survival Metrics: Beyond the Independent Censoring Assumption [4.795126873893598]
We propose three copula-based metrics to evaluate survival models in the presence of dependent censoring.
Our empirical analyses in synthetic and semi-synthetic datasets show that our metrics can give error estimates that are closer to the true error.
arXiv Detail & Related papers (2025-02-26T10:28:44Z)
- Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization [60.176008034221404]
Direct Preference Optimization (DPO) and its variants are increasingly used for aligning language models with human preferences.
Prior work has observed that the likelihood of preferred responses often decreases during training.
We demonstrate that likelihood displacement can be catastrophic, shifting probability mass from preferred responses to responses with an opposite meaning.
arXiv Detail & Related papers (2024-10-11T14:22:44Z)
- Mitigating LLM Hallucinations via Conformal Abstention [70.83870602967625]
We develop a principled procedure for determining when a large language model should abstain from responding in a general domain.
We leverage conformal prediction techniques to develop an abstention procedure that benefits from rigorous theoretical guarantees on the hallucination rate (error rate).
Experimentally, our resulting conformal abstention method reliably bounds the hallucination rate on various closed-book, open-domain generative question answering datasets.
arXiv Detail & Related papers (2024-04-04T11:32:03Z)
- Deep Copula-Based Survival Analysis for Dependent Censoring with Identifiability Guarantees [14.251687262492377]
Censoring is the central problem in survival analysis, where either the time-to-event (for instance, death) or the time-to-censoring is observed for each sample.
We propose a flexible deep learning-based survival analysis method that simultaneously accommodates dependent censoring and eliminates the requirement for specifying the ground truth copula.
arXiv Detail & Related papers (2023-12-24T23:34:01Z)
- Accurate Use of Label Dependency in Multi-Label Text Classification Through the Lens of Causality [25.36416774024584]
Multi-Label Text Classification aims to assign the most relevant labels to each given text.
Label dependency may cause the model to suffer from unwanted prediction bias.
We propose a CounterFactual Text Classifier (CFTC) to eliminate the correlation bias and make causality-based predictions.
arXiv Detail & Related papers (2023-10-11T15:28:44Z)
- CenTime: Event-Conditional Modelling of Censoring in Survival Analysis [49.44664144472712]
We introduce CenTime, a novel approach to survival analysis that directly estimates the time to event.
Our method features an innovative event-conditional censoring mechanism that performs robustly even when uncensored data is scarce.
Our results indicate that CenTime offers state-of-the-art performance in predicting time-to-death while maintaining comparable ranking performance.
arXiv Detail & Related papers (2023-09-07T17:07:33Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Adversarial Robustness through the Lens of Causality [105.51753064807014]
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
We propose to incorporate causality into mitigating adversarial vulnerability.
Our method can be seen as the first attempt to leverage causality for mitigating adversarial vulnerability.
arXiv Detail & Related papers (2021-06-11T06:55:02Z)
- Conformalized Survival Analysis [6.92027612631023]
Existing survival analysis techniques heavily rely on strong modelling assumptions.
We develop an inferential method based on ideas from conformal prediction (a bare-bones sketch of the split-conformal idea appears after this list).
The validity and efficiency of our procedure are demonstrated on synthetic data and real COVID-19 data from the UK Biobank.
arXiv Detail & Related papers (2021-03-17T16:32:26Z)
- Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data [24.442094864838225]
We propose a collection of kernelized Stein discrepancy tests for time-to-event data.
Our experimental results show that our proposed methods perform better than existing tests.
arXiv Detail & Related papers (2020-08-19T12:27:43Z)
- Censored Quantile Regression Forest [81.9098291337097]
We develop a new estimating equation that adapts to censoring and leads to the quantile score whenever the data do not exhibit censoring.
The proposed procedure, named censored quantile regression forest, allows us to estimate quantiles of the time-to-event without any parametric modeling assumption.
arXiv Detail & Related papers (2020-01-08T23:20:23Z)
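Two of the entries above (Mitigating LLM Hallucinations via Conformal Abstention and Conformalized Survival Analysis) build on split-conformal calibration. The sketch below shows only the bare one-sided version of that idea, applied here as a lower predictive bound on survival time; it omits the censoring weights used in the actual Conformalized Survival Analysis procedure and the confidence scores used for abstention, so it is a toy illustration with illustrative names, not either paper's method.

```python
import numpy as np

def conformal_lower_bound(pred_cal, time_cal, event_cal, pred_test, alpha=0.1):
    """One-sided split-conformal lower bound on the (log) survival time.

    pred_cal  : model predictions on a held-out calibration set
    time_cal  : observed (log) times on the calibration set
    event_cal : 1 if the event was observed, 0 if censored
    Simplification: only uncensored calibration points are used and no
    censoring weights are applied, unlike the weighted procedure in the paper.
    """
    keep = event_cal == 1
    scores = pred_cal[keep] - time_cal[keep]       # over-prediction on calibration data
    n = scores.size
    k = int(np.ceil((1.0 - alpha) * (n + 1)))      # conformal rank
    if k > n:                                      # too little calibration data for this alpha
        return np.full_like(pred_test, -np.inf, dtype=float)
    q = np.sort(scores)[k - 1]                     # (1 - alpha) conformal quantile of the scores
    return pred_test - q                           # P(T >= bound) >= 1 - alpha under exchangeability
```

Roughly the same calibration logic underlies the abstention procedure: a threshold on a confidence score is calibrated so that abstaining whenever the score falls below it keeps the error (hallucination) rate below the target level.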