Variational Deep Survival Machines: Survival Regression with Censored Outcomes
- URL: http://arxiv.org/abs/2404.15595v1
- Date: Wed, 24 Apr 2024 02:16:00 GMT
- Title: Variational Deep Survival Machines: Survival Regression with Censored Outcomes
- Authors: Qinxin Wang, Jiayuan Huang, Junhui Li, Jiaming Liu
- Abstract summary: Survival regression aims to predict the time when an event of interest will take place, typically a death or a failure.
We present a novel method to predict the survival time by better clustering the survival data and combining primitive distributions.
- Score: 11.82370259688716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Survival regression aims to predict the time when an event of interest will take place, typically a death or a failure. A fully parametric method [18] has been proposed to estimate the survival function as a mixture of individual parametric distributions in the presence of censoring. In this paper, we present a novel method to predict the survival time by better clustering the survival data and combining primitive distributions. We propose two variants of the variational auto-encoder (VAE), discrete and continuous, to generate the latent variables for clustering input covariates. The model is trained end to end by jointly optimizing the VAE loss and the regression loss. Thorough experiments on the SUPPORT and FLCHAIN datasets show that our method effectively improves the clustering result and reaches competitive scores against previous methods. We also demonstrate the superior long-term predictions of our model. Our code is available at https://github.com/qinzzz/auton-survival-785.
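The core modelling idea in the abstract, predicting survival as a mixture of primitive parametric distributions weighted by per-sample cluster assignments, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the Weibull primitives, the cluster logits (which in the paper would come from a VAE latent code), and all parameter values below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mixture_survival(t, logits, scales, shapes):
    """Survival probability S(t|x) as a weighted mix of Weibull primitives.

    logits : (n, K) per-sample cluster logits (in the paper, derived
             from a VAE latent encoding of the covariates)
    scales, shapes : (K,) Weibull parameters of each primitive
    """
    pi = softmax(logits)                              # (n, K) mixture weights
    # Weibull survival: S_k(t) = exp(-(t / scale_k) ** shape_k)
    s_k = np.exp(-np.power(t[:, None] / scales[None, :], shapes[None, :]))
    return (pi * s_k).sum(axis=1)                     # (n,)

# toy example: one short-lived and one long-lived cluster
t = np.array([1.0, 5.0, 20.0])
logits = np.zeros((3, 2))                             # uniform mixture weights
scales = np.array([2.0, 30.0])
shapes = np.array([1.0, 1.5])
S = mixture_survival(t, logits, scales, shapes)
```

The resulting survival curve is a valid survival function (bounded in (0, 1) and monotonically decreasing in t); training would fit the logits and Weibull parameters jointly under the censored likelihood.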
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z) - TripleSurv: Triplet Time-adaptive Coordinate Loss for Survival Analysis [15.496918127515665]
We propose a time-adaptive coordinate loss function, TripleSurv, to handle the complexities of learning process and exploit valuable survival time values.
Our TripleSurv is evaluated on three real-world survival datasets and a public synthetic dataset.
arXiv Detail & Related papers (2024-01-05T08:37:57Z) - Deep Ensembles Meets Quantile Regression: Uncertainty-aware Imputation
for Time Series [49.992908221544624]
Time series data often exhibit numerous missing values; filling them in is the time series imputation task.
Previous deep learning methods have been shown to be effective for time series imputation.
We propose a non-generative time series imputation method that produces accurate imputations with inherent uncertainty.
arXiv Detail & Related papers (2023-12-03T05:52:30Z) - Mixture of Experts with Uncertainty Voting for Imbalanced Deep
Regression Problems [22.041067758144077]
We propose a mixture-of-experts approach to imbalanced regression problems.
We replace traditional regression losses with a negative log-likelihood loss that also predicts sample-wise aleatoric uncertainty.
We show experimentally that such a loss handles the imbalance better.
arXiv Detail & Related papers (2023-05-24T14:12:21Z) - Learning Survival Distribution with Implicit Survival Function [15.588273962274393]
We propose Implicit Survival Function (ISF) based on Implicit Neural Representation for survival distribution estimation without strong assumptions.
Experimental results show ISF outperforms the state-of-the-art methods in three public datasets.
arXiv Detail & Related papers (2023-05-24T02:51:29Z) - Concordance based Survival Cobra with regression type weak learners [0.0]
We take weak learners as different random survival trees. We propose to maximize concordance in the right-censored setup to find the optimal parameters.
Our proposed formulations use two different norms, say, Max-norm and Frobenius norm, to find a proximity set of predictions from query points in the test dataset.
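The concordance objective this entry maximizes is Harrell's C-index for right-censored data; a minimal reference implementation helps make the pairing rule concrete. This sketch is a standard C-index computation, not the paper's cobra ensemble, and the toy data are assumptions.

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the earlier time is an observed
    event (events[i] == 1); it is concordant when the subject who
    failed earlier has the higher predicted risk. Ties in risk count
    as half-concordant.
    """
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([2.0, 4.0, 6.0, 8.0])
events = np.array([1, 1, 0, 1])          # 0 = right-censored
risk = np.array([0.9, 0.7, 0.4, 0.1])    # ranks exactly with failure order
c = concordance_index(times, events, risk)  # → 1.0
```

Note that censored subjects only ever appear as the later member of a comparable pair, which is why the censoring indicator gates the outer index.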
arXiv Detail & Related papers (2022-09-24T04:10:17Z) - X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To take the power of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z) - Time-to-event regression using partially monotonic neural networks [9.224121801193935]
We propose SuMo-net, that uses partially monotonic neural networks to learn a time-to-event distribution.
The method does not make assumptions about the true survival distribution and avoids computationally expensive integration of the hazard function.
arXiv Detail & Related papers (2021-03-26T22:34:57Z) - Evaluating Prediction-Time Batch Normalization for Robustness under
Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
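The idea behind prediction-time batch normalization is simply to normalize with statistics recomputed on the test batch instead of the running statistics accumulated during training. A minimal numpy sketch of that contrast, with an assumed synthetic covariate shift:

```python
import numpy as np

def batchnorm(x, mean, var, eps=1e-5):
    """Normalize activations with the given statistics."""
    return (x - mean) / np.sqrt(var + eps)

# Standard inference uses running statistics from training.
train_mean, train_var = 0.0, 1.0

# Under covariate shift, the test batch comes from a shifted distribution.
rng = np.random.default_rng(0)
test_batch = rng.normal(loc=3.0, scale=2.0, size=256)

# Running stats leave the activations off-centre and mis-scaled ...
out_running = batchnorm(test_batch, train_mean, train_var)
# ... while prediction-time BN recomputes stats on the test batch itself,
# re-centring what the downstream layers see.
out_batch = batchnorm(test_batch, test_batch.mean(), test_batch.var())
```

This is why the technique needs a reasonably sized test batch: the recomputed statistics are only reliable estimates when the batch is large enough, which is consistent with the mixed results the entry reports under more natural shifts.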
arXiv Detail & Related papers (2020-06-19T05:08:43Z) - Preventing Posterior Collapse with Levenshtein Variational Autoencoder [61.30283661804425]
We propose to replace the evidence lower bound (ELBO) with a new objective which is simple to optimize and prevents posterior collapse.
We show that Levenshtein VAE produces more informative latent representations than alternative approaches to preventing posterior collapse.
arXiv Detail & Related papers (2020-04-30T13:27:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.