Deep Learning for Survival Analysis: A Review
- URL: http://arxiv.org/abs/2305.14961v4
- Date: Thu, 22 Feb 2024 08:17:17 GMT
- Title: Deep Learning for Survival Analysis: A Review
- Authors: Simon Wiegrebe, Philipp Kopper, Raphael Sonabend, Bernd Bischl, and
Andreas Bender
- Abstract summary: The influx of deep learning (DL) techniques into the field of survival analysis has led to substantial methodological progress.
We conduct a systematic review of DL-based methods for time-to-event analysis, characterizing them according to both survival- and DL-related attributes.
- Score: 7.016568778869699
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The influx of deep learning (DL) techniques into the field of survival
analysis in recent years has led to substantial methodological progress; for
instance, learning from unstructured or high-dimensional data such as images,
text or omics data. In this work, we conduct a comprehensive systematic review
of DL-based methods for time-to-event analysis, characterizing them according
to both survival- and DL-related attributes. In summary, the reviewed methods
often address only a small subset of tasks relevant to time-to-event data -
e.g., single-risk right-censored data - and neglect to incorporate more complex
settings. Our findings are summarized in an editable, open-source, interactive
table: https://survival-org.github.io/DL4Survival. As this research area is
advancing rapidly, we encourage community contribution in order to keep this
database up to date.
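As a concrete illustration of the setting most reviewed methods target (single-risk, right-censored data), the sketch below fits a small neural network with the Cox partial likelihood. It is a minimal, generic PyTorch example, not any specific method from the review; the architecture, simulated data, and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class DeepSurvStyleNet(nn.Module):
    """Small MLP mapping covariates to a scalar log-risk score (illustrative)."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def neg_cox_partial_log_likelihood(log_risk, time, event):
    """Negative Cox partial log-likelihood for right-censored data
    (Breslow-style handling of ties); event is 1 = observed, 0 = censored."""
    order = torch.argsort(time, descending=True)        # latest times first
    log_risk, event = log_risk[order], event[order]
    log_risk_set = torch.logcumsumexp(log_risk, dim=0)  # log-sum over each risk set
    return -((log_risk - log_risk_set) * event).sum() / event.sum().clamp(min=1)

# Toy usage on simulated right-censored data.
torch.manual_seed(0)
x, time = torch.randn(64, 10), torch.rand(64)
event = (torch.rand(64) < 0.7).float()
model = DeepSurvStyleNet(n_features=10)
loss = neg_cox_partial_log_likelihood(model(x), time, event)
loss.backward()
```

Sorting subjects by time in descending order lets a cumulative log-sum-exp compute each subject's risk-set normalizer in a single pass.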
Related papers
- Deep End-to-End Survival Analysis with Temporal Consistency [49.77103348208835]
We present a novel Survival Analysis algorithm designed to efficiently handle large-scale longitudinal data.
A central idea in our method is temporal consistency, a hypothesis that past and future outcomes in the data evolve smoothly over time.
Our framework uniquely incorporates temporal consistency into large datasets by providing a stable training signal (a minimal sketch of such a smoothness penalty is given after the related-papers list).
arXiv Detail & Related papers (2024-10-09T11:37:09Z)
- Self-Supervised Learning for Text Recognition: A Critical Survey [11.599791967838481]
Text Recognition (TR) refers to the research area that focuses on retrieving textual information from images.
Self-Supervised Learning (SSL) has gained attention by utilizing large datasets of unlabeled data to train Deep Neural Networks (DNNs).
This paper seeks to consolidate the use of SSL in the field of TR, offering a critical and comprehensive overview of the current state of the art.
arXiv Detail & Related papers (2024-07-29T11:11:17Z)
- Large-Scale Dataset Pruning in Adversarial Training through Data Importance Extrapolation [1.3124513975412255]
We propose a new data pruning strategy based on extrapolating data importance scores from a small set of data to a larger set.
In an empirical evaluation, we demonstrate that extrapolation-based pruning can efficiently reduce dataset size while maintaining robustness.
arXiv Detail & Related papers (2024-06-19T07:23:51Z)
- Label-Efficient Deep Learning in Medical Image Analysis: Challenges and Future Directions [10.502964056448283]
Training models for medical image analysis (MIA) typically requires expensive and time-consuming collection of labeled data.
We extensively investigated over 300 recent papers to provide a comprehensive overview of progress on label-efficient learning strategies in MIA.
Specifically, we provide an in-depth investigation, covering not only canonical semi-supervised, self-supervised, and multi-instance learning schemes, but also recently emerged active and annotation-efficient learning strategies.
arXiv Detail & Related papers (2023-03-22T11:51:49Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Deeply-Learned Generalized Linear Models with Missing Data [6.302686933168439]
We provide a formal treatment of missing data in the context of deeply learned generalized linear models.
We propose a new architecture, dlglm, that is able to flexibly account for both ignorable and non-ignorable patterns of missingness.
We conclude with a case study of a Bank Marketing dataset from the UCI Machine Learning Repository.
arXiv Detail & Related papers (2022-07-18T20:00:13Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is ubiquitously high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component.
arXiv Detail & Related papers (2022-06-09T05:19:32Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Deep Learning Schema-based Event Extraction: Literature Review and Current Trends [60.29289298349322]
Event extraction technology based on deep learning has become a research hotspot.
This paper fills the gap by reviewing the state-of-the-art approaches, focusing on deep learning-based models.
arXiv Detail & Related papers (2021-07-05T16:32:45Z)
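The temporal-consistency entry above hinges on the hypothesis that outcomes evolve smoothly over time. The sketch below shows one simple way such a smoothness penalty could be written; it is a hypothetical illustration under assumptions (the function name, tensor shapes, and the 0.1 weight are invented), not the cited paper's actual algorithm.

```python
import torch

def temporal_consistency_penalty(risk_traj: torch.Tensor) -> torch.Tensor:
    """Mean squared first difference of predicted risk over ordered time steps.

    risk_traj: tensor of shape (batch, T), one risk trajectory per subject;
    penalizing the differences encourages predictions that evolve smoothly in time.
    """
    diffs = risk_traj[:, 1:] - risk_traj[:, :-1]  # consecutive-step changes
    return diffs.pow(2).mean()

# Toy usage: combine with a primary survival loss via a small weight.
risk_traj = torch.randn(8, 12, requires_grad=True)  # 8 subjects, 12 time steps
penalty = 0.1 * temporal_consistency_penalty(risk_traj)
penalty.backward()
```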
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.