LaTiM: Longitudinal representation learning in continuous-time models to predict disease progression
- URL: http://arxiv.org/abs/2404.07091v1
- Date: Wed, 10 Apr 2024 15:29:29 GMT
- Title: LaTiM: Longitudinal representation learning in continuous-time models to predict disease progression
- Authors: Rachid Zeghlache, Pierre-Henri Conze, Mostafa El Habib Daho, Yihao Li, Hugo Le Boité, Ramin Tadayoni, Pascal Massin, Béatrice Cochener, Alireza Rezaei, Ikram Brahim, Gwenolé Quellec, Mathieu Lamard
- Abstract summary: This work proposes a novel framework for analyzing disease progression using time-aware neural ordinary differential equations (NODE)
We introduce a "time-aware head" in a framework trained through self-supervised learning (SSL) to leverage temporal information in latent space for data augmentation.
We demonstrate the effectiveness of our strategy for diabetic retinopathy progression prediction using the OPHDIAT database.
- Score: 2.663690023739801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work proposes a novel framework for analyzing disease progression using time-aware neural ordinary differential equations (NODE). We introduce a "time-aware head" in a framework trained through self-supervised learning (SSL) to leverage temporal information in latent space for data augmentation. This approach effectively integrates NODEs with SSL, offering significant performance improvements compared to traditional methods that lack explicit temporal integration. We demonstrate the effectiveness of our strategy for diabetic retinopathy progression prediction using the OPHDIAT database. Compared to the baseline, all NODE architectures achieve statistically significant improvements in area under the ROC curve (AUC) and Kappa metrics, highlighting the efficacy of pre-training with SSL-inspired approaches. Additionally, our framework promotes stable training for NODEs, a commonly encountered challenge in time-aware modeling.
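The abstract describes advancing a latent representation through time with a NODE but gives no implementation details. As an illustrative sketch only (not the paper's architecture), a NODE evolves a latent embedding z from examination time t0 to t1 by integrating a learned dynamics function; here a fixed-step Euler integrator and a toy linear-decay dynamics stand in for the trained network and the adaptive solvers used in practice:

```python
import numpy as np

def latent_ode_euler(z0, f, t0, t1, steps=100):
    """Integrate dz/dt = f(z, t) from t0 to t1 with fixed-step Euler.

    z0: initial latent representation (e.g. an image embedding)
    f:  dynamics function (a trained network in a real NODE)
    """
    z = np.asarray(z0, dtype=float)
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        z = z + dt * f(z, t)  # one Euler step
        t += dt
    return z

# Toy dynamics: linear decay, dz/dt = -0.5 * z (stands in for a trained model).
f = lambda z, t: -0.5 * z
z0 = np.array([1.0, 2.0])
z1 = latent_ode_euler(z0, f, t0=0.0, t1=1.0)
# For linear decay the exact solution is z0 * exp(-0.5 * (t1 - t0)),
# so z1 should be close to that value.
```

Real NODE implementations typically use adaptive ODE solvers and the adjoint method for memory-efficient backpropagation rather than a fixed Euler loop.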
Related papers
- TE-SSL: Time and Event-aware Self Supervised Learning for Alzheimer's Disease Progression Analysis [6.6584447062231895]
Alzheimer's Dementia (AD) represents one of the most pressing challenges in the field of neurodegenerative disorders.
Recent advancements in deep learning and various representation learning strategies, including self-supervised learning (SSL), have shown significant promise in enhancing medical image analysis.
We propose a novel framework, Time and Event-aware SSL (TE-SSL), which integrates time-to-event and event data as supervisory signals to refine the learning process.
arXiv Detail & Related papers (2024-07-09T13:41:32Z)
- Normalization and effective learning rates in reinforcement learning [52.59508428613934]
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature.
We show that normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate.
We propose to make the learning rate schedule explicit with a simple reparameterization which we call Normalize-and-Project.
arXiv Detail & Related papers (2024-07-01T20:58:01Z)
- Deep Learning to Predict Glaucoma Progression using Structural Changes in the Eye [0.20718016474717196]
Glaucoma is a chronic eye disease characterized by optic neuropathy, leading to irreversible vision loss.
Early detection is crucial to monitor atrophy and develop treatment strategies to prevent further vision impairment.
In this study, we use deep learning models to identify complex disease traits and progression criteria.
arXiv Detail & Related papers (2024-06-09T01:12:41Z)
- Longitudinal Self-supervised Learning Using Neural Ordinary Differential Equation [1.8594165055074698]
In recent years, a novel class of algorithms has emerged with the goal of learning disease progression in a self-supervised manner.
By capturing temporal patterns without external labels or supervision, longitudinal self-supervised learning has become a promising avenue.
This paper aims to provide a better understanding of these core algorithms for learning disease progression.
arXiv Detail & Related papers (2023-10-16T14:16:04Z)
- LMT: Longitudinal Mixing Training, a Framework to Predict Disease Progression from a Single Image [1.805673949640389]
We introduce a new way to train time-aware models using $t_{mix}$, a weighted average of the times of two consecutive examinations.
We predict whether an eye will develop severe DR by the following visit using a single image, achieving an AUC of 0.798 compared to a baseline of 0.641.
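The summary only defines $t_{mix}$ as a weighted average of two examination times. A minimal mixup-style sketch, under the assumption (not stated here) that the paired images are blended with the same weight, might look like:

```python
import numpy as np

def longitudinal_mix(x_a, t_a, x_b, t_b, lam):
    """Mixup-style blend of two consecutive examinations.

    t_mix is the weighted average of the two visit times, as in the summary;
    mixing the images with the same weight lam is an illustrative assumption,
    not necessarily the paper's exact recipe.
    """
    x_mix = lam * x_a + (1.0 - lam) * x_b
    t_mix = lam * t_a + (1.0 - lam) * t_b
    return x_mix, t_mix

rng = np.random.default_rng(0)
x_a, x_b = rng.random((8, 8)), rng.random((8, 8))  # two toy "fundus images"
x_mix, t_mix = longitudinal_mix(x_a, 0.0, x_b, 12.0, lam=0.25)
# t_mix = 0.25 * 0 + 0.75 * 12 = 9.0 (e.g. months between examinations)
```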
arXiv Detail & Related papers (2023-10-16T14:01:20Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Understanding and Improving the Role of Projection Head in Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
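The InfoNCE objective mentioned above treats matching pairs of projected embeddings as positives and all other pairs in the batch as negatives. A minimal numpy sketch (illustrative only; real pipelines use a backbone plus projection head and a framework loss) is:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Minimal InfoNCE: row i of z1 should match row i of z2.

    z1, z2: (N, D) projection-head outputs for two augmented views.
    tau:    temperature scaling the cosine similarities.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))
loss_same = info_nce(z, z)                        # identical views: low loss
loss_rand = info_nce(z, rng.normal(size=(4, 16))) # unrelated views: higher loss
```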
arXiv Detail & Related papers (2022-12-22T05:42:54Z)
- Benchmarking Self-Supervised Learning on Diverse Pathology Datasets [10.868779327544688]
Self-supervised learning has been shown to be an effective method for utilizing unlabeled data.
We execute the largest-scale study of SSL pre-training on pathology image data.
For the first time, we apply SSL to the challenging task of nuclei instance segmentation.
arXiv Detail & Related papers (2022-12-09T06:38:34Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models from ImageNet pretraining report a significant increase in performance, generalization and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.