LMT: Longitudinal Mixing Training, a Framework to Predict Disease
Progression from a Single Image
- URL: http://arxiv.org/abs/2310.10420v1
- Date: Mon, 16 Oct 2023 14:01:20 GMT
- Title: LMT: Longitudinal Mixing Training, a Framework to Predict Disease
Progression from a Single Image
- Authors: Rachid Zeghlache, Pierre-Henri Conze, Mostafa El Habib Daho, Yihao Li,
Hugo Le boite, Ramin Tadayoni, Pascal Massin, Béatrice Cochener, Ikram
Brahim, Gwenolé Quellec, and Mathieu Lamard
- Abstract summary: We introduce a new way to train time-aware models using $t_{mix}$, a weighted average time between two consecutive examinations.
We predict whether an eye would develop severe DR at the following visit using a single image, with an AUC of 0.798 compared to a baseline of 0.641.
- Score: 1.805673949640389
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Longitudinal imaging captures both static anatomical structures and
dynamic changes in disease progression, enabling earlier and better
patient-specific pathology management. However, conventional approaches rarely
take advantage of longitudinal information for detection and prediction
purposes, especially for Diabetic Retinopathy (DR). In the past years, Mix-up
training and pretext tasks with longitudinal context have effectively enhanced
DR classification results and captured disease progression. In the meantime, a
novel type of neural network named Neural Ordinary Differential Equation (NODE)
has been proposed for solving ordinary differential equations, with a neural
network treated as a black box. By definition, NODE is well suited for solving
time-related problems. In this paper, we propose to combine these three aspects
to detect and predict DR progression. Our framework, Longitudinal Mixing
Training (LMT), can be considered both as a regularizer and as a pretext task
that encodes the disease progression in the latent space. Additionally, we
evaluate the trained model weights on a downstream task with a longitudinal
context using standard and longitudinal pretext tasks. We introduce a new way
to train time-aware models using $t_{mix}$, a weighted average time between two
consecutive examinations. We compare our approach to standard mixing training
on DR classification using OPHDIAT, a longitudinal retinal Color Fundus
Photograph (CFP) dataset. We were able to predict whether an eye would develop
severe DR at the following visit using a single image, with an AUC of 0.798
compared to a baseline of 0.641. Our results indicate that our
longitudinal pretext task can learn the progression of DR and that
introducing $t_{mix}$ augmentation is beneficial for time-aware models.
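As a rough illustration of the $t_{mix}$ idea, the sketch below mixes two consecutive examinations of the same eye and derives the corresponding weighted-average time that a time-aware model would be conditioned on. This is a minimal sketch based only on the abstract: the function name, the Beta-distributed mixing weight, and the toy tensors are illustrative assumptions, not the authors' implementation.

```python
# Sketch of t_mix-style longitudinal mixing (assumptions noted below).
# The Beta prior on the mixing weight and all names are illustrative; the paper
# defines t_mix only as a weighted average time between two consecutive exams.
import torch


def longitudinal_mixup(x_i, x_j, t_i, t_j, alpha=0.2):
    """Mix two consecutive examinations of the same eye.

    x_i, x_j : images acquired at times t_i and t_j (e.g., years since baseline)
    Returns the mixed image, the matching weighted-average time t_mix, and the
    mixing weight lam, so a time-aware model sees a consistent (image, time) pair.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x_i + (1.0 - lam) * x_j   # standard mixup on the images
    t_mix = lam * t_i + (1.0 - lam) * t_j   # weighted average examination time
    return x_mix, t_mix, lam


if __name__ == "__main__":
    # Toy example: two CFP-sized tensors taken 1.5 "years" apart.
    x_i, x_j = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
    t_i, t_j = torch.tensor(0.0), torch.tensor(1.5)
    x_mix, t_mix, lam = longitudinal_mixup(x_i, x_j, t_i, t_j)
    print(x_mix.shape, float(t_mix), float(lam))
```

In the LMT setting, a pair like this would presumably be consumed by a time-aware model such as the NODE-based component, with $t_{mix}$ as its time input; the sketch covers only the data-mixing step.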
Related papers
- Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
arXiv Detail & Related papers (2024-10-28T15:54:50Z)
- 3DTINC: Time-Equivariant Non-Contrastive Learning for Predicting Disease Progression from Longitudinal OCTs [8.502838668378432]
We propose a new longitudinal self-supervised learning method, 3DTINC, based on non-contrastive learning.
It is designed to learn perturbation-invariant features for 3D optical coherence tomography (OCT) volumes, using augmentations specifically designed for OCT.
Our experiments show that this temporal information is crucial for predicting the progression of retinal diseases such as age-related macular degeneration (AMD).
arXiv Detail & Related papers (2023-12-28T11:47:12Z)
- tdCoxSNN: Time-Dependent Cox Survival Neural Network for Continuous-time Dynamic Prediction [19.38247205641199]
We propose a time-dependent Cox survival neural network (tdCoxSNN) to predict disease progression using longitudinal fundus images.
We evaluate and compare our proposed method with joint modeling and landmarking approaches through extensive simulations.
arXiv Detail & Related papers (2023-07-12T03:03:40Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Detection of diabetic retinopathy using longitudinal self-supervised learning [0.57961065199507]
We investigate the benefit of exploiting self-supervised learning with a longitudinal nature for diabetic retinopathy diagnosis purposes.
We compare different longitudinal self-supervised learning (LSSL) methods to model the disease progression from longitudinal retinal color fundus photographs.
Results achieve an AUC of 0.875 for the baseline (model trained from scratch) and an AUC of 0.96 with a p-value of 2.2e-16 on early fusion, using a simple ResNet-like architecture with frozen LSSL weights.
arXiv Detail & Related papers (2022-09-02T09:50:31Z)
- Metadata-enhanced contrastive learning from retinal optical coherence tomography images [7.932410831191909]
We extend conventional contrastive frameworks with a novel metadata-enhanced strategy.
Our approach employs widely available patient metadata to approximate the true set of inter-image contrastive relationships.
Our approach outperforms both standard contrastive methods and a retinal image foundation model in five out of six image-level downstream tasks.
arXiv Detail & Related papers (2022-08-04T08:53:15Z)
- SurvLatent ODE: A Neural ODE based time-to-event model with competing risks for longitudinal data improves cancer-associated Deep Vein Thrombosis (DVT) prediction [68.8204255655161]
We propose a generative time-to-event model, SurvLatent ODE, which parameterizes a latent representation under irregularly sampled data.
Our model then utilizes the latent representation to flexibly estimate survival times for multiple competing events without specifying the shapes of event-specific hazard functions.
SurvLatent ODE outperforms the current clinical standard Khorana Risk scores for stratifying DVT risk groups.
arXiv Detail & Related papers (2022-04-20T17:28:08Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- DeepRite: Deep Recurrent Inverse TreatmEnt Weighting for Adjusting Time-varying Confounding in Modern Longitudinal Observational Data [68.29870617697532]
We propose Deep Recurrent Inverse TreatmEnt weighting (DeepRite) for time-varying confounding in longitudinal data.
DeepRite is shown to recover the ground truth from synthetic data, and estimate unbiased treatment effects from real data.
arXiv Detail & Related papers (2020-10-28T15:05:08Z)
- Development and Validation of a Novel Prognostic Model for Predicting AMD Progression Using Longitudinal Fundus Images [6.258161719849178]
We propose a novel deep learning method to predict the progression of diseases using longitudinal imaging data with uneven time intervals.
We demonstrate our method on a longitudinal dataset of color fundus images from 4903 eyes with age-related macular degeneration (AMD).
Our method attains a testing sensitivity of 0.878, a specificity of 0.887, and an area under the receiver operating characteristic curve of 0.950.
arXiv Detail & Related papers (2020-07-10T00:33:19Z)
- 1-D Convolutional Neural Networks for the Analysis of Pupil Size Variations in Scotopic Conditions [79.71065005161566]
1-D convolutional neural network models are trained for the classification of short-range sequences.
The model provides predictions with high average accuracy on a held-out test set.
arXiv Detail & Related papers (2020-02-06T17:25:37Z)