Longitudinal Self-supervised Learning Using Neural Ordinary Differential
Equation
- URL: http://arxiv.org/abs/2310.10431v1
- Date: Mon, 16 Oct 2023 14:16:04 GMT
- Title: Longitudinal Self-supervised Learning Using Neural Ordinary Differential
Equation
- Authors: Rachid Zeghlache, Pierre-Henri Conze, Mostafa El Habib Daho, Yihao Li,
Hugo Le Boité, Ramin Tadayoni, Pascal Massin, Béatrice Cochener, Ikram
Brahim, Gwenolé Quellec, Mathieu Lamard
- Abstract summary: In recent years, a novel class of algorithms has emerged with the goal of learning disease progression in a self-supervised manner.
By capturing temporal patterns without external labels or supervision, longitudinal self-supervised learning has become a promising avenue.
This paper aims to provide a better understanding of these core algorithms for learning disease progression under the explored variants (a Siamese-like LSSL and LSSL combined with a neural ODE).
- Score: 1.8594165055074698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Longitudinal analysis in medical imaging is crucial to investigate the
progressive changes in anatomical structures or disease progression over time.
In recent years, a novel class of algorithms has emerged with the goal of
learning disease progression in a self-supervised manner, using either pairs of
consecutive images or time series of images. By capturing temporal patterns
without external labels or supervision, longitudinal self-supervised learning
(LSSL) has become a promising avenue. To better understand this core method, we
explore in this paper the LSSL algorithm under different scenarios. The
original LSSL is embedded in an auto-encoder (AE) structure. However,
conventional self-supervised strategies are usually implemented in a
Siamese-like manner. Therefore, as a first novelty, this study explores the
use of a Siamese-like LSSL. A second core framework is the neural ordinary
differential equation (NODE), an architecture that learns the dynamics of an
ordinary differential equation (ODE) with a neural network. Many temporal
systems, including disease progression, can be described by an ODE, and we
believe there is an interesting connection to be made between LSSL and NODE.
This paper therefore aims to provide a better understanding of these core
algorithms for learning disease progression under the explored variants.
Across our experiments, we employ a longitudinal dataset, named OPHDIAT,
targeting diabetic retinopathy (DR) follow-up. Our results demonstrate that
LSSL can be applied without a reconstruction term and show the potential of
combining NODE with LSSL.
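To make the LSSL-NODE connection concrete, here is a minimal PyTorch sketch (our illustration under assumed names and shapes, not the authors' code): a small network parameterizes dz/dt, and a fixed-step Euler integrator advances the latent code of an earlier visit across the inter-visit time gap.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes the latent dynamics dz/dt = f_theta(z)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def odeint_euler(func: nn.Module, z0: torch.Tensor, dt: torch.Tensor, steps: int = 10) -> torch.Tensor:
    """Advance z0 by a per-sample time gap dt with fixed-step Euler integration."""
    h = dt / steps                        # dt: (batch, 1), time between visits
    z = z0
    for _ in range(steps):
        z = z + h * func(z)
    return z

# Usage: predict the later latent code from the earlier one and the visit gap;
# in an LSSL-style setup, z_t2_hat would be compared to the encoded later scan.
func = ODEFunc(dim=32)
z_t1 = torch.randn(8, 32)                 # hypothetical latent codes of earlier exams
dt = torch.rand(8, 1)                     # normalized time between consecutive exams
z_t2_hat = odeint_euler(func, z_t1, dt)
```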
Related papers
- LaTiM: Longitudinal representation learning in continuous-time models to predict disease progression [2.663690023739801]
This work proposes a novel framework for analyzing disease progression using time-aware neural ordinary differential equations (NODE).
We introduce a "time-aware head" in a framework trained through self-supervised learning (SSL) to leverage temporal information in latent space for data augmentation.
We demonstrate the effectiveness of our strategy for diabetic retinopathy progression prediction using the OPHDIAT database.
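As a rough sketch of what a "time-aware head" could look like (our assumption; the LaTiM paper's actual design may differ), a module can condition a latent projection on the elapsed time since the previous exam:

```python
import torch
import torch.nn as nn

class TimeAwareHead(nn.Module):
    """Projects a latent code conditioned on the elapsed time since the last exam."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim + 1, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z: torch.Tensor, dt: torch.Tensor) -> torch.Tensor:
        # dt: (batch, 1) normalized time gap, concatenated to the latent code.
        return self.proj(torch.cat([z, dt], dim=-1))

head = TimeAwareHead(dim=32)
z_aug = head(torch.randn(8, 32), torch.rand(8, 1))  # time-conditioned view of z
```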
arXiv Detail & Related papers (2024-04-10T15:29:29Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
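One common way for a recurrent model to account for irregular observations is to decay the hidden state by the elapsed time between samples, in the spirit of GRU-D or ODE-RNN; the sketch below is illustrative, not the paper's exact CTRNN.

```python
import torch
import torch.nn as nn

class DecayRNNCell(nn.Module):
    """GRU cell whose hidden state decays exponentially between observations."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hid_dim)
        self.log_rate = nn.Parameter(torch.zeros(hid_dim))  # learned decay rates

    def forward(self, x: torch.Tensor, h: torch.Tensor, dt: torch.Tensor) -> torch.Tensor:
        # Decay the state by the elapsed time dt, then absorb the new observation.
        h = h * torch.exp(-torch.exp(self.log_rate) * dt)
        return self.cell(x, h)

cell = DecayRNNCell(in_dim=4, hid_dim=16)
h = torch.zeros(2, 16)
x = torch.randn(2, 4)                     # e.g. glucose value plus covariates
dt = torch.tensor([[0.5], [2.0]])         # hours since each patient's last sample
h = cell(x, h, dt)
```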
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Temporal Feature Alignment in Contrastive Self-Supervised Learning for
Human Activity Recognition [2.2082422928825136]
Self-supervised learning is typically used to learn deep feature representations from unlabeled data.
We propose integrating a dynamic time warping algorithm in a latent space to force features to be aligned in a temporal dimension.
The proposed approach has great potential for learning robust feature representations compared to recent SSL baselines.
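For reference, a plain dynamic-time-warping distance between two latent sequences looks as follows; this is the standard dynamic-programming formulation, shown as a stand-in for the paper's alignment term, which may use a differentiable soft-DTW variant.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (T1, D) and b: (T2, D) latent sequences; returns the alignment cost."""
    T1, T2 = len(a), len(b)
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-wise distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[T1, T2])

print(dtw_distance(np.random.randn(10, 8), np.random.randn(12, 8)))
```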
arXiv Detail & Related papers (2022-10-07T07:51:01Z) - Neural Laplace: Learning diverse classes of differential equations in
the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model it in the Laplace domain, where the history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
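A compact way to see the Laplace-domain idea: a trajectory that is a sum of complex exponentials in time becomes a sum of simple poles in the Laplace domain, so history dependence reduces to algebra over s.

```latex
% Complex exponentials in time <-> simple poles in the Laplace domain:
f(t) = \sum_{k} c_k \, e^{s_k t}
\quad\Longleftrightarrow\quad
F(s) = \mathcal{L}\{f\}(s) = \sum_{k} \frac{c_k}{s - s_k}.
```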
arXiv Detail & Related papers (2022-06-10T02:14:59Z) - On the balance between the training time and interpretability of neural
ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODE cannot be reduced to simpler models for time-series modelling applications.
The complexity of neural ODEs is comparable to or exceeds that of conventional time-series modelling tools.
We propose a new view on time-series modelling using combined neural networks and an ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z) - Combining Recurrent, Convolutional, and Continuous-time Models with
Linear State-Space Layers [21.09321438439848]
We introduce a simple sequence model, the Linear State-Space Layer (LSSL), inspired by control systems, that generalizes recurrent, convolutional, and continuous-time models.
We show that LSSL models are closely related to the three aforementioned families of models and inherit their strengths.
For example, they generalize convolutions to continuous-time, explain common RNN heuristics, and share features of NDEs such as time-scale adaptation.
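For reference, the classical continuous-time linear state-space model that such layers build on (standard control-systems form; the paper's specific parameterization is omitted):

```latex
% State x, input u, output y:
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t).
```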
arXiv Detail & Related papers (2021-10-26T19:44:53Z) - Differentiable Multiple Shooting Layers [18.37758865401204]
Multiple Shooting Layers (MSLs) seek solutions of initial value problems via parallelizable root-finding algorithms.
We develop the algorithmic framework of MSLs, analyzing the different choices of solution methods from a theoretical and computational perspective.
We investigate the speedups obtained through application of MSL inference in neural controlled differential equations (Neural CDEs) for time series classification of medical data.
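In the standard multiple-shooting formulation (our notation, not necessarily the paper's), the integration interval is split into segments with guessed initial states, each segment is integrated independently, and continuity is enforced by root finding:

```latex
% Split [t_0, t_N] into segments and introduce guesses z_i \approx x(t_i).
% Each segment is integrated independently by the ODE flow \phi, and the
% continuity conditions are solved as a root-finding problem over the guesses:
g_i(z) = z_{i+1} - \phi(z_i;\, t_i, t_{i+1}) = 0, \qquad i = 0, \dots, N-1.
```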
arXiv Detail & Related papers (2021-06-07T18:05:44Z) - Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large amounts of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z) - Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model the data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z) - Longitudinal Self-Supervised Learning [13.094393751939837]
Ground-truth labels are often missing or expensive to obtain in neuroscience.
We propose a new definition of disentanglement by formulating a multivariate mapping between factors associated with an MRI and a latent image representation.
We implement this model, named Longitudinal Self-Supervised Learning (LSSL), via a standard autoencoding structure with a cosine loss to disentangle brain age from the image representation.
arXiv Detail & Related papers (2020-06-12T03:35:17Z)