Deep dynamic modeling with just two time points: Can we still allow for
individual trajectories?
- URL: http://arxiv.org/abs/2012.00634v1
- Date: Tue, 1 Dec 2020 16:58:02 GMT
- Title: Deep dynamic modeling with just two time points: Can we still allow for
individual trajectories?
- Authors: Maren Hackenberg, Philipp Harms, Thorsten Schmidt, Harald Binder
- Abstract summary: In epidemiological cohort studies and clinical registries, longitudinal biomedical data are often characterized by a sparse time grid.
Inspired by recent advances that allow combining deep learning with dynamic modeling, we investigate whether such approaches can be useful for uncovering complex structure.
We show that such dynamic deep learning approaches can be useful even in extreme small data settings, but need to be carefully adapted.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Longitudinal biomedical data are often characterized by a sparse time grid
and individual-specific development patterns. Specifically, in epidemiological
cohort studies and clinical registries we are facing the question of what can
be learned from the data in an early phase of the study, when only a baseline
characterization and one follow-up measurement are available. Inspired by
recent advances that allow combining deep learning with dynamic modeling, we
investigate whether such approaches can be useful for uncovering complex
structure, in particular for an extreme small data setting with only two
observation time points per individual. Irregular spacing in time could
then be used to gain more information on individual dynamics by leveraging
similarity of individuals. We provide a brief overview of how variational
autoencoders (VAEs), as a deep learning approach, can be linked to ordinary
differential equations (ODEs) for dynamic modeling, and then specifically
investigate the feasibility of such an approach that infers individual-specific
latent trajectories by including regularity assumptions and individuals'
similarity. We also provide a description of this deep learning approach as a
filtering task to give a statistical perspective. Using simulated data, we show
to what extent the approach can recover individual trajectories from ODE
systems with two and four unknown parameters and infer groups of individuals
with similar trajectories, and where it breaks down. The results show that such
dynamic deep learning approaches can be useful even in extreme small data
settings, but need to be carefully adapted.
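To make the described VAE-ODE linkage concrete, here is a minimal sketch, not the authors' implementation: an encoder maps the baseline measurement to individual-specific ODE parameters, the latent state is evolved to the individual's follow-up time by solving a simple linear ODE analytically, and a decoder reconstructs both measurements. Network sizes, the example ODE, and the fixed initial latent state are illustrative assumptions.

```python
# Sketch of a VAE whose latent code parameterizes an ODE, trained on pairs of
# baseline and follow-up measurements with individual-specific time gaps.
import torch
import torch.nn as nn

class TwoTimePointVAEODE(nn.Module):
    def __init__(self, data_dim=10, latent_dim=2, hidden=32):
        super().__init__()
        # encoder: baseline observation -> distribution over ODE parameters
        self.encoder = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # decoder: latent state at a given time -> reconstructed observation
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, data_dim))

    def ode_solution(self, z0, theta, t):
        # illustrative linear ODE dz/dt = theta * z with analytic solution;
        # the simulated systems in the paper (2 and 4 unknown parameters) differ
        return z0 * torch.exp(theta * t.unsqueeze(-1))

    def forward(self, x0, x1, dt):
        # encode the baseline measurement into individual-specific ODE parameters
        h = self.encoder(x0)
        mu, logvar = self.mu(h), self.logvar(h)
        theta = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        z0 = torch.ones_like(theta)             # fixed initial latent state (assumption)
        z1 = self.ode_solution(z0, theta, dt)   # evolve to the individual follow-up time
        recon = ((self.decoder(z0) - x0) ** 2).mean() + ((self.decoder(z1) - x1) ** 2).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
        return recon + kl                       # negative-ELBO-style training loss

# toy usage: 8 individuals, 10-dimensional measurements, individual time gaps
model = TwoTimePointVAEODE()
loss = model(torch.randn(8, 10), torch.randn(8, 10), torch.rand(8))
```

Because only two time points per individual are observed, individual-specific information enters solely through the encoded ODE parameters, and the irregular follow-up times `dt` differ across individuals, as discussed in the abstract.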
Related papers
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
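As a rough illustration of the weight-normalization idea in the entry above, here is a sketch using the standard reparameterization w = g * v/||v|| for a single-index model on anisotropic data; the cited paper's exact normalization may differ, and the data, target, and hyperparameters are assumptions.

```python
# Illustrative sketch (not the cited paper's exact scheme): online gradient
# training of a single-index model y = tanh(<w, x>) with weight normalization.
import numpy as np

rng = np.random.default_rng(0)
d, n, lr = 20, 5000, 0.05

scales = np.ones(d); scales[0] = 5.0            # anisotropic inputs: one spiked direction
X = rng.normal(size=(n, d)) * scales
w_star = rng.normal(size=d); w_star /= np.linalg.norm(w_star)
y = np.tanh(X @ w_star)                         # single-index target

v = rng.normal(size=d); g = 1.0                 # weight-normalized parameters
for x_i, y_i in zip(X, y):
    norm_v = np.linalg.norm(v)
    w = g * v / norm_v                          # effective weight
    pred = np.tanh(x_i @ w)
    err = pred - y_i
    grad_w = err * (1 - pred ** 2) * x_i        # dL/dw for squared loss
    # chain rule through the reparameterization w(g, v)
    grad_g = grad_w @ (v / norm_v)
    grad_v = (g / norm_v) * (grad_w - (grad_w @ v / norm_v ** 2) * v)
    g -= lr * grad_g
    v -= lr * grad_v

w = g * v / np.linalg.norm(v)
print("alignment with true direction:", abs(w @ w_star) / np.linalg.norm(w))
```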
- Learning Latent Dynamics via Invariant Decomposition and (Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- T-Phenotype: Discovering Phenotypes of Predictive Temporal Patterns in Disease Progression [82.85825388788567]
We develop a novel temporal clustering method, T-Phenotype, to discover phenotypes of predictive temporal patterns from labeled time-series data.
We show that T-Phenotype achieves the best phenotype discovery performance over all the evaluated baselines.
arXiv Detail & Related papers (2023-02-24T13:30:35Z)
- Integrating Multimodal Data for Joint Generative Modeling of Complex Dynamics [6.848555909346641]
We provide an efficient framework to combine various sources of information for optimal reconstruction.
Our framework is fully generative, producing, after training, trajectories with the same geometrical and temporal structure as those of the ground truth system.
arXiv Detail & Related papers (2022-12-15T15:21:28Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing together recent results on equivariant representation learning on structured spaces with classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Deep learning and differential equations for modeling changes in individual-level latent dynamics between observation periods [0.0]
We propose an extension where different sets of differential equation parameters are allowed for observation sub-periods.
We derive prediction targets from individual dynamic models of resilience in the application.
Our approach is seen to successfully identify individual-level parameters of dynamic models, allowing us to stably select predictors.
arXiv Detail & Related papers (2022-02-15T13:53:42Z)
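The entry above allows different ODE parameter sets per observation sub-period; a minimal sketch of that idea with an illustrative linear ODE and hand-picked parameter values (not the paper's model):

```python
# Sketch: evolving a scalar latent state with a different ODE parameter set in
# each observation sub-period (linear ODE dz/dt = a*z + b, simple Euler steps).
import numpy as np

def evolve(z0, params_per_period, period_lengths, dt=0.01):
    """Integrate dz/dt = a*z + b, switching (a, b) at each sub-period boundary."""
    z, trajectory = z0, [z0]
    for (a, b), length in zip(params_per_period, period_lengths):
        for _ in range(int(length / dt)):
            z = z + dt * (a * z + b)            # Euler step with this period's parameters
            trajectory.append(z)
    return np.array(trajectory)

# e.g. decay before an intervention, recovery afterwards (illustrative values)
traj = evolve(z0=1.0,
              params_per_period=[(-0.8, 0.0), (0.5, 0.2)],
              period_lengths=[2.0, 2.0])
print(traj[0], traj[len(traj) // 2], traj[-1])
```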
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete with and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
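As a rough illustration of learning summary features from time series, here is a small 1D convolutional summary network in PyTorch; the architecture and dimensions are assumptions, not the cited paper's design:

```python
# Sketch: a small 1D-CNN mapping a raw time series to a low-dimensional summary
# vector, which could replace hand-crafted summary statistics in an LFI pipeline.
import torch
import torch.nn as nn

class SummaryNet(nn.Module):
    def __init__(self, n_summaries=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over time -> fixed-size features
        )
        self.head = nn.Linear(32, n_summaries)

    def forward(self, x):                       # x: (batch, 1, time)
        return self.head(self.conv(x).squeeze(-1))

summaries = SummaryNet()(torch.randn(4, 1, 200))
print(summaries.shape)                          # torch.Size([4, 8])
```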
- Learning Realistic Patterns from Unrealistic Stimuli: Generalization and Data Anonymization [0.5091527753265949]
This work investigates a simple yet unconventional approach for anonymized data synthesis to enable third parties to benefit from such private data.
We use sleep monitoring data from both an open and a large closed clinical study and evaluate whether (1) end-users can create and successfully use customized classification models for sleep apnea detection, and (2) the identity of participants in the study is protected.
arXiv Detail & Related papers (2020-09-21T16:31:21Z)
- Penalized Estimation and Forecasting of Multiple Subject Intensive Longitudinal Data [7.780531445879182]
We present a novel modeling framework that addresses a number of topical challenges and open questions in the psychological literature on modeling dynamic processes.
First, how can we model and forecast ILD when the length of individual time series and the number of variables collected are roughly equivalent?
Second, how can we best take advantage of the cross-sectional (between-person) information inherent to most ILD scenarios while acknowledging individuals differ both quantitatively and qualitatively?
arXiv Detail & Related papers (2020-07-09T20:34:23Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
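A minimal sketch of how a graph learning module can extract uni-directed relations from learnable node embeddings, in the spirit of the entry above; the antisymmetric score construction and top-k sparsification are illustrative assumptions rather than the paper's exact formulation:

```python
# Sketch: learnable node embeddings produce a sparse, uni-directed adjacency
# matrix; the antisymmetric scores make A[i, j] and A[j, i] compete, so at most
# one direction survives per node pair. Hyperparameters are illustrative.
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    def __init__(self, n_nodes, emb_dim=16, k=3):
        super().__init__()
        self.emb1 = nn.Parameter(torch.randn(n_nodes, emb_dim))
        self.emb2 = nn.Parameter(torch.randn(n_nodes, emb_dim))
        self.k = k                              # keep only k strongest neighbors per node

    def forward(self):
        scores = self.emb1 @ self.emb2.T - self.emb2 @ self.emb1.T  # antisymmetric scores
        adj = torch.relu(torch.tanh(scores))    # non-negative, uni-directed edge weights
        topk = torch.topk(adj, self.k, dim=1)   # sparsify: top-k outgoing edges per node
        mask = torch.zeros_like(adj).scatter_(1, topk.indices, 1.0)
        return adj * mask

adj = GraphLearner(n_nodes=6)()
print(adj.shape)                                # torch.Size([6, 6])
```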