Continuous Forecasting via Neural Eigen Decomposition of Stochastic Dynamics
- URL: http://arxiv.org/abs/2202.00117v2
- Date: Wed, 2 Feb 2022 13:16:48 GMT
- Title: Continuous Forecasting via Neural Eigen Decomposition of Stochastic Dynamics
- Authors: Stav Belogolovsky, Ido Greenberg, Danny Eitan and Shie Mannor
- Abstract summary: We introduce the Neural Eigen-SDE (NESDE) algorithm for sequential prediction with sparse observations and adaptive dynamics.
NESDE applies eigen-decomposition to the dynamics model to allow efficient frequent predictions given sparse observations.
We are the first to provide a patient-adapted prediction for blood coagulation following Heparin dosing in the MIMIC-IV dataset.
- Score: 47.82509795873254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by a real-world problem of blood coagulation control in
Heparin-treated patients, we use Stochastic Differential Equations (SDEs) to
formulate a new class of sequential prediction problems -- with an unknown
latent space, unknown non-linear dynamics, and irregular sparse observations.
We introduce the Neural Eigen-SDE (NESDE) algorithm for sequential prediction
with sparse observations and adaptive dynamics. NESDE applies
eigen-decomposition to the dynamics model to allow efficient frequent
predictions given sparse observations. In addition, NESDE uses a learning
mechanism for an adaptive dynamics model, which handles changes in the dynamics
both between sequences and within sequences. We demonstrate the accuracy and
efficacy of NESDE for both synthetic problems and real-world data. In
particular, to the best of our knowledge, we are the first to provide a
patient-adapted prediction for blood coagulation following Heparin dosing in
the MIMIC-IV dataset. Finally, we publish a simulated gym environment based on
our prediction model, for experimentation with algorithms for blood coagulation
control.
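To make the eigen-decomposition idea concrete, below is a minimal sketch assuming linear latent dynamics dx/dt = Ax (the mean of a linear SDE); the function name and toy values are illustrative, not the authors' implementation. Decomposing A = V diag(lam) V^-1 once yields the closed form x(t) = V diag(exp(lam*t)) V^-1 x(0), so the state can be propagated to arbitrary, irregular times without an ODE solver.
```python
# Minimal sketch (not the NESDE code): closed-form prediction for linear
# latent dynamics dx/dt = A x via the eigen-decomposition A = V diag(lam) V^-1,
# so that x(t) = V diag(exp(lam * t)) V^-1 x(0).
import numpy as np

def eigen_predict(A, x0, ts):
    """Propagate x0 through dx/dt = A x to each time in ts."""
    lam, V = np.linalg.eig(A)      # eigenvalues may be complex
    z0 = np.linalg.inv(V) @ x0     # move to the eigenbasis once
    # Each eigen-coordinate evolves independently: z_i(t) = exp(lam_i t) z_i(0)
    return np.stack([np.real(V @ (np.exp(lam * t) * z0)) for t in ts])

A = np.array([[-0.5, 1.0], [-1.0, -0.5]])  # toy stable dynamics
x0 = np.array([1.0, 0.0])
print(eigen_predict(A, x0, ts=[0.0, 0.3, 1.7, 4.2]))  # irregular query times
```
After the one-time decomposition, each additional prediction costs only a few matrix-vector products, which is what makes frequent predictions between sparse observations cheap.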
Related papers
- Combined Optimization of Dynamics and Assimilation with End-to-End Learning on Sparse Observations [1.492574139257933]
CODA is an end-to-end optimization scheme for jointly learning dynamics and data assimilation (DA) directly from sparse and noisy observations.
We introduce a novel learning objective that combines unrolled auto-regressive dynamics with the data- and self-consistency terms of weak-constraint 4D-Var DA.
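A hedged sketch of what such an objective can look like (the names and weights below are assumptions, not CODA's API): candidate analysis states are scored on how well they match the sparse observations and on how consistent consecutive states are with the learned one-step dynamics, as in weak-constraint 4D-Var, with both terms minimized jointly over the states and the dynamics parameters.
```python
# Illustrative CODA-style objective (assumed names, not the paper's code).
import numpy as np

def coda_style_loss(dynamics, xs, obs, obs_mask, w_data=1.0, w_self=0.1):
    """xs: candidate trajectory of analysis states, shape (T, d), treated as
    free variables; dynamics: learned one-step forecast model."""
    data_term = np.mean((obs_mask * (xs - obs)) ** 2)     # fit sparse obs
    steps = np.stack([dynamics(x) for x in xs[:-1]])      # one-step forecasts
    self_term = np.mean((xs[1:] - steps) ** 2)            # weak-constraint term
    return w_data * data_term + w_self * self_term

rng = np.random.default_rng(0)
T, d = 10, 3
xs, obs = rng.normal(size=(T, d)), rng.normal(size=(T, d))
mask = (rng.random((T, d)) < 0.3).astype(float)           # sparse observations
print(coda_style_loss(lambda x: 0.95 * x, xs, obs, mask))
```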
arXiv Detail & Related papers (2024-09-11T09:36:15Z)
- Individualized Dosing Dynamics via Neural Eigen Decomposition [51.62933814971523]
We introduce the Neural Eigen Differential Equation algorithm (NESDE).
NESDE provides individualized modeling, tunable generalization to new treatment policies, and fast, continuous, closed-form prediction.
We demonstrate the robustness of NESDE in both synthetic and real medical problems, and use the learned dynamics to publish simulated medical gym environments.
arXiv Detail & Related papers (2023-06-24T17:01:51Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
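A rough sketch of this recipe (names are illustrative, not the paper's code): once a network has been trained to map model parameters to simulated measurements, unknown parameters are recovered from data by gradient descent through the frozen surrogate.
```python
# Parameter recovery by autodiff through a trained, frozen surrogate (sketch).
import torch

def recover_params(surrogate, measured, p_init, steps=500, lr=1e-2):
    p = p_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([p], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((surrogate(p) - measured) ** 2)
        loss.backward()            # gradients flow through the surrogate
        opt.step()
    return p.detach()

# Toy stand-in surrogate: maps 2 parameters to a 64-point "spectrum".
surrogate = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, 64))
measured = surrogate(torch.tensor([0.3, -1.2])).detach()  # synthetic target
print(recover_params(surrogate, measured, p_init=torch.zeros(2)))
```
Because the trained surrogate is differentiable and cheap to evaluate, this inner fit can run in real time, whereas re-running the original simulator for each candidate parameter could not.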
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series [18.885471782270375]
NCDSSM employs auxiliary variables to disentangle recognition from dynamics, thus requiring amortized inference only for the auxiliary variables.
We propose three flexible parameterizations of the latent dynamics and an efficient training objective that marginalizes the dynamic states during inference.
Empirical results on multiple benchmark datasets show improved imputation and forecasting performance of NCDSSM over existing models.
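The marginalization can be illustrated with the classical Kalman filter (a toy discrete-time stand-in, not the paper's continuous-discrete machinery): when the dynamic states are linear-Gaussian, their contribution to the likelihood is available in closed form, so no samples of the dynamic states are needed.
```python
# Exact marginal log-likelihood of observations under linear-Gaussian
# dynamics: x_t = A x_{t-1} + N(0, Q), y_t = C x_t + N(0, R).
import numpy as np

def kalman_loglik(A, C, Q, R, ys):
    d = A.shape[0]
    mu, P = np.zeros(d), np.eye(d)
    ll = 0.0
    for y in ys:
        mu, P = A @ mu, A @ P @ A.T + Q             # predict
        S = C @ P @ C.T + R                         # innovation covariance
        resid = y - C @ mu
        ll += -0.5 * (resid @ np.linalg.solve(S, resid)
                      + np.linalg.slogdet(2 * np.pi * S)[1])
        K = P @ C.T @ np.linalg.inv(S)              # update
        mu, P = mu + K @ resid, (np.eye(d) - K @ C) @ P
    return ll

A, C = np.array([[0.9]]), np.array([[1.0]])
Q = R = np.array([[0.1]])
print(kalman_loglik(A, C, Q, R, ys=[np.array([0.2]), np.array([0.1])]))
```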
arXiv Detail & Related papers (2023-01-26T18:45:04Z)
- Sparsity in Continuous-Depth Neural Networks [2.969794498016257]
We study the influence of weight and feature sparsity on forecasting and on identifying the underlying dynamical laws.
We curate real-world datasets consisting of human motion capture and human hematopoiesis single-cell RNA-seq data.
arXiv Detail & Related papers (2022-10-26T12:48:12Z)
- Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
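One common way to combine the two, sketched below under simplifying assumptions (a scalar network and the toy residual dI/dt = (beta - gamma) * I, not EINN's actual architecture): the loss adds a data-fit term on observed case counts to the residual of a mechanistic epidemic ODE evaluated at collocation times.
```python
# PINN-style hybrid loss: data fit + mechanistic ODE residual (sketch only).
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))   # I(t): predicted infections
beta, gamma = 0.3, 0.1

def hybrid_loss(t_obs, i_obs, t_coll):
    data_term = torch.mean((net(t_obs) - i_obs) ** 2)
    t = t_coll.clone().requires_grad_(True)
    i = net(t)
    di_dt = torch.autograd.grad(i.sum(), t, create_graph=True)[0]
    physics_term = torch.mean((di_dt - (beta - gamma) * i) ** 2)
    return data_term + physics_term

t_obs = torch.linspace(0, 10, 20).unsqueeze(1)
i_obs = torch.exp(0.2 * t_obs)           # toy observed epidemic curve
t_coll = torch.rand(50, 1) * 10          # collocation points for the residual
print(hybrid_loss(t_obs, i_obs, t_coll))
```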
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive evidence lower bounds for ME-NODE and develop efficient training algorithms.
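A hedged sketch of the mixed-effects idea (illustrative, not the paper's model): every subject's trajectory follows the same neural vector field (the fixed effects) but receives a per-subject latent b_i (the random effect) as an extra input; in the variational setup, a posterior q(b_i) would be inferred per subject via the evidence lower bound, which is omitted here.
```python
# Neural ODE with shared weights and a per-subject random effect b_i,
# integrated with a simple Euler scheme for self-containment (sketch only).
import torch

vf = torch.nn.Sequential(torch.nn.Linear(2 + 3, 32), torch.nn.Tanh(),
                         torch.nn.Linear(32, 2))    # state dim 2, b_i dim 3

def trajectory(x0, b_i, n_steps=50, dt=0.1):
    x, xs = x0, [x0]
    for _ in range(n_steps):
        x = x + dt * vf(torch.cat([x, b_i], dim=-1))  # dx/dt = f(x, b_i)
        xs.append(x)
    return torch.stack(xs)

x0 = torch.zeros(2)
print(trajectory(x0, b_i=torch.randn(3))[-1])  # subject 1
print(trajectory(x0, b_i=torch.randn(3))[-1])  # subject 2, same vf, new b_i
```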
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Disentangled Generative Models for Robust Prediction of System Dynamics [2.6424064030995957]
In this work, we treat the domain parameters of dynamical systems as factors of variation of the data generating process.
By leveraging ideas from supervised disentanglement and causal factorization, we aim to separate the domain parameters from the dynamics in the latent space of generative models.
Results indicate that disentangled VAEs adapt better to domain parameter spaces that were not present in the training data.
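One simple form of such supervision, sketched under assumed names (not the paper's exact model): a slice of the VAE latent mean is regressed onto the known domain parameters, which encourages the remaining dimensions to carry the parameter-free dynamics.
```python
# VAE loss with a supervised latent slice tied to domain parameters (sketch).
import torch

def disentangled_vae_loss(x_recon, x, mu, logvar, theta_true, k, w=1.0):
    """First k latent dims are supervised with the domain parameters."""
    recon = torch.mean((x_recon - x) ** 2)
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    sup = torch.mean((mu[:, :k] - theta_true) ** 2)   # supervised factor
    return recon + kl + w * sup

B, latent, k = 8, 6, 2
x, x_recon = torch.randn(B, 10), torch.randn(B, 10)
mu, logvar = torch.randn(B, latent), torch.zeros(B, latent)
print(disentangled_vae_loss(x_recon, x, mu, logvar, torch.randn(B, k), k))
```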
arXiv Detail & Related papers (2021-08-26T09:58:06Z)