Cubature Kalman Filter Based Training of Hybrid Differential Equation
Recurrent Neural Network Physiological Dynamic Models
- URL: http://arxiv.org/abs/2110.06089v1
- Date: Tue, 12 Oct 2021 15:38:13 GMT
- Title: Cubature Kalman Filter Based Training of Hybrid Differential Equation
Recurrent Neural Network Physiological Dynamic Models
- Authors: Ahmet Demirkaya, Tales Imbiriba, Kyle Lockwood, Sumientra Rampersad,
Elie Alhajjar, Giovanna Guidoboni, Zachary Danziger, Deniz Erdogmus
- Abstract summary: We show how we can approximate missing ordinary differential equations with known ODEs using a neural network approximation.
Results indicate that this RBSE approach to training the NN parameters yields better outcomes (measurement/state estimation accuracy) than training the neural network with backpropagation.
- Score: 13.637931956861758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling biological dynamical systems is challenging due to the
interdependence of different system components, some of which are not fully
understood. To fill existing gaps in our ability to mechanistically model
physiological systems, we propose to combine neural networks with physics-based
models. Specifically, we demonstrate how we can approximate missing ordinary
differential equations (ODEs) coupled with known ODEs using Bayesian filtering
techniques to train the model parameters and simultaneously estimate dynamic
state variables. As a study case we leverage a well-understood model for blood
circulation in the human retina and replace one of its core ODEs with a neural
network approximation, representing the case where we have incomplete knowledge
of the physiological state dynamics. Results demonstrate that state dynamics
corresponding to the missing ODEs can be approximated well using a neural
network trained using a recursive Bayesian filtering approach in a fashion
coupled with the known state dynamic differential equations. This demonstrates
that dynamics and impact of missing state variables can be captured through
joint state estimation and model parameter estimation within a recursive
Bayesian state estimation (RBSE) framework. Results also indicate that this
RBSE approach to training the NN parameters yields better outcomes
(measurement/state estimation accuracy) than training the neural network with
backpropagation through time in the same setting.
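The core idea — augmenting the state vector with the weights of a neural network that stands in for the missing ODE, then running a cubature Kalman filter over the augmented state — can be sketched as follows. This is a minimal illustration under assumed toy dynamics, network sizes, and noise levels; it is not the paper's retinal-circulation model or its actual code.

```python
import numpy as np

def net(w, x):
    # Tiny one-hidden-layer NN standing in for the missing ODE's right-hand
    # side; architecture and sizes are illustrative assumptions.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    return W2 @ np.tanh(W1 @ x + b1) + b2

def step(z, dt):
    # Augmented transition: known ODE for x1, NN surrogate for x2's
    # dynamics, NN weights modeled as (nearly) static extra states.
    x, w = z[:2], z[2:]
    out = z.copy()
    out[0] += dt * (-0.5 * x[0] + x[1])   # known ODE (illustrative)
    out[1] += dt * net(w, x)              # missing ODE approximated by NN
    return out

def ckf_update(m, P, y, dt, Q, R):
    # One predict/update cycle of a cubature Kalman filter over the
    # augmented state [x1, x2, weights].
    n = m.size
    S = np.linalg.cholesky(P + 1e-9 * np.eye(n))
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # cubature points
    prop = np.array([step(m + S @ xi[:, i], dt) for i in range(2 * n)]).T
    m_pred = prop.mean(axis=1)
    d = prop - m_pred[:, None]
    P_pred = d @ d.T / (2 * n) + Q
    H = np.zeros((1, n)); H[0, 0] = 1.0    # we measure x1 only
    S_y = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S_y)
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S_y @ K.T
    return m_new, 0.5 * (P_new + P_new.T)  # symmetrize for stability

# Generate data from the full (true) system, then filter with the hybrid model.
rng = np.random.default_rng(0)
dt = 0.05
x_true = np.array([1.0, 1.0])
m = np.zeros(11); m[:2] = [0.8, 0.5]             # imperfect initial guess
P = 0.1 * np.eye(11)
Q = np.diag([1e-4, 1e-4] + [1e-6] * 9)           # tiny noise lets weights adapt
R = np.array([[0.01]])
for _ in range(200):
    # true "missing" dynamics: dx2/dt = -x2, known only to the simulator
    x_true = x_true + dt * np.array([-0.5 * x_true[0] + x_true[1], -x_true[1]])
    y = np.array([x_true[0] + 0.1 * rng.standard_normal()])
    m, P = ckf_update(m, P, y, dt, Q, R)

print(abs(m[0] - x_true[0]))  # tracking error on the observed state
```

Because the NN weights sit inside the filtered state, each measurement update adjusts them jointly with the dynamic states — the "training" is the filtering itself, with no backpropagation pass.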
Related papers
- Neural Port-Hamiltonian Differential Algebraic Equations for Compositional Learning of Electrical Networks [20.12750360095627]
We develop compositional learning algorithms for coupled dynamical systems.
We use neural networks to parametrize unknown terms in differential and algebraic components of a port-Hamiltonian DAE.
We train individual N-PHDAE models for separate grid components, before coupling them to accurately predict the behavior of larger-scale networks.
arXiv Detail & Related papers (2024-12-15T15:13:11Z) - Recurrent convolutional neural networks for non-adiabatic dynamics of quantum-classical systems [1.2972104025246092]
We present an RNN model based on convolutional neural networks for modeling the nonlinear non-adiabatic dynamics of hybrid quantum-classical systems.
Validation studies show that the trained PARC model could reproduce the space-time evolution of a one-dimensional semi-classical Holstein model.
arXiv Detail & Related papers (2024-12-09T16:23:25Z) - Generative Modeling of Neural Dynamics via Latent Stochastic Differential Equations [1.5467259918426441]
We propose a framework for developing computational models of biological neural systems.
We employ a system of coupled differential equations with differentiable drift and diffusion functions.
We show that these hybrid models achieve competitive performance in predicting stimulus-evoked neural and behavioral responses.
arXiv Detail & Related papers (2024-12-01T09:36:03Z) - On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z) - Analysis of Numerical Integration in RNN-Based Residuals for Fault
Diagnosis of Dynamic Systems [0.6999740786886536]
Data-driven modeling and machine learning are widely used to model the behavior of dynamic systems.
The paper includes a case study of a heavy-duty truck's after-treatment system to highlight the potential of these techniques for improving fault diagnosis performance.
arXiv Detail & Related papers (2023-05-08T12:48:18Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Parameter Estimation with Dense and Convolutional Neural Networks
Applied to the FitzHugh-Nagumo ODE [0.0]
We present deep neural networks using dense and convolutional layers to solve an inverse problem, where we seek to estimate parameters of the FitzHugh-Nagumo model.
We demonstrate that deep neural networks have the potential to estimate parameters in dynamical models and processes, and that they can predict parameters accurately within this framework.
arXiv Detail & Related papers (2020-12-12T01:20:42Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.