Neural Integro-Differential Equations
- URL: http://arxiv.org/abs/2206.14282v1
- Date: Tue, 28 Jun 2022 20:39:35 GMT
- Title: Neural Integro-Differential Equations
- Authors: Emanuele Zappala, Antonio Henrique de Oliveira Fonseca, Andrew Henry
Moberly, Michael James Higley, Chadi Abdallah, Jessica Cardin, David van Dijk
- Abstract summary: Integro-Differential Equations (IDEs) are generalizations of differential equations that comprise both an integral and a differential component.
NIDE is a framework that models ordinary and integral components of IDEs using neural networks.
We show that NIDE can decompose dynamics into their Markovian and non-Markovian constituents.
- Score: 2.001149416674759
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling continuous dynamical systems from discretely sampled observations is
a fundamental problem in data science. Often, such dynamics are the result of
non-local processes that involve an integral over time. Such systems are
therefore modeled with Integro-Differential Equations (IDEs): generalizations of
differential equations that comprise both an integral and a differential
component. For example, brain dynamics are not accurately modeled by
differential equations since their behavior is non-Markovian, i.e. dynamics are
in part dictated by history. Here, we introduce the Neural IDE (NIDE), a
framework that models ordinary and integral components of IDEs using neural
networks. We test NIDE on several toy and brain activity datasets and
demonstrate that NIDE outperforms other models, including Neural ODE. These
tasks include time extrapolation as well as predicting dynamics from unseen
initial conditions, which we test on whole-cortex activity recordings in freely
behaving mice. Further, we show that NIDE can decompose dynamics into their
Markovian and non-Markovian constituents via the learned integral operator,
which we test on fMRI brain activity recordings of people on ketamine. Finally,
the integrand of the integral operator provides a latent space that gives
insight into the underlying dynamics, which we demonstrate on wide-field brain
imaging recordings. Altogether, NIDE is a novel approach that enables modeling
of complex non-local dynamics with neural networks.
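To make the setup concrete, the following is a minimal sketch (not the authors' implementation) of how such a model could look: one network F for the differential, Markovian part and one network K for the integrand of the memory term, with the integral approximated by trapezoidal quadrature over the computed history. The explicit Euler stepping, layer sizes, and names are our own assumptions.
```python
import torch
import torch.nn as nn


class NeuralIDE(nn.Module):
    """Sketch of a neural integro-differential equation,
        dy/dt = F(y(t), t) + integral from 0 to t of K(y(s), t, s) ds,
    where F captures the local (Markovian) part and K the non-local
    (non-Markovian) memory term."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.K = nn.Sequential(nn.Linear(dim + 2, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, y0: torch.Tensor, ts: torch.Tensor) -> torch.Tensor:
        # Explicit Euler stepping; the memory integral is approximated with a
        # trapezoidal rule over the trajectory computed so far.
        ys = [y0]
        for i in range(1, len(ts)):
            t, dt = ts[i - 1], ts[i] - ts[i - 1]
            y = ys[-1]
            local = self.F(torch.cat([y, t.view(1)]))
            hist = torch.stack(ys)                        # (i, dim): y(s) for s <= t
            s = ts[:i].unsqueeze(-1)                      # (i, 1)
            t_rep = t.expand(i).unsqueeze(-1)             # (i, 1)
            integrand = self.K(torch.cat([hist, t_rep, s], dim=-1))
            if i > 1:
                w = (ts[1:i] - ts[:i - 1]).unsqueeze(-1)  # trapezoid widths
                memory = 0.5 * ((integrand[1:] + integrand[:-1]) * w).sum(dim=0)
            else:
                memory = torch.zeros_like(y)
            ys.append(y + dt * (local + memory))
        return torch.stack(ys)


# Usage: fit observed trajectories by backpropagating an MSE loss through the solver.
model = NeuralIDE(dim=3)
ts = torch.linspace(0.0, 1.0, 20)
pred = model(torch.randn(3), ts)  # (20, 3) predicted trajectory
```
In this reading, setting the memory term to zero recovers a plain neural ODE, which is what allows the learned integral operator to isolate the non-Markovian part of the dynamics.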
Related papers
- Meta-Dynamical State Space Models for Integrative Neural Data Analysis [8.625491800829224]
Learning shared structure across environments facilitates rapid learning and adaptive behavior in neural systems.
There has been limited work exploiting the shared structure in neural activity during similar tasks for learning latent dynamics from neural recordings.
We propose a novel approach for meta-learning this solution space from task-related neural activity of trained animals.
arXiv Detail & Related papers (2024-10-07T19:35:49Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain effective network via a dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Neural Fractional Differential Equations [2.812395851874055]
Fractional Differential Equations (FDEs) are essential tools for modelling complex systems in science and engineering.
We propose the Neural FDE, a novel deep neural network architecture that adjusts an FDE to the dynamics of data.
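As a rough illustration in our own notation (the Caputo derivative and the learnable order are assumptions about a typical formulation, not the paper's exact setup), a Neural FDE replaces the right-hand side of a fractional initial value problem with a neural network:
```latex
% Caputo-type fractional IVP with a learned right-hand side f_theta;
% the fractional order alpha may itself be fit to the data.
\[
  {}^{C}\!D^{\alpha}_{t}\, y(t) = f_{\theta}\bigl(t, y(t)\bigr),
  \qquad y(0) = y_{0}, \qquad 0 < \alpha \le 1 .
\]
```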
arXiv Detail & Related papers (2024-03-05T07:45:29Z)
- Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO), which directly models dynamics as trajectories instead of just next-step prediction.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z)
- Learning Latent Dynamics via Invariant Decomposition and (Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- Neural Integral Equations [3.087238735145305]
We introduce a method for learning unknown integral operators from data using an IE solver.
We also present Attentional Neural Integral Equations (ANIE), which replaces the integral with self-attention.
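Schematically, and in our own notation, the underlying object is a second-kind integral equation whose operator is a neural network; ANIE's self-attention can then be read as a learned quadrature over sampled time points (q, k, v denote assumed query/key/value maps, not the paper's exact formulation):
```latex
% Integral equation with a neural operator G; in ANIE the integral over s is
% approximated by self-attention over sampled time points s_j.
\[
  y(t) = f\bigl(y(t), t\bigr) + \int_{0}^{1} G\bigl(y(s), t, s\bigr)\, ds
  \;\approx\;
  f\bigl(y(t), t\bigr) + \sum_{j} \operatorname{softmax}_{j}\!\Bigl(\tfrac{q(t)\, k(s_{j})^{\top}}{\sqrt{d}}\Bigr)\, v\bigl(y(s_{j})\bigr).
\]
```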
arXiv Detail & Related papers (2022-09-30T02:32:17Z)
- Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model them in the Laplace domain, where history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
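In standard Laplace-transform notation (schematic, not the paper's exact parameterization), the trajectory is represented in the Laplace domain and mapped back to the time domain, where history effects appear as weighted sums of complex exponentials:
```latex
% Trajectories are represented in the Laplace domain and inverted back to time;
% history effects then become algebraic factors in s
% (e.g. a pure delay tau contributes a factor e^{-s tau}).
\[
  F(s) = \mathcal{L}\{x\}(s) = \int_{0}^{\infty} e^{-st}\, x(t)\, dt,
  \qquad
  x(t) = \mathcal{L}^{-1}\{F\}(t) \approx \sum_{k} c_{k}\, e^{s_{k} t},
  \quad s_{k} \in \mathbb{C}.
\]
```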
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
- Ranking of Communities in Multiplex Spatiotemporal Models of Brain Dynamics [0.0]
We propose an interpretation of neural HMMs as multiplex brain state graph models, which we term Hidden Markov Graph Models (HMGMs).
This interpretation allows for dynamic brain activity to be analysed using the full repertoire of network analysis techniques.
We produce a new tool for determining important communities of brain regions using a random walk-based procedure.
arXiv Detail & Related papers (2022-03-17T12:14:09Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
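A loss of this kind typically combines a data-fit term with the residual of a mechanistic model. The sketch below uses an SIR compartmental model purely as an assumed example; the function name, arguments, and weighting are ours, not the paper's.
```python
import torch


def einn_style_loss(model, t_obs, i_obs, t_coll, beta, gamma, lam=1.0):
    """Hedged sketch of a physics-informed loss for epidemic forecasting:
    a data-fit term on observed infections plus the residual of an assumed
    SIR model. `model` maps a time column of shape (N, 1) to (S, I, R)
    fractions of shape (N, 3); beta and gamma are the SIR rates."""
    # Data term: match the observed infected fraction.
    pred = model(t_obs)
    data_loss = torch.mean((pred[:, 1] - i_obs) ** 2)

    # Physics term: SIR residuals at collocation points.
    t = t_coll.clone().requires_grad_(True)
    s, i, r = model(t).unbind(dim=-1)
    ds = torch.autograd.grad(s.sum(), t, create_graph=True)[0].squeeze(-1)
    di = torch.autograd.grad(i.sum(), t, create_graph=True)[0].squeeze(-1)
    dr = torch.autograd.grad(r.sum(), t, create_graph=True)[0].squeeze(-1)
    residual = ((ds + beta * s * i) ** 2                 # dS/dt = -beta*S*I
                + (di - beta * s * i + gamma * i) ** 2   # dI/dt = beta*S*I - gamma*I
                + (dr - gamma * i) ** 2)                 # dR/dt = gamma*I
    return data_loss + lam * residual.mean()
```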
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
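As a minimal sketch of that structure (our own layer sizes and names, not the reference implementation): each feature gets its own small subnetwork, and the prediction is the sum of the per-feature outputs plus a bias, which is what makes each feature's contribution directly inspectable.
```python
import torch
import torch.nn as nn


class NeuralAdditiveModel(nn.Module):
    """Minimal NAM-style sketch: one small subnetwork per feature;
    predictions are the sum of per-feature contributions plus a bias."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features); each column is routed to its own subnetwork.
        contributions = [net(x[:, j:j + 1]) for j, net in enumerate(self.feature_nets)]
        return torch.cat(contributions, dim=-1).sum(dim=-1, keepdim=True) + self.bias


# The per-feature outputs can be plotted against each input feature to read
# off its learned shape function, which is the source of interpretability.
```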
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.