Decomposed Linear Dynamical Systems (dLDS) for learning the latent
components of neural dynamics
- URL: http://arxiv.org/abs/2206.02972v2
- Date: Fri, 16 Jun 2023 20:20:34 GMT
- Title: Decomposed Linear Dynamical Systems (dLDS) for learning the latent
components of neural dynamics
- Authors: Noga Mudrik, Yenho Chen, Eva Yezerets, Christopher J. Rozell, and Adam
S. Charles
- Abstract summary: We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples, we demonstrate that our model approximates the original system well.
- Score: 6.829711787905569
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning interpretable representations of neural dynamics at a population
level is a crucial first step to understanding how observed neural activity
relates to perception and behavior. Models of neural dynamics often focus on
either low-dimensional projections of neural activity, or on learning dynamical
systems that explicitly relate to the neural state over time. We discuss how
these two approaches are interrelated by considering dynamical systems as
representative of flows on a low-dimensional manifold. Building on this
concept, we propose a new decomposed dynamical system model that represents
complex non-stationary and nonlinear dynamics of time series data as a sparse
combination of simpler, more interpretable components. Our model is trained
through a dictionary learning procedure, where we leverage recent results in
tracking sparse vectors over time. The decomposed nature of the dynamics is
more expressive than previous switched approaches for a given number of
parameters and enables modeling of overlapping and non-stationary dynamics. In
both continuous-time and discrete-time instructional examples, we demonstrate
that our model approximates the original system well, learns efficient
representations, and captures smooth transitions between dynamical modes,
focusing on intuitive low-dimensional non-stationary linear and nonlinear
systems. Furthermore, we highlight our model's ability to efficiently capture
and demix population dynamics generated from multiple independent subnetworks,
a task that is computationally impractical for switched models. Finally, we
apply our model to "full brain" neural recordings of C. elegans,
illustrating a diversity of dynamics that is obscured when classified into
discrete states.
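To make the decomposed-dynamics idea concrete, here is a minimal sketch in which the state evolves under a sparse, time-varying mixture of a few fixed linear operators, with a smooth hand-off between modes. The operators, coefficient schedule, and dimensions are illustrative assumptions, not the paper's learned dictionary or its dictionary-learning training procedure.

```python
# Minimal sketch of a decomposed LDS: the state evolves under a sparse,
# time-varying combination of a few fixed linear operators. All choices
# below are illustrative assumptions, not the paper's trained model.
import numpy as np

rng = np.random.default_rng(0)

# Dictionary of two simple 2-D linear operators (discrete time).
theta = 0.1
F_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # slow rotation
F_dec = np.array([[0.98, 0.00],
                  [0.00, 0.95]])                     # mild decay
dictionary = [F_rot, F_dec]

T = 200
x = np.zeros((T, 2))
x[0] = rng.standard_normal(2)

for t in range(T - 1):
    # Sparse coefficients: smooth hand-off from rotation to decay around
    # t = 80, mimicking "smooth transitions between dynamical modes".
    w = min(max((t - 80) / 40.0, 0.0), 1.0)
    c = np.array([1.0 - w, w])  # both nonzero only near the transition
    F_t = sum(cj * Fj for cj, Fj in zip(c, dictionary))
    x[t + 1] = F_t @ x[t]

print("final state:", x[-1])
```

Near the hand-off both coefficients are nonzero, which is the kind of overlapping, non-stationary behavior that a hard switching model with the same dictionary cannot represent.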
Related papers
- Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems [2.170477444239546]
We develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS).
Our method builds on previous work modeling the latent state evolution via a differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs).
Our approach resolves key limitations of the rSLDS, such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics.
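As a rough illustration of the GP-SDE idea referenced above (latent dynamics whose drift is a Gaussian process), the sketch below samples a drift function from a GP prior on a grid and integrates the resulting 1-D SDE with Euler-Maruyama. The kernel, grid, and noise scale are assumptions for illustration; the gpSLDS itself involves more structure and posterior inference than this.

```python
# Illustrative 1-D GP-SDE: sample a drift f ~ GP(0, k) on a grid, then
# simulate dx = f(x) dt + sigma dW by Euler-Maruyama with a nearest-grid
# lookup. All hyperparameters are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)

grid = np.linspace(-3.0, 3.0, 61)

# RBF kernel Gram matrix with a small jitter for numerical stability.
ell = 0.7
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / ell**2)
K += 1e-6 * np.eye(len(grid))

# One draw from the GP prior defines the (random) drift on the grid.
f_grid = np.linalg.cholesky(K) @ rng.standard_normal(len(grid))

def drift(x):
    # Nearest-neighbor lookup of the sampled drift; crude but sufficient.
    return f_grid[np.argmin(np.abs(grid - x))]

dt, sigma, n_steps = 0.01, 0.3, 2000
x = 0.0
traj = [x]
for _ in range(n_steps):
    x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x = float(np.clip(x, grid[0], grid[-1]))  # stay on the sampled grid
    traj.append(x)

print("trajectory mean:", np.mean(traj))
```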
arXiv Detail & Related papers (2024-07-19T15:32:15Z)
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamical systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- Semi-Supervised Learning of Dynamical Systems with Neural Ordinary Differential Equations: A Teacher-Student Model Approach [10.20098335268973]
TS-NODE is the first semi-supervised approach to modeling dynamical systems with NODE.
We show significant performance improvements over a baseline Neural ODE model on multiple dynamical system modeling tasks.
arXiv Detail & Related papers (2023-10-19T19:17:12Z)
- Interpretable statistical representations of neural population dynamics and geometry [4.459704414303749]
We introduce a representation learning method, MARBLE, that decomposes on-manifold dynamics into local flow fields and maps them into a common latent space.
In simulated non-linear dynamical systems, recurrent neural networks, and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations.
These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations.
arXiv Detail & Related papers (2023-04-06T21:11:04Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Learning Fine Scale Dynamics from Coarse Observations via Inner Recurrence [0.0]
Recent work has focused on data-driven learning of the evolution of unknown systems via deep neural networks (DNNs).
This paper presents a computational technique to learn the fine-scale dynamics from such coarsely observed data.
arXiv Detail & Related papers (2022-06-03T20:28:52Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Learning Continuous System Dynamics from Irregularly-Sampled Partial Observations [33.63818978256567]
We present LG-ODE, a latent ordinary differential equation generative model for modeling multi-agent dynamical systems with known graph structure.
It can simultaneously learn the embedding of high dimensional trajectories and infer continuous latent system dynamics.
Our model employs a novel encoder parameterized by a graph neural network that can infer initial states in an unsupervised way.
arXiv Detail & Related papers (2020-11-08T01:02:22Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
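For a sense of what "networks of linear first-order dynamical systems" means in practice, here is a toy liquid time-constant style unit: a leaky first-order ODE whose effective time constant is modulated by an input-dependent gate. The gate parameterization and all constants are assumptions sketched from the general idea, not the paper's exact architecture.

```python
# Toy liquid time-constant style unit: a leaky integrator whose effective
# time constant depends on the input through a bounded gate. Parameter
# values are arbitrary assumptions for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

tau, A = 1.0, 2.0             # base time constant and bias attractor
w_x, w_i, b = 0.5, 1.5, -1.0  # gate parameters (assumed, not learned here)

def step(x, u, dt=0.01):
    f = sigmoid(w_x * x + w_i * u + b)   # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A  # first-order linear dynamics
    return x + dt * dxdt

x = 0.0
for t in range(500):
    u = 1.0 if t < 250 else 0.0  # step input that switches off halfway
    x = step(x, u)
print("final state:", x)
```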
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
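One well-known construction matching this description projects a nominal vector field so that a Lyapunov function always decreases along trajectories. The sketch below uses a fixed quadratic V(x) = 0.5 ||x||^2 and a hand-written nominal field; both are stand-in assumptions for the learned networks in the paper.

```python
# Stability-by-projection sketch: given nominal dynamics f and a Lyapunov
# candidate V, remove any component of f that would let V decrease slower
# than rate -alpha * V. With V(x) = 0.5 ||x||^2 the origin is globally
# asymptotically stable. f and V here are illustrative assumptions.
import numpy as np

alpha = 0.1

def V(x):
    return 0.5 * float(x @ x)

def grad_V(x):
    return x  # gradient of 0.5 ||x||^2

def f_nominal(x):
    # Arbitrary (possibly unstable) nominal vector field for illustration.
    return np.array([x[1], 0.5 * x[0]])

def f_stable(x):
    g = grad_V(x)
    violation = float(g @ f_nominal(x)) + alpha * V(x)
    if violation <= 0.0 or not g.any():
        return f_nominal(x)  # already decreasing V fast enough
    return f_nominal(x) - g * (violation / float(g @ g))

x = np.array([1.0, -0.5])
for _ in range(1000):
    x = x + 0.01 * f_stable(x)  # Euler rollout of the projected dynamics
print("V after rollout:", V(x))  # should be close to 0
```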
arXiv Detail & Related papers (2020-01-17T00:04:45Z)