Real-Time Variational Method for Learning Neural Trajectory and its Dynamics
- URL: http://arxiv.org/abs/2305.11278v1
- Date: Thu, 18 May 2023 19:52:46 GMT
- Title: Real-Time Variational Method for Learning Neural Trajectory and its Dynamics
- Authors: Matthew Dowling, Yuan Zhao, Il Memming Park
- Abstract summary: We introduce the exponential family variational Kalman filter (eVKF), an online Bayesian method aimed at inferring latent trajectories while simultaneously learning the dynamical system generating them.
We derive a closed-form variational analogue to the predict step of the Kalman filter which leads to a provably tighter bound on the ELBO compared to another online variational method.
We validate our method on synthetic and real-world data, and, notably, show that it achieves competitive performance.
- Score: 7.936841911281107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Latent variable models have become instrumental in computational neuroscience
for reasoning about neural computation. This has fostered the development of
powerful offline algorithms for extracting latent neural trajectories from
neural recordings. However, despite the potential of real-time alternatives to
give immediate feedback to experimentalists and enhance experimental design,
they have received markedly less attention. In this work, we introduce the
exponential family variational Kalman filter (eVKF), an online recursive
Bayesian method aimed at inferring latent trajectories while simultaneously
learning the dynamical system generating them. eVKF works for arbitrary
likelihoods and utilizes the constant base measure exponential family to model
the latent state stochasticity. We derive a closed-form variational analogue to
the predict step of the Kalman filter which leads to a provably tighter bound
on the ELBO compared to another online variational method. We validate our
method on synthetic and real-world data, and, notably, show that it achieves
competitive performance.
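To make the online recursion concrete, the sketch below implements the classical predict/update filtering loop for the linear-Gaussian special case, where both steps have closed forms; the eVKF's contribution is a closed-form variational analogue of the predict step that works for arbitrary likelihoods and constant base measure exponential family latents. All names below are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: classical Kalman predict/update recursion, i.e. the
# linear-Gaussian special case of the online filtering loop that the eVKF
# generalizes to arbitrary likelihoods. Illustrative only.
import numpy as np

def kalman_step(m, P, y, A, Q, C, R):
    """One online filtering step on a new observation y."""
    # Predict: propagate the current posterior through the dynamics.
    m_pred, P_pred = A @ m, A @ P @ A.T + Q
    # Update: condition on y (closed form only for Gaussian likelihoods;
    # the eVKF replaces this with a variational update).
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    return m_new, P_new

# Streaming usage: one cheap, constant-cost step per incoming sample.
A, Q = np.array([[0.95]]), np.array([[0.1]])
C, R = np.array([[1.0]]), np.array([[0.5]])
m, P = np.zeros(1), np.eye(1)
for y in np.random.default_rng(0).standard_normal((100, 1)):
    m, P = kalman_step(m, P, y, A, Q, C, R)
```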
Related papers
- Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems [2.170477444239546]
We develop an approach that balances expressive nonlinear dynamics with interpretability: the Gaussian Process Switching Linear Dynamical System (gpSLDS).
Our method builds on previous work modeling the latent state evolution via a stochastic differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs).
Our approach resolves key limitations of the rSLDS such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics.
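To give a feel for the "smooth switching" idea, the toy vector field below blends linear regimes with smooth state-dependent weights, avoiding the hard regime boundaries that produce artifacts in the rSLDS. This is only an illustrative sketch; the gpSLDS's defining ingredient, the Gaussian process prior on the dynamics, is omitted, and all names are assumptions.

```python
# Toy "smoothly switching" linear dynamics: blend K linear systems with
# smooth, state-dependent softmax weights so the vector field has no hard
# regime boundaries. Illustrative assumption, not the gpSLDS's GP-based
# formulation.
import numpy as np

def soft_switching_field(x, As, bs, ws, cs):
    """f(x) = sum_k p_k(x) (A_k x + b_k), with p = softmax of linear gates."""
    logits = np.array([w @ x + c for w, c in zip(ws, cs)])
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return sum(pk * (A @ x + b) for pk, A, b in zip(p, As, bs))

# Euler-integrate a trajectory under two smoothly blended regimes.
As = [np.array([[0.0, -1.0], [1.0, 0.0]]),      # rotation
      np.array([[-0.5, 0.0], [0.0, -0.5]])]     # decay
bs = [np.zeros(2), np.zeros(2)]
ws, cs = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])], [0.0, 0.0]
x, dt, traj = np.array([1.0, 0.0]), 0.01, []
for _ in range(500):
    x = x + dt * soft_switching_field(x, As, bs, ws, cs)
    traj.append(x)
```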
arXiv Detail & Related papers (2024-07-19T15:32:15Z)
- Correcting auto-differentiation in neural-ODE training [19.472357078065194]
We find that when a neural network employs high-order forms to approximate the underlying ODE flows, brute-force computation using auto-differentiation often produces non-converging artificial oscillations.
We propose a straightforward post-processing technique that effectively eliminates these oscillations, rectifies the computation and thus respects the updates of the underlying flow.
arXiv Detail & Related papers (2023-06-03T20:34:14Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic and does not require step-size tuning.
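The efficiency of such a decomposition rests on the Woodbury identity: with a covariance of the form diag(d) + W W^T, solves cost O(P r^2) rather than O(P^3). A minimal sketch of that trick (an assumption about the mechanics, not the authors' implementation) is:

```python
# Minimal sketch of why a diagonal-plus-low-rank posterior covariance,
# Sigma = diag(d) + W @ W.T with W of shape (P, r) and r << P, keeps EKF
# updates cheap: solves cost O(P r^2) via the Woodbury identity instead of
# O(P^3). Illustrative assumption, not the authors' implementation.
import numpy as np

def woodbury_solve(d, W, v):
    """Solve (diag(d) + W @ W.T) x = v in O(P r^2)."""
    Dinv_v = v / d
    Dinv_W = W / d[:, None]
    cap = np.eye(W.shape[1]) + W.T @ Dinv_W     # r x r capacitance matrix
    return Dinv_v - Dinv_W @ np.linalg.solve(cap, W.T @ Dinv_v)

# Sanity check against the dense solve.
rng = np.random.default_rng(0)
d, W, v = rng.uniform(1, 2, 50), rng.standard_normal((50, 3)), rng.standard_normal(50)
assert np.allclose(woodbury_solve(d, W, v),
                   np.linalg.solve(np.diag(d) + W @ W.T, v))
```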
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Cubature Kalman Filter Based Training of Hybrid Differential Equation Recurrent Neural Network Physiological Dynamic Models [13.637931956861758]
We show how missing ordinary differential equations can be approximated by a neural network and combined with known ODEs in a hybrid model.
Results indicate that this recursive Bayesian state estimation (RBSE) approach to training the NN parameters yields better outcomes (measurement/state estimation accuracy) than training the neural network with backpropagation.
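For context, the cubature rule that gives the CKF its name is the standard third-degree spherical-radial rule, which approximates Gaussian expectations with 2n equally weighted points. The sketch below shows that rule in isolation, not the paper's hybrid ODE-RNN model.

```python
# The standard third-degree spherical-radial cubature rule behind any CKF:
# E[f(x)] for x ~ N(m, P) is approximated by 2n equally weighted points at
# m +/- sqrt(n) * (columns of chol(P)).
import numpy as np

def cubature_points(m, P):
    n = len(m)
    L = np.linalg.cholesky(P)
    units = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # 2n directions
    return m[:, None] + L @ units       # shape (n, 2n); weights are 1/(2n)

def cubature_expectation(f, m, P):
    pts = cubature_points(m, P)
    return np.mean([f(pts[:, i]) for i in range(pts.shape[1])], axis=0)

# Example: E[x^2] under N(1, 4) is 1 + 4 = 5; the rule recovers it exactly
# since it is exact for polynomials up to degree three.
print(cubature_expectation(lambda x: x @ x, np.array([1.0]), np.array([[4.0]])))
```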
arXiv Detail & Related papers (2021-10-12T15:38:13Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
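The general mechanism, shown in the hedged sketch below on a toy contractive map rather than a spiking network, is implicit differentiation at the equilibrium: once the feedback network settles to a fixed point z* = f(z*, x), gradients require only the Jacobian at z*, not the forward trajectory.

```python
# Sketch of gradient computation by implicit differentiation at an
# equilibrium. The contractive toy map below is an illustrative
# assumption, not a spiking network.
import numpy as np

def fixed_point(f, z0, iters=500):
    z = z0
    for _ in range(iters):
        z = f(z)
    return z

rng = np.random.default_rng(0)
W = 0.2 * rng.standard_normal((3, 3))      # small weights => contraction
x = rng.standard_normal(3)
z_star = fixed_point(lambda z: np.tanh(W @ z + x), np.zeros(3))

# Implicit function theorem: dz*/dx = (I - df/dz)^(-1) df/dx at z*.
s = 1.0 - np.tanh(W @ z_star + x) ** 2     # tanh' at the equilibrium
J = s[:, None] * W                         # df/dz
dfdx = np.diag(s)                          # df/dx
dLdz = 2.0 * z_star                        # gradient of L(z) = ||z||^2
grad_x = np.linalg.solve((np.eye(3) - J).T, dLdz) @ dfdx
```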
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Bubblewrap: Online tiling and real-time flow prediction on neural manifolds [2.624902795082451]
We propose a method that combines fast, stable dimensionality reduction with a soft tiling of the resulting neural manifold.
The resulting model can be trained at kilohertz data rates, produces accurate approximations of neural dynamics within minutes, and generates predictions on submillisecond time scales.
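As a crude illustration of the two ingredients named above (a soft tiling plus flow prediction between tiles), the sketch below maintains Gaussian responsibilities over fixed tile centers and an online soft transition count; the actual Bubblewrap streaming updates differ, and every name here is an assumption.

```python
# Crude sketch: soft tiling via Gaussian responsibilities, plus an online
# soft transition count used to predict flow between tiles. Illustrative
# assumption only; Bubblewrap adapts its tiles online and differs in detail.
import numpy as np

def responsibilities(x, centers, bandwidth=1.0):
    """Soft assignment of a data point to Gaussian tiles."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return w / w.sum()

rng = np.random.default_rng(0)
centers = rng.standard_normal((20, 2))   # fixed tile centers (assumption)
T = np.full((20, 20), 1e-3)              # smoothed soft transition counts
r_prev = None
for x in rng.standard_normal((1000, 2)):          # simulated data stream
    r = responsibilities(x, centers)
    if r_prev is not None:
        T += np.outer(r_prev, r)                  # accumulate transitions
    r_prev = r

# Flow prediction: expected tile occupancy one step ahead.
P = T / T.sum(axis=1, keepdims=True)
next_occupancy = r_prev @ P
```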
arXiv Detail & Related papers (2021-08-31T16:01:45Z)
- Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models [38.17499046781131]
We propose a novel approach towards estimating uncertain neural ODEs, avoiding the numerical integration bottleneck.
Our algorithm - distributional gradient matching (DGM) - jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss.
Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
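The structure of gradient matching, minus the Wasserstein loss and the neural smoother, can be conveyed in a few lines: fit a smoother to the trajectory, differentiate it analytically, and fit the dynamics model to those derivatives, with no ODE solver in the loop. The least-squares toy below is only that structural skeleton, not DGM itself.

```python
# Structural sketch of gradient matching: smoother -> analytic derivatives
# -> fit dynamics model to the derivatives. DGM instead matches gradient
# *distributions* under a Wasserstein loss with neural models for both
# parts; everything below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 80)
x = np.exp(-0.5 * t) + 0.01 * rng.standard_normal(80)   # noisy trajectory

coeffs = np.polyfit(t, x, deg=7)                # the "smoother"
x_hat = np.polyval(coeffs, t)                   # smoothed states
dx_hat = np.polyval(np.polyder(coeffs), t)      # the smoother's gradients

# Dynamics model f(x) = a x + b, trained by matching those gradients.
A = np.stack([x_hat, np.ones_like(x_hat)], axis=1)
a, b = np.linalg.lstsq(A, dx_hat, rcond=None)[0]
print(a, b)   # roughly a = -0.5, b = 0, recovering dx/dt = -0.5 x
```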
arXiv Detail & Related papers (2021-06-22T08:40:51Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
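BAIT's selection objective can be gestured at with a naive greedy loop over (for example, last-layer) gradient embeddings g_i, picking points whose accumulated Fisher information best covers the Fisher of the whole pool. The O(k N d^3) sketch below is an illustrative assumption, not BAIT's optimized implementation.

```python
# Naive greedy Fisher-coverage selection over gradient embeddings G (N x d):
# minimize tr(F_pool @ inv(F_sel + lam*I)) one point at a time.
# Illustrative assumption, not BAIT's optimized implementation.
import numpy as np

def greedy_fisher_select(G, k, lam=1e-2):
    N, d = G.shape
    F_pool = G.T @ G / N
    chosen, F_sel = [], lam * np.eye(d)
    for _ in range(k):
        scores = np.full(N, np.inf)
        for i in range(N):
            if i in chosen:
                continue
            F_try = F_sel + np.outer(G[i], G[i])
            scores[i] = np.trace(F_pool @ np.linalg.inv(F_try))
        best = int(np.argmin(scores))
        chosen.append(best)
        F_sel += np.outer(G[best], G[best])
    return chosen

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 8))     # pool of gradient embeddings
print(greedy_fisher_select(G, k=10))  # indices of the selected batch
```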
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for accelerating stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
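As a stand-in for LIG, which is computed in the latent space of a neural process, the sketch below uses a latent-space information gain that is available in closed form: the entropy reduction of a Bayesian linear regression weight posterior, IG(x) = 0.5 log(1 + x^T Sigma x / noise_var). This is a hedged analogy, not the paper's acquisition function.

```python
# Closed-form latent information gain for Bayesian linear regression, as a
# hedged analogy to LIG (which is computed in a neural process's latent
# space). All names are illustrative assumptions.
import numpy as np

def info_gain(x, Sigma, noise_var=0.1):
    """Entropy reduction of the weight posterior from querying input x."""
    return 0.5 * np.log1p(x @ Sigma @ x / noise_var)

def pick_query(candidates, Sigma, noise_var=0.1):
    gains = [info_gain(x, Sigma, noise_var) for x in candidates]
    return int(np.argmax(gains))   # most informative candidate index

rng = np.random.default_rng(0)
Sigma = np.eye(5)                           # current latent posterior cov
candidates = rng.standard_normal((50, 5))   # candidate simulation inputs
print(pick_query(candidates, Sigma))
```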
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains of stochastic regularization, making the difference in performance between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.