Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains
- URL: http://arxiv.org/abs/2306.01802v1
- Date: Thu, 1 Jun 2023 16:31:36 GMT
- Title: Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains
- Authors: Matthew Dowling, Yuan Zhao, Il Memming Park
- Abstract summary: We propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate computation variational inference (CVI).
We are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods.
- Score: 7.936841911281107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Latent Gaussian process (GP) models are widely used in neuroscience to
uncover hidden state evolutions from sequential observations, mainly in neural
activity recordings. While latent GP models provide a principled and powerful
solution in theory, the intractable posterior in non-conjugate settings
necessitates approximate inference schemes, which may lack scalability. In this
work, we propose cvHM, a general inference framework for latent GP models
leveraging Hida-Matérn kernels and conjugate computation variational
inference (CVI). With cvHM, we are able to perform variational inference of
latent neural trajectories with linear time complexity for arbitrary
likelihoods. The reparameterization of stationary kernels using Hida-Matérn
GPs helps us connect the latent variable models that encode prior assumptions
through dynamical systems to those that encode trajectory assumptions through
GPs. In contrast to previous work, we use bidirectional information filtering,
leading to a more concise implementation. Furthermore, we employ the Whittle
approximate likelihood to achieve highly efficient hyperparameter learning.
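To make the abstract's two main ingredients concrete, here is a minimal sketch, not the authors' cvHM implementation: it assumes a plain Matérn-3/2 kernel (the simplest, non-oscillatory member of the Hida-Matérn family) and a Gaussian observation model in place of the spike-train likelihoods that cvHM handles through CVI; all function names are illustrative. The state-space form of the kernel is what turns GP inference into an O(T) filtering problem.

```python
import numpy as np
from scipy.linalg import expm

def matern32_ssm(lengthscale, variance, dt):
    """State-space form of the Matérn-3/2 kernel k(r) = variance*(1 + lam*r)*exp(-lam*r)."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])  # SDE drift; the state is (f, f')
    Pinf = np.diag([variance, variance * lam**2])      # stationary state covariance
    A = expm(F * dt)                                   # discrete-time transition per bin
    Q = Pinf - A @ Pinf @ A.T                          # matched process-noise covariance
    return A, Q, Pinf

def kalman_filter(y, A, Q, Pinf, obs_var):
    """Forward filtering in O(T): one constant-size update per time bin."""
    H = np.array([[1.0, 0.0]])                         # observe the first state component
    m, P = np.zeros((2, 1)), Pinf.copy()
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        m, P = A @ m, A @ P @ A.T + Q                  # predict one step ahead
        S = (H @ P @ H.T)[0, 0] + obs_var              # innovation variance
        K = P @ H.T / S                                # Kalman gain
        m = m + K * (yt - (H @ m)[0, 0])               # correct with the observation
        P = P - K @ H @ P
        means[t] = m[0, 0]
    return means
```

A smoother would add a second pass in the reverse direction; the bidirectional information filtering mentioned in the abstract is one way to organize that backward sweep. The Whittle likelihood used for hyperparameter learning swaps the exact O(T^3) Gaussian marginal likelihood for a frequency-domain approximation costing one FFT. A sketch under the same Matérn-3/2 assumption, for zero-mean, evenly sampled, noise-free observations and ignoring aliasing:

```python
def whittle_nll(y, lengthscale, variance, dt=1.0):
    """Whittle approximation to the negative log-likelihood of a stationary GP."""
    T = len(y)
    lam = np.sqrt(3.0) / lengthscale
    omega = 2.0 * np.pi * np.fft.rfftfreq(T, d=dt)[1:]      # angular frequencies, DC dropped
    S = 4.0 * variance * lam**3 / (lam**2 + omega**2) ** 2  # Matérn-3/2 spectral density
    I = dt * np.abs(np.fft.rfft(y)[1:]) ** 2 / T            # periodogram of the data
    return float(np.sum(np.log(S) + I / S))
```

Minimizing whittle_nll over (lengthscale, variance), e.g. with scipy.optimize.minimize, gives O(T log T) hyperparameter updates instead of the cubic cost of a dense kernel matrix.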
Related papers
- Thin and Deep Gaussian Processes [43.22976185646409]
This work proposes a novel synthesis of both previous approaches: Thin and Deep GP (TDGP).
We show with theoretical and experimental results that i) TDGP is tailored to specifically discover lower-dimensional manifolds in the input data, ii) TDGP behaves well when increasing the number of layers, and iii) TDGP performs well on standard benchmark datasets.
arXiv Detail & Related papers (2023-10-17T18:50:24Z)
- Data-driven Modeling and Inference for Bayesian Gaussian Process ODEs via Double Normalizing Flows [28.62579476863723]
We introduce normalizing flows to reparameterize the ODE vector field, resulting in a data-driven prior distribution.
We also apply normalizing flows to the posterior inference of GP ODEs to resolve the issue of strong mean-field assumptions.
We validate the effectiveness of our approach on simulated dynamical systems and real-world human motion data.
arXiv Detail & Related papers (2023-09-17T09:28:47Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields better estimates of pattern latency than the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Sparse Gaussian Process Variational Autoencoders [24.86751422740643]
Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points.
We develop the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations.
arXiv Detail & Related papers (2020-10-20T10:19:56Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Deep Latent-Variable Kernel Learning [25.356503463916816]
We present a complete deep latent-variable kernel learning (DLVKL) model wherein the latent variables encode a regularized representation.
Experiments imply that the DLVKL-NSDE performs similarly to the well-calibrated GP on small datasets, and outperforms existing deep GPs on large datasets.
arXiv Detail & Related papers (2020-05-18T05:55:08Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features (see the illustrative sketch after this list).
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel and the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
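As an illustration of the quadrature Fourier features behind the SLEIPNIR entry above, here is a minimal sketch, not that paper's method (which also covers derivative observations and carries the stated error bounds): it builds deterministic Fourier features for a 1D RBF kernel from Gauss-Hermite nodes, and all names are illustrative.

```python
import numpy as np

def quadrature_fourier_features(x, num_nodes=32, lengthscale=1.0, variance=1.0):
    """Deterministic features phi such that phi @ phi.T approximates the RBF Gram matrix."""
    t, w = np.polynomial.hermite.hermgauss(num_nodes)  # nodes/weights for integral of f(t)*exp(-t^2)
    omegas = np.sqrt(2.0) * t / lengthscale            # spectral frequencies of the RBF kernel
    scale = np.sqrt(variance * w / np.sqrt(np.pi))     # quadrature weights folded into the features
    z = np.outer(x, omegas)                            # (N, M) matrix of phases
    return np.hstack([scale * np.cos(z), scale * np.sin(z)])  # (N, 2M) feature matrix

x = np.linspace(0.0, 5.0, 200)
phi = quadrature_fourier_features(x)
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)       # unit-lengthscale RBF Gram matrix
print(np.max(np.abs(phi @ phi.T - K_exact)))                  # error decays rapidly with num_nodes
```

With such features, GP regression collapses to Bayesian linear regression in 2*num_nodes dimensions, so training is linear in the number of data points.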
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.