Free-Form Variational Inference for Gaussian Process State-Space Models
- URL: http://arxiv.org/abs/2302.09921v2
- Date: Mon, 17 Jul 2023 00:59:31 GMT
- Title: Free-Form Variational Inference for Gaussian Process State-Space Models
- Authors: Xuhui Fan, Edwin V. Bonilla, Terence J. O'Kane, Scott A. Sisson
- Abstract summary: We propose a new method for inference in Bayesian GPSSMs.
Our method is based on free-form variational inference via stochastic gradient Hamiltonian Monte Carlo within the inducing-variable formalism.
We show that our approach can learn transition dynamics and latent states more accurately than competing methods.
- Score: 21.644570034208506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian process state-space models (GPSSMs) provide a principled and
flexible approach to modeling the dynamics of a latent state, which is observed
at discrete-time points via a likelihood model. However, inference in GPSSMs is
computationally and statistically challenging due to the large number of latent
variables in the model and the strong temporal dependencies between them. In
this paper, we propose a new method for inference in Bayesian GPSSMs, which
overcomes the drawbacks of previous approaches, namely over-simplified
assumptions, and high computational requirements. Our method is based on
free-form variational inference via stochastic gradient Hamiltonian Monte Carlo
within the inducing-variable formalism. Furthermore, by exploiting our proposed
variational distribution, we provide a collapsed extension of our method where
the inducing variables are marginalized analytically. We also showcase results
when combining our framework with particle MCMC methods. We show that, on six
real-world datasets, our approach can learn transition dynamics and latent
states more accurately than competing methods.
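For context, a GPSSM places a GP prior on the transition function of a nonlinear state-space model. A standard generative formulation reads as follows (generic notation, not copied from the paper):

```latex
\begin{align}
  f &\sim \mathcal{GP}\big(0,\, k(\cdot,\cdot)\big), \\
  x_0 &\sim p(x_0), \qquad
  x_t \mid f, x_{t-1} \sim \mathcal{N}\big(f(x_{t-1}),\, \mathbf{Q}\big), \\
  y_t \mid x_t &\sim p(y_t \mid x_t), \qquad t = 1, \dots, T.
\end{align}
```

The workhorse of the proposed free-form inference is stochastic gradient HMC. Below is a minimal sketch of the generic SGHMC update (Chen et al., 2014) on a toy Gaussian target; in the paper the target would be the posterior over inducing variables and latent states, and `grad_log_post` would be a stochastic (minibatch) gradient estimate.

```python
import numpy as np

def sghmc(grad_log_post, theta0, n_iter=5000, eps=1e-2, alpha=0.1, rng=None):
    """Practical SGHMC update: the friction term `alpha` compensates for
    the extra noise introduced by a stochastic gradient estimate."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    v = np.zeros_like(theta)
    samples = []
    for _ in range(n_iter):
        noise = np.sqrt(2.0 * alpha * eps) * rng.standard_normal(theta.shape)
        v = (1.0 - alpha) * v + eps * grad_log_post(theta) + noise
        theta = theta + v
        samples.append(theta.copy())
    return np.asarray(samples)

# toy target: standard 2-D Gaussian, so grad log p(theta) = -theta
draws = sghmc(lambda th: -th, theta0=np.zeros(2))
print(draws[1000:].mean(axis=0))  # roughly zero after burn-in
```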
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
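As a point of reference for the MCMC-based maximum-likelihood training mentioned above, the sketch below implements generic short-run Langevin dynamics for drawing approximate samples from an energy-based prior p(z) ∝ exp(-E(z)). The quadratic toy energy and step size are illustrative assumptions, not the paper's model.

```python
import numpy as np

def langevin_sample(grad_energy, z0, n_steps=50, step=1e-2, rng=None):
    """Short-run Langevin dynamics targeting p(z) proportional to exp(-E(z)):
    z <- z - (step/2) * dE/dz + sqrt(step) * noise."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(n_steps):
        z = z - 0.5 * step * grad_energy(z) \
            + np.sqrt(step) * rng.standard_normal(z.shape)
    return z

# toy energy E(z) = ||z||^2 / 2, i.e. a standard Gaussian prior
z = langevin_sample(lambda z: z, z0=np.zeros(4))
```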
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Fully Bayesian Differential Gaussian Processes through Stochastic Differential Equations [7.439555720106548]
We propose a fully Bayesian approach that treats the kernel hyperparameters as random variables and constructs coupled stochastic differential equations (SDEs) to learn their posterior distribution and that of the inducing points.
Our approach provides a time-varying, comprehensive, and realistic posterior approximation through coupling variables using SDE methods.
Our work opens up exciting research avenues for advancing Bayesian inference and offers a powerful modeling tool for continuous-time Gaussian processes.
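For readers unfamiliar with simulating such SDEs, a minimal Euler-Maruyama sketch follows; the scheme is standard, while the Ornstein-Uhlenbeck drift on a single log-lengthscale is an illustrative assumption standing in for the paper's coupled SDEs.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, T=1.0, n_steps=200, rng=None):
    """Euler-Maruyama discretisation of dX = drift(X,t) dt + diffusion(X,t) dW."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    path = [x.copy()]
    for k in range(n_steps):
        t = k * dt
        dW = np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + drift(x, t) * dt + diffusion(x, t) * dW
        path.append(x.copy())
    return np.asarray(path)

# toy: OU dynamics on a log-lengthscale, a stand-in for the paper's SDEs
path = euler_maruyama(lambda x, t: -x, lambda x, t: 0.3, x0=np.array([1.0]))
```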
arXiv Detail & Related papers (2024-08-12T11:41:07Z)
- Linear Noise Approximation Assisted Bayesian Inference on Mechanistic Model of Partially Observed Stochastic Reaction Network [2.325005809983534]
This paper develops an efficient Bayesian inference approach for partially observed enzymatic reaction networks, a class of stochastic reaction networks (SRNs).
An interpretable linear noise approximation (LNA) metamodel is proposed to approximate the likelihood of observations.
An efficient posterior sampling approach is developed by utilizing the gradients of the derived likelihood to speed up the convergence of Markov chain Monte Carlo.
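One standard way to exploit likelihood gradients in MCMC, as the entry above describes, is the Metropolis-adjusted Langevin algorithm (MALA). The sketch below runs MALA on a toy Gaussian target; the paper's LNA-based likelihood and exact sampler are not reproduced here.

```python
import numpy as np

def mala_step(x, log_post, grad_log_post, step, rng):
    """One MALA step: a gradient-informed Gaussian proposal followed by a
    Metropolis-Hastings accept/reject correction."""
    mean_fwd = x + 0.5 * step * grad_log_post(x)
    prop = mean_fwd + np.sqrt(step) * rng.standard_normal(x.shape)
    mean_bwd = prop + 0.5 * step * grad_log_post(prop)
    log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2.0 * step)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2.0 * step)
    log_accept = log_post(prop) - log_post(x) + log_q_bwd - log_q_fwd
    return prop if np.log(rng.uniform()) < log_accept else x

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):  # toy target: standard Gaussian
    x = mala_step(x, lambda z: -0.5 * z @ z, lambda z: -z, step=0.5, rng=rng)
```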
arXiv Detail & Related papers (2024-05-05T01:54:21Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
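VSMC wraps a particle filter inside a variational objective. The sketch below is a plain bootstrap particle filter on a toy linear-Gaussian model, showing the propagate/weight/resample loop and the log marginal likelihood estimate whose lower bound VSMC optimises; the model and noise scales are illustrative assumptions.

```python
import numpy as np

def bootstrap_pf(y, transition, log_lik, n_particles=500, rng=None):
    """Bootstrap particle filter returning filtering means and a consistent
    estimate of the log marginal likelihood."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(n_particles)  # crude draw for the initial state
    means, log_Z = [], 0.0
    for y_t in y:
        x = transition(x, rng)                      # propagate
        logw = log_lik(y_t, x)                      # weight
        m = logw.max()
        log_Z += m + np.log(np.mean(np.exp(logw - m)))
        w = np.exp(logw - m)
        w /= w.sum()
        means.append(np.sum(w * x))
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.asarray(means), log_Z

# toy linear-Gaussian SSM: x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise
rng = np.random.default_rng(1)
y = 0.1 * np.cumsum(rng.standard_normal(50))
means, log_Z = bootstrap_pf(
    y,
    transition=lambda x, r: 0.9 * x + 0.1 * r.standard_normal(x.shape),
    log_lik=lambda y_t, x: -0.5 * (y_t - x) ** 2 / 0.01,
)
```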
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Distributionally Robust Model-based Reinforcement Learning with Large State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment time.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
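A minimal illustration of the variance-driven acquisition behind maximum variance reduction: fit a GP to observed transitions and query the input where the posterior predictive variance is largest. The RBF kernel, noise level, and 1-D inputs are simplifying assumptions; the paper handles multi-output dynamics.

```python
import numpy as np

def gp_posterior_var(X_train, X_cand, kernel, noise=1e-2):
    """Posterior predictive variance of a GP at candidate inputs."""
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_cross = kernel(X_train, X_cand)
    prior_var = np.diag(kernel(X_cand, X_cand))
    return prior_var - np.sum(K_cross * np.linalg.solve(K, K_cross), axis=0)

rbf = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)
X_train = np.array([0.0, 1.0, 2.5])   # inputs already visited
X_cand = np.linspace(0.0, 3.0, 61)    # candidate queries
var = gp_posterior_var(X_train, X_cand, rbf)
x_next = X_cand[np.argmax(var)]       # query the most uncertain input next
```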
arXiv Detail & Related papers (2023-09-05T13:42:11Z)
- Towards Efficient Modeling and Inference in Multi-Dimensional Gaussian Process State-Space Models [11.13664702335756]
We propose to integrate the efficient transformed Gaussian process (ETGP) into the GPSSM to efficiently model the transition function in high-dimensional latent state space.
We also develop a corresponding variational inference algorithm that improves on existing methods in both parameter count and computational complexity.
arXiv Detail & Related papers (2023-09-03T04:34:33Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which admits both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
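In its simplest bandit-style form, the UCB principle mentioned above adds an exploration bonus to each value estimate and acts greedily; a minimal sketch follows. The count-based bonus is a generic stand-in: in the paper the bonus would be derived from kernel embeddings of the latent variable model.

```python
import numpy as np

def ucb_action(q_hat, counts, t, beta=1.0):
    """Optimism in the face of uncertainty: pick the action whose value
    estimate plus exploration bonus is largest."""
    bonus = beta * np.sqrt(np.log(t + 1) / np.maximum(counts, 1))
    return int(np.argmax(q_hat + bonus))

# toy: 4 actions with value estimates and visit counts at step t = 22
a = ucb_action(np.array([0.2, 0.5, 0.4, 0.1]), np.array([10, 3, 8, 1]), t=22)
```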
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
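The simplest concrete instance of likelihood-free inference with a simulator-only observation model is rejection ABC, sketched below for a single latent state. The toy simulator and distance threshold are illustrative assumptions; the paper's LFI procedure is considerably more sample-efficient.

```python
import numpy as np

def abc_state_posterior(y_obs, simulate_obs, prior_draw,
                        n_sims=10_000, eps=0.1, rng=None):
    """Rejection ABC: keep candidate states whose simulated observation
    lands within eps of the real one; no likelihood is ever evaluated."""
    rng = np.random.default_rng() if rng is None else rng
    accepted = []
    for _ in range(n_sims):
        x = prior_draw(rng)
        if abs(simulate_obs(x, rng) - y_obs) < eps:
            accepted.append(x)
    return np.asarray(accepted)

# toy simulator y = x + noise; infer x from the observation y_obs = 1.3
post = abc_state_posterior(
    1.3,
    simulate_obs=lambda x, r: x + 0.1 * r.standard_normal(),
    prior_draw=lambda r: r.normal(0.0, 1.0),
)
```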
arXiv Detail & Related papers (2021-11-02T12:33:42Z)
- Variational Inference for Continuous-Time Switching Dynamical Systems [29.984955043675157]
We present a model based on a Markov jump process modulating a subordinated diffusion process.
We develop a new continuous-time variational inference algorithm.
We extensively evaluate our algorithm under the model assumption and for real-world examples.
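A minimal forward simulator for this model class, to make the construction concrete: a continuous-time Markov jump process switches the drift regime of a diffusion. The two-regime generator matrix and OU-style drifts are toy assumptions.

```python
import numpy as np

def switching_diffusion(Q, drifts, sigma, x0=0.0, T=5.0, dt=1e-3, rng=None):
    """Simulate a diffusion whose drift is modulated by a Markov jump
    process with generator matrix Q (off-diagonal rates, rows sum to 0)."""
    rng = np.random.default_rng() if rng is None else rng
    z, x, t = 0, x0, 0.0
    next_jump = rng.exponential(1.0 / -Q[z, z])
    path = []
    while t < T:
        if t >= next_jump:  # jump: new regime proportional to exit rates
            rates = np.maximum(Q[z], 0.0)
            z = rng.choice(len(rates), p=rates / rates.sum())
            next_jump = t + rng.exponential(1.0 / -Q[z, z])
        x += drifts[z](x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        path.append((t, z, x))
    return path

Q = np.array([[-0.5, 0.5], [1.0, -1.0]])  # two regimes
path = switching_diffusion(Q, [lambda x: -x, lambda x: 2.0 - x], sigma=0.3)
```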
arXiv Detail & Related papers (2021-09-29T15:19:51Z)
- Self-Supervised Hybrid Inference in State-Space Models [0.0]
We perform approximate inference in state-space models that allow for nonlinear higher-order Markov chains in latent space.
We do not rely on an additional parameterization of the generative model or supervision via uncorrupted observations or ground truth latent states.
We obtain competitive results on the chaotic Lorenz system compared to a fully supervised approach and outperform a method based on variational inference.
arXiv Detail & Related papers (2021-07-28T13:26:14Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)