Towards Efficient Modeling and Inference in Multi-Dimensional Gaussian
Process State-Space Models
- URL: http://arxiv.org/abs/2309.01074v1
- Date: Sun, 3 Sep 2023 04:34:33 GMT
- Authors: Zhidi Lin, Juan Maroñas, Ying Li, Feng Yin and Sergios Theodoridis
- Abstract summary: We propose to integrate the efficient transformed Gaussian process (ETGP) into the GPSSM to efficiently model the transition function in high-dimensional latent state space.
We also develop a corresponding variational inference algorithm that requires fewer parameters and lower computational complexity than existing methods.
- Score: 11.13664702335756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Gaussian process state-space model (GPSSM) has attracted extensive
attention for modeling complex nonlinear dynamical systems. However, the
existing GPSSM employs separate Gaussian processes (GPs) for each latent state
dimension, leading to escalating computational complexity and parameter
proliferation, thus posing challenges for modeling dynamical systems with
high-dimensional latent states. To surmount this obstacle, we propose to
integrate the efficient transformed Gaussian process (ETGP) into the GPSSM,
which involves pushing a shared GP through multiple normalizing flows to
efficiently model the transition function in high-dimensional latent state
space. Additionally, we develop a corresponding variational inference algorithm
that requires fewer parameters and lower computational complexity than existing
methods. Experimental results on diverse synthetic and real-world datasets
corroborate the efficiency of the proposed method, while also demonstrating its
ability to achieve similar inference performance compared to existing methods.
Code is available at https://github.com/zhidilin/gpssmProj.
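As a toy illustration of the mechanism the abstract describes — a single shared GP whose sample is pushed through per-dimension normalizing flows, rather than fitting one GP per state dimension — the following sketch uses a plain affine transform as a stand-in for a real flow. All names are hypothetical and this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Latent states: N points in a D-dimensional state space.
N, D = 50, 4
X = rng.normal(size=(N, D))

# One *shared* GP sample over all inputs (instead of D independent GPs).
K = rbf_kernel(X, X) + 1e-6 * np.eye(N)
f_shared = rng.multivariate_normal(np.zeros(N), K)  # shape (N,)

# Per-dimension "flows". A real normalizing flow composes several
# invertible maps; a single affine (scale, shift) transform is the
# minimal invertible example and keeps the sketch short.
flow_params = [(rng.uniform(0.5, 2.0), rng.normal()) for _ in range(D)]

def apply_flow(f, scale, shift):
    # Affine maps are trivially invertible, hence valid (if minimal) flows.
    return scale * f + shift

# Transition-function outputs for all D dimensions, obtained by pushing
# the one shared GP draw through D cheap transforms.
F = np.stack([apply_flow(f_shared, s, t) for s, t in flow_params], axis=1)
print(F.shape)  # (50, 4): D outputs from a single GP draw
```

The cost of the GP part is paid once for all D dimensions; only the cheap flow parameters grow with the state dimension, which is the efficiency argument the abstract makes.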
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC can perform both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Data-Driven Model Selections of Second-Order Particle Dynamics via Integrating Gaussian Processes with Low-Dimensional Interacting Structures [0.9821874476902972]
We focus on the data-driven discovery of a general second-order particle-based model.
We present applications to modeling two real-world fish motion datasets.
arXiv Detail & Related papers (2023-11-01T23:45:15Z)
- Subsurface Characterization using Ensemble-based Approaches with Deep Generative Models [2.184775414778289]
Inverse modeling is limited for ill-posed, high-dimensional applications due to computational costs and poor prediction accuracy with sparse datasets.
We combine the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) and the Ensemble Smoother with Multiple Data Assimilation (ES-MDA).
WGAN-GP is trained to generate high-dimensional K fields from a low-dimensional latent space and ES-MDA updates the latent variables by assimilating available measurements.
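The ES-MDA step that updates latent variables from measurements can be sketched generically. The linear `forward` map below is an illustrative stand-in for the trained WGAN-GP generator plus simulator, and all names and the inflation schedule are assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for generator + forward simulator: latent z -> predicted data d.
latent_dim, obs_dim, n_ens = 8, 5, 100
G = rng.normal(size=(obs_dim, latent_dim))
forward = lambda Z: Z @ G.T

d_obs = rng.normal(size=obs_dim)          # observed measurements
R = 0.1 * np.eye(obs_dim)                 # measurement-error covariance
alphas = [4.0, 4.0, 4.0, 4.0]             # ES-MDA inflation: sum(1/alpha) = 1

Z = rng.normal(size=(n_ens, latent_dim))  # initial latent ensemble

for alpha in alphas:
    D_pred = forward(Z)                   # ensemble predictions
    Zc = Z - Z.mean(0)
    Dc = D_pred - D_pred.mean(0)
    C_zd = Zc.T @ Dc / (n_ens - 1)        # latent/data cross-covariance
    C_dd = Dc.T @ Dc / (n_ens - 1)        # predicted-data covariance
    # Perturb observations with inflated noise, then a Kalman-like update.
    noise = rng.multivariate_normal(np.zeros(obs_dim), alpha * R, size=n_ens)
    K = C_zd @ np.linalg.inv(C_dd + alpha * R)
    Z = Z + (d_obs + noise - D_pred) @ K.T

# After assimilation the ensemble-mean prediction should sit close to d_obs.
resid = np.linalg.norm(forward(Z).mean(0) - d_obs)
print(resid)
```

Each pass is a damped smoother update; spreading the assimilation over several inflated steps is what distinguishes ES-MDA from a single ensemble-smoother update.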
arXiv Detail & Related papers (2023-10-02T01:27:10Z)
- Distributionally Robust Model-based Reinforcement Learning with Large State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
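A generic sketch of the maximum-variance idea in that summary — query the candidate state where the GP posterior over the dynamics is most uncertain — follows below. The 1-D setting and all names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel for column-vector inputs."""
    return np.exp(-0.5 * (A - B.T) ** 2 / ls**2)

# Unknown 1-D transition dynamics to be learned actively.
f = lambda x: np.sin(3 * x)

# Initial transition data, clustered on the left of the state space.
X = rng.uniform(-2.0, -1.0, size=(5, 1))
y = f(X)

noise = 1e-4
K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))

# GP posterior variance at candidate states (prior variance is 1).
cands = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
K_star = rbf(cands, X)                                   # (200, 5)
var = 1.0 - np.einsum("ij,jk,ik->i", K_star, K_inv, K_star)

# Maximum-variance acquisition: query the most uncertain state next.
x_next = cands[np.argmax(var)]
print(x_next)  # lies far from the existing data, near the right edge
```

Because the training inputs sit in [-2, -1], the posterior variance grows with distance from them, so the acquisition sends the next query toward the unexplored right end of the domain.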
arXiv Detail & Related papers (2023-09-05T13:42:11Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A multi-output Gaussian process (MOGP) prior over the parameters of the dedicated likelihoods for classification, regression, and point-process tasks facilitates sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information within a content-conditioned range, aiding the transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z)
- Free-Form Variational Inference for Gaussian Process State-Space Models [21.644570034208506]
We propose a new method for inference in Bayesian GPSSMs.
Our method is based on free-form variational inference via inducing Hamiltonian Monte Carlo.
We show that our approach can learn transition dynamics and latent states more accurately than competing methods.
arXiv Detail & Related papers (2023-02-20T11:34:16Z)
- Towards Flexibility and Interpretability of Gaussian Process State-Space Model [4.75409418039844]
We propose a new class of probabilistic state-space models called TGPSSMs.
TGPSSMs leverage a parametric normalizing flow to enrich the GP priors in the standard GPSSM.
We present a scalable variational inference algorithm that offers a flexible and optimal structure for the variational distribution of latent states.
arXiv Detail & Related papers (2023-01-21T01:26:26Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Scalable nonparametric Bayesian learning for heterogeneous and dynamic velocity fields [8.744017403796406]
We develop a model for learning heterogeneous and dynamic patterns of velocity field data.
We show the effectiveness of our techniques on the NGSIM dataset of complex multi-vehicle interactions.
arXiv Detail & Related papers (2021-02-15T17:45:46Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)