Output-Dependent Gaussian Process State-Space Model
- URL: http://arxiv.org/abs/2212.07608v1
- Date: Thu, 15 Dec 2022 04:05:39 GMT
- Title: Output-Dependent Gaussian Process State-Space Model
- Authors: Zhidi Lin, Lei Cheng, Feng Yin, Lexi Xu, Shuguang Cui
- Abstract summary: This paper proposes an output-dependent and more realistic GPSSM.
Experiments on both synthetic and real datasets demonstrate the superiority of the output-dependent GPSSM in terms of learning and inference performance.
- Score: 34.34146690947695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Gaussian process state-space model (GPSSM) is a fully probabilistic
state-space model that has attracted much attention over the past decade.
However, the outputs of the transition function in existing GPSSMs are
assumed to be independent, meaning that these models cannot exploit the inductive
biases shared between different outputs and therefore sacrifice model capacity. To address
this issue, this paper proposes an output-dependent and more realistic GPSSM by
utilizing the well-known, simple yet practical linear model of
coregionalization (LMC) framework to represent the output dependency. To
jointly learn the output-dependent GPSSM and infer the latent states, we
propose a variational sparse GP-based learning method that only gently
increases the computational complexity. Experiments on both synthetic and real
datasets demonstrate the superiority of the output-dependent GPSSM in terms of
learning and inference performance.
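Concretely, the LMC represents the multi-output transition as f(x) = W g(x), where g stacks Q independent latent GPs and the mixing matrix W induces prior correlation between the outputs. The following is a minimal NumPy sketch of sampling from such an output-dependent prior; the RBF kernels, sizes, and variable names are illustrative assumptions, not the paper's implementation.
```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2))."""
    sqdist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

rng = np.random.default_rng(0)
N, D_in, D_out, Q = 50, 2, 3, 2      # illustrative sizes: Q shared latent GPs, D_out outputs
X = rng.normal(size=(N, D_in))       # latent states x_t at which the transition is evaluated

# Draw Q independent latent GPs g_q ~ GP(0, k_q), each with its own lengthscale.
lengthscales = [0.5, 2.0]
G = np.column_stack([
    rng.multivariate_normal(np.zeros(N), rbf_kernel(X, X, ls) + 1e-6 * np.eye(N))
    for ls in lengthscales
])                                    # shape (N, Q)

# LMC mixing: every output is a linear combination of the shared latent GPs,
# so outputs i and j have prior cross-covariance sum_q W[i,q] W[j,q] k_q(x, x').
W = rng.normal(size=(D_out, Q))       # coregionalization weights
F = G @ W.T                           # output-dependent transition values, (N, D_out)
```
Because every output reuses the same Q latent draws, inference only has to reason about the shared g, which is consistent with the abstract's claim that coupling the outputs only gently increases the computational complexity.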
Related papers
- Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences [60.489682735061415]
We propose CHELA, which replaces state space models with short-long convolutions and implements linear attention in a divide-and-conquer manner.
Our experiments on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-06-12T12:12:38Z)
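As a rough illustration of the short-long convolution pairing mentioned above (a small local kernel plus a full-length kernel applied cheaply via FFT), here is a hedged NumPy sketch; the additive combination and all shapes are assumptions for illustration, not CHELA's actual layer.
```python
import numpy as np

def short_long_conv(x, k_short, k_long):
    """Combine a cheap local convolution with an O(T log T) global one."""
    T = x.shape[0]
    # Short branch: a small kernel captures local detail at O(T * len(k_short)) cost.
    short = np.convolve(x, k_short, mode="same")
    # Long branch: a full-length kernel applied via FFT, zero-padded to avoid wrap-around.
    n = 2 * T
    long_ = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k_long, n), n)[:T]
    return short + long_

T = 1024
rng = np.random.default_rng(1)
x = rng.normal(size=T)
y = short_long_conv(x, k_short=np.ones(3) / 3, k_long=np.exp(-np.arange(T) / 128.0))
```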
- Theoretical Foundations of Deep Selective State-Space Models [13.971499161967083]
Deep SSMs demonstrate outstanding performance across a diverse set of domains.
Recent developments show that the linear recurrence powering SSMs can allow for multiplicative interactions between inputs and hidden states.
We show that when random linear recurrences are equipped with simple input-controlled transitions, the hidden state is provably a low-dimensional projection of a powerful mathematical object.
arXiv Detail & Related papers (2024-02-29T11:20:16Z)
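The "simple input-controlled transitions" referred to above can be pictured as a diagonal linear recurrence whose decay is gated by the current input. The following NumPy sketch is illustrative only; the sigmoid gating form and all shapes are assumptions, not the paper's construction.
```python
import numpy as np

def selective_scan(x, W_a, W_b):
    """Diagonal linear recurrence h_t = a(x_t) * h_{t-1} + b(x_t),
    where the transition a(.) depends multiplicatively on the input."""
    T, _ = x.shape
    H = W_a.shape[0]
    h = np.zeros(H)
    hs = np.empty((T, H))
    for t in range(T):
        a_t = 1.0 / (1.0 + np.exp(-(W_a @ x[t])))   # input-controlled decay in (0, 1)
        b_t = W_b @ x[t]                             # input injection
        h = a_t * h + b_t                            # input-state multiplicative interaction
        hs[t] = h
    return hs

rng = np.random.default_rng(2)
T, D, H = 100, 4, 8
hs = selective_scan(rng.normal(size=(T, D)),
                    W_a=rng.normal(size=(H, D)),
                    W_b=rng.normal(size=(H, D)))
```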
- Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved into one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
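For intuition, the forward process of a categorical diffusion model can be as simple as mixing each discrete state with the uniform distribution at every step. The sketch below uses this common uniform-transition choice; the names, sizes, and noise rate are illustrative assumptions, not necessarily the paper's parameterization.
```python
import numpy as np

def forward_categorical_step(x, beta, K, rng):
    """One forward noising step of a uniform-transition categorical diffusion:
    with probability 1 - beta keep the category, with probability beta resample uniformly."""
    resample = rng.random(x.shape) < beta
    return np.where(resample, rng.integers(0, K, size=x.shape), x)

rng = np.random.default_rng(3)
K, T_diff = 200, 50                     # K discrete locations, T_diff noising steps
traj = rng.integers(0, K, size=32)      # a toy "trajectory" of visited location ids
x_t = traj.copy()
for t in range(T_diff):
    x_t = forward_categorical_step(x_t, beta=0.05, K=K, rng=rng)
# After enough steps, x_t is approximately uniform over the K locations.
```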
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
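Underneath VSMC sits the standard sequential Monte Carlo recursion: propagate particles through the transition, reweight by the observation likelihood, resample. The following bootstrap-filter sketch for a toy linear-Gaussian model shows that recursion; the model and all parameters are illustrative assumptions, not the paper's variational proposals.
```python
import numpy as np

def bootstrap_filter_step(particles, y, rng, a=0.9, q=0.5, r=0.3):
    """One bootstrap SMC step: propagate, reweight by likelihood, resample.
    Weights are uniform after each resampling, so only the new likelihood matters."""
    particles = a * particles + q * rng.normal(size=particles.shape)   # propose via transition
    logw = -0.5 * ((y - particles) / r) ** 2                           # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return particles[rng.choice(particles.size, size=particles.size, p=w)]

rng = np.random.default_rng(4)
particles = rng.normal(size=500)
for y in [0.1, -0.3, 0.2]:               # toy observations
    particles = bootstrap_filter_step(particles, y, rng)
```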
- Towards Efficient Modeling and Inference in Multi-Dimensional Gaussian Process State-Space Models [11.13664702335756]
We propose to integrate the efficient transformed Gaussian process (ETGP) into the GPSSM to efficiently model the transition function in high-dimensional latent state space.
We also develop a corresponding variational inference algorithm that surpasses existing methods in terms of parameter count and computational complexity.
arXiv Detail & Related papers (2023-09-03T04:34:33Z)
- Free-Form Variational Inference for Gaussian Process State-Space Models [21.644570034208506]
We propose a new method for inference in Bayesian GPSSMs.
Our method is based on free-form variational inference via inducing Hamiltonian Monte Carlo.
We show that our approach can learn transition dynamics and latent states more accurately than competing methods.
arXiv Detail & Related papers (2023-02-20T11:34:16Z)
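The Hamiltonian Monte Carlo building block used there follows the usual leapfrog-plus-Metropolis recipe, sketched below for a toy target; the step size and target are illustrative assumptions, not the paper's inducing-point scheme.
```python
import numpy as np

def hmc_step(theta, logp, grad_logp, rng, step=0.1, n_leapfrog=20):
    """One Hamiltonian Monte Carlo transition targeting exp(logp)."""
    p0 = rng.normal(size=theta.shape)              # fresh Gaussian momentum
    th, p = theta.copy(), p0 + 0.5 * step * grad_logp(theta)
    for _ in range(n_leapfrog):
        th = th + step * p                         # full position step
        p = p + step * grad_logp(th)               # full momentum step
    p = p - 0.5 * step * grad_logp(th)             # trim the last update to a half step
    # Metropolis correction keeps the sampler exact despite discretization error.
    log_a = (logp(th) - 0.5 * p @ p) - (logp(theta) - 0.5 * p0 @ p0)
    return th if np.log(rng.random()) < log_a else theta

rng = np.random.default_rng(6)
logp = lambda th: -0.5 * th @ th                   # toy target: standard normal
grad_logp = lambda th: -th
theta = np.zeros(3)
for _ in range(100):
    theta = hmc_step(theta, logp, grad_logp, rng)
```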
- Towards Flexibility and Interpretability of Gaussian Process State-Space Model [4.75409418039844]
We propose a new class of probabilistic state-space models called TGPSSMs.
TGPSSMs leverage a parametric normalizing flow to enrich the GP priors in the standard GPSSM.
We present a scalable variational inference algorithm that offers a flexible and optimal structure for the variational distribution of latent states.
arXiv Detail & Related papers (2023-01-21T01:26:26Z)
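The core TGPSSM idea, pushing a GP prior through a parametric normalizing flow to obtain a richer non-Gaussian prior, can be pictured in one dimension: draw from a GP, then apply an invertible elementwise transform. The tanh-based warp below is an illustrative assumption, not the paper's flow.
```python
import numpy as np

def invertible_warp(f, alpha=1.0, beta=2.0, gamma=0.5):
    """A simple monotone (hence invertible) elementwise flow:
    f -> alpha * f + beta * tanh(gamma * f), with alpha, beta, gamma > 0."""
    return alpha * f + beta * np.tanh(gamma * f)

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 100)[:, None]
K = np.exp(-0.5 * (x - x.T) ** 2 / 0.1 ** 2) + 1e-6 * np.eye(100)   # RBF Gram matrix
f = rng.multivariate_normal(np.zeros(100), K)   # standard GP prior draw
f_warped = invertible_warp(f)                   # flow-enriched, non-Gaussian prior draw
```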
- PAC Reinforcement Learning for Predictive State Representations [60.00237613646686]
We study online Reinforcement Learning (RL) in partially observable dynamical systems.
We focus on the Predictive State Representations (PSRs) model, which is an expressive model that captures other well-known models.
We develop a novel model-based algorithm for PSRs that can learn a near-optimal policy with sample complexity scaling polynomially in the relevant problem parameters.
arXiv Detail & Related papers (2022-07-12T17:57:17Z)
- Collaborative Nonstationary Multivariate Gaussian Process Model [2.362467745272567]
We propose a novel model called the collaborative nonstationary multivariate Gaussian process model (CNMGP).
CNMGP allows us to model data in which outputs do not share a common input set, with a computational complexity independent of the size of the inputs and outputs.
We show that our model generally provides better predictive performance than the state-of-the-art, and also provides estimates of time-varying correlations that differ across outputs.
arXiv Detail & Related papers (2021-06-01T18:25:22Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
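For context, a min-norm CLF controller solves a small convex program at every state: minimize ||u||^2 subject to the Lyapunov decrease condition. The CVXPY sketch below is the certainty-equivalent version for a toy control-affine system; the paper's GP-CLF-SOCP additionally folds GP uncertainty into a second-order cone constraint, which is omitted here, and all system parameters are illustrative assumptions.
```python
import numpy as np
import cvxpy as cp

# Toy control-affine system x_dot = f(x) + g(x) u with CLF V(x) = 0.5 * x^T x,
# so the decrease condition is grad_V . (f(x) + g(x) u) <= -lambda * V(x).
x = np.array([1.0, -0.5])
f_x = np.array([-0.2 * x[1], 0.8 * x[0]])   # illustrative drift f(x)
g_x = np.array([[0.0], [1.0]])              # illustrative input matrix g(x)
grad_V, V, lam = x, 0.5 * x @ x, 1.0

u = cp.Variable(1)
clf_constraint = grad_V @ f_x + (grad_V @ g_x) @ u <= -lam * V
problem = cp.Problem(cp.Minimize(cp.sum_squares(u)), [clf_constraint])
problem.solve()
print("min-norm stabilizing input:", u.value)
```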
This list is automatically generated from the titles and abstracts of the papers on this site.