Markovian Gaussian Process Variational Autoencoders
- URL: http://arxiv.org/abs/2207.05543v3
- Date: Wed, 16 Aug 2023 22:53:02 GMT
- Title: Markovian Gaussian Process Variational Autoencoders
- Authors: Harrison Zhu, Carles Balsells Rodas, Yingzhen Li
- Abstract summary: We leverage the equivalent discrete state space representation of Markovian GPs to enable linear time GPVAE training via Kalman filtering and smoothing.
For our model, Markovian GPVAE (MGPVAE), we show on a variety of high-dimensional temporal tasks that our method performs favourably compared to existing approaches.
- Score: 19.686719654642392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequential VAEs have been successfully considered for many high-dimensional
time series modelling problems, with many variant models relying on
discrete-time mechanisms such as recurrent neural networks (RNNs). On the other
hand, continuous-time methods have recently gained traction, especially in
the context of irregularly-sampled time series, where they can handle the
data better than discrete-time methods. One such class is Gaussian process
variational autoencoders (GPVAEs), where the VAE prior is set as a Gaussian
process (GP). However, a major limitation of GPVAEs is that they inherit the
cubic computational cost of GPs, making them unattractive to practitioners. In this
work, we leverage the equivalent discrete state space representation of
Markovian GPs to enable linear time GPVAE training via Kalman filtering and
smoothing. For our model, Markovian GPVAE (MGPVAE), we show on a variety of
high-dimensional temporal and spatiotemporal tasks that our method performs
favourably compared to existing approaches whilst being computationally highly
scalable.
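For intuition, here is a minimal sketch of the computational idea, not the authors' released code: a Matérn-1/2 (Ornstein-Uhlenbeck) GP prior has an exact one-dimensional state-space form, so the latent posterior given per-timestep Gaussian pseudo-observations (standing in here for encoder outputs) can be computed with a Kalman filter and RTS smoother in O(T) rather than O(T^3). All names and hyperparameters below are illustrative assumptions.
```python
import numpy as np

def ou_filter_smoother(t, y, r, lengthscale=1.0, var=1.0):
    """Linear-time posterior for a Matern-1/2 (OU) GP prior given Gaussian
    pseudo-observations y with noise variances r at irregular times t."""
    T = len(t)
    m_f = np.empty(T); P_f = np.empty(T)     # filtered moments
    m, P = 0.0, var                          # stationary initialisation
    for k in range(T):
        if k > 0:                            # predict across the time gap
            a = np.exp(-(t[k] - t[k - 1]) / lengthscale)
            m, P = a * m, a * a * P + var * (1.0 - a * a)
        K = P / (P + r[k])                   # Kalman gain
        m, P = m + K * (y[k] - m), (1.0 - K) * P
        m_f[k], P_f[k] = m, P
    m_s, P_s = m_f.copy(), P_f.copy()        # Rauch-Tung-Striebel smoother
    for k in range(T - 2, -1, -1):
        a = np.exp(-(t[k + 1] - t[k]) / lengthscale)
        P_pred = a * a * P_f[k] + var * (1.0 - a * a)
        G = a * P_f[k] / P_pred              # smoother gain
        m_s[k] = m_f[k] + G * (m_s[k + 1] - a * m_f[k])
        P_s[k] = P_f[k] + G * G * (P_s[k + 1] - P_pred)
    return m_s, P_s

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))     # irregularly sampled times
y = np.sin(t) + 0.3 * rng.standard_normal(200)
mean, variance = ou_filter_smoother(t, y, r=np.full(200, 0.09))
```
Each filter and smoother step touches only adjacent timestamps, which is what makes the overall cost linear in the sequence length, irrespective of the sampling pattern.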
Related papers
- Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference [55.150117654242706]
We show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU.
As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty.
arXiv Detail & Related papers (2024-11-01T21:11:48Z) - Markovian Gaussian Process: A Universal State-Space Representation for Stationary Temporal Gaussian Process [2.600709013150986]
We introduce a universal method that allows an LDS to mirror stationary temporal GPs.
This state-space representation, known as the Markovian Gaussian Process (Markovian GP), leverages the flexibility of kernel functions while maintaining efficient linear computation.
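As a hedged sketch of that kernel-to-LDS construction (the standard Särkkä-Solin-style recipe with our own variable names, not the paper's code): the Matérn-3/2 kernel corresponds to a two-dimensional linear SDE, which can be discretized exactly over arbitrary time gaps.
```python
import numpy as np
from scipy.linalg import expm

def matern32_state_space(lengthscale, variance):
    """Continuous-time state-space (SDE) form of the Matern-3/2 kernel."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0],
                  [-lam**2, -2.0 * lam]])            # SDE drift matrix
    H = np.array([[1.0, 0.0]])                       # observe the first state
    Pinf = np.diag([variance, variance * lam**2])    # stationary covariance
    return F, H, Pinf

def discretize(F, Pinf, dt):
    """Exact LDS transition over a gap dt: z' = A z + q, q ~ N(0, Q)."""
    A = expm(F * dt)
    Q = Pinf - A @ Pinf @ A.T                        # valid for stationary GPs
    return A, Q

F, H, Pinf = matern32_state_space(lengthscale=1.5, variance=2.0)
A, Q = discretize(F, Pinf, dt=0.1)
print((H @ Pinf @ H.T)[0, 0])                        # marginal variance k(0) = 2.0
```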
arXiv Detail & Related papers (2024-06-29T10:50:23Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly
Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
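A minimal sketch of the conditional-prior idea only, assuming a linear-Gaussian transition z_{t+1} ~ N(A z_t, sigma^2 I); KoVAE's actual architecture and training objective are not reproduced here. The nonlinearity lives in the VAE encoder/decoder, while the latent dynamics stay linear.
```python
import numpy as np

def linear_prior_logpdf(z, A, sigma2):
    """Log-density of a latent trajectory z (shape (T, d)) under the
    Koopman-inspired linear-Gaussian prior z_{t+1} ~ N(A z_t, sigma2 * I)."""
    resid = z[1:] - z[:-1] @ A.T              # one-step prediction residuals
    n = resid.size
    return -0.5 * (np.sum(resid**2) / sigma2 + n * np.log(2 * np.pi * sigma2))

rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)                           # stands in for a learned linear map
z = rng.standard_normal((50, 4))
print(linear_prior_logpdf(z, A, sigma2=0.1))
```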
arXiv Detail & Related papers (2023-10-04T07:14:43Z) - Revisiting Active Sets for Gaussian Process Decoders [0.0]
We develop a new estimate of the log-marginal likelihood based on recently discovered links to cross-validation.
We demonstrate that the resulting stochastic active sets (SAS) approximation significantly improves the robustness of GP decoder training.
arXiv Detail & Related papers (2022-09-10T10:49:31Z) - Shallow and Deep Nonparametric Convolutions for Gaussian Processes [0.0]
We introduce a nonparametric process convolution formulation for GPs that alleviates the weaknesses of standard parametric formulations by using a functional sampling approach.
We propose a composition of these nonparametric convolutions that serves as an alternative to classic deep GP models.
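For context, a classic (parametric) process convolution builds a GP by smoothing white noise with a fixed kernel; the paper's nonparametric variant samples the smoothing function as well. A minimal sketch of the classic construction, with illustrative grid and kernel choices:
```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-5.0, 5.0, 1000)
dx = x[1] - x[0]
dW = rng.standard_normal(x.size) * np.sqrt(dx)   # white-noise increments
g = np.exp(-0.5 * (x / 0.3) ** 2)                # fixed smoothing kernel
f = np.convolve(dW, g, mode="same")              # one GP sample path
# f(x) = \int g(x - s) dW(s): smoothing white noise with a Gaussian kernel
# yields a squared-exponential-like GP draw on the grid.
```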
arXiv Detail & Related papers (2022-06-17T19:03:04Z) - Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z) - Encoding spatiotemporal priors with VAEs for small-area estimation [2.4783465852664324]
We propose a deep generative modelling approach to tackle a novel spatiotemporal setting.
We approximate a class of prior samples through prior fitting of a variational autoencoder (VAE).
The VAE makes inference highly efficient due to its independently distributed latent Gaussian space representation.
We demonstrate the utility of our two-stage VAE approach on Bayesian small-area estimation tasks.
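A rough sketch of such a two-stage pipeline under our own simplifying assumptions (a squared-exponential prior on a 1-D grid and a small PyTorch VAE; none of this is the paper's code): stage one fits the VAE to GP prior draws, stage two reuses the frozen decoder so inference targets the cheap iid latent space.
```python
import numpy as np
import torch
import torch.nn as nn

# Stage 1 data: draws from a squared-exponential GP prior on a 1-D grid.
grid = np.linspace(0.0, 1.0, 64)
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.1**2)
L = np.linalg.cholesky(K + 1e-6 * np.eye(64))
samples = torch.tensor((L @ np.random.randn(64, 2048)).T, dtype=torch.float32)

class PriorVAE(nn.Module):
    def __init__(self, d=64, k=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 2 * k))
        self.dec = nn.Sequential(nn.Linear(k, 128), nn.ReLU(), nn.Linear(128, d))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.dec(z), mu, logvar

vae = PriorVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for step in range(2000):                     # Stage 1: fit the VAE to prior draws
    x = samples[torch.randint(0, samples.shape[0], (128,))]
    xhat, mu, logvar = vae(x)
    kl = -0.5 * torch.sum(1 + logvar - mu**2 - logvar.exp(), dim=-1).mean()
    loss = ((xhat - x) ** 2).sum(dim=-1).mean() + kl
    opt.zero_grad(); loss.backward(); opt.step()
# Stage 2: freeze vae.dec; downstream Bayesian inference works on the iid
# N(0, I) latent z, and vae.dec(z) reproduces approximate GP prior draws.
```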
arXiv Detail & Related papers (2021-10-20T08:14:15Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
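The sketch below shows only the generic multiplicative-weights (Hedge) update that the Klivans-Meka algorithm builds on, not the graphical-model estimator itself; the losses and expert setup are illustrative.
```python
import numpy as np

def hedge(losses, eta=0.5):
    """Multiplicative weights (Hedge): keep a distribution over p experts and
    exponentially down-weight those that incur loss. losses: (T, p) in [0, 1]."""
    w = np.full(losses.shape[1], 1.0 / losses.shape[1])
    for loss in losses:                      # one update per round: O(p)
        w *= np.exp(-eta * loss)             # multiplicative update
        w /= w.sum()                         # renormalise
    return w

rng = np.random.default_rng(1)
losses = rng.random((100, 10))
losses[:, 3] *= 0.2                          # expert 3 is reliably better
print(hedge(losses).argmax())                # -> 3
```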
arXiv Detail & Related papers (2020-02-20T10:50:58Z) - Sequential Gaussian Processes for Online Learning of Nonstationary
Functions [9.997259201098602]
We propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference.
Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data.
arXiv Detail & Related papers (2019-05-24T02:29:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.