Encoding spatiotemporal priors with VAEs for small-area estimation
- URL: http://arxiv.org/abs/2110.10422v1
- Date: Wed, 20 Oct 2021 08:14:15 GMT
- Title: Encoding spatiotemporal priors with VAEs for small-area estimation
- Authors: Elizaveta Semenova, Yidan Xu, Adam Howes, Theo Rashid, Samir Bhatt,
Swapnil Mishra, Seth Flaxman
- Abstract summary: We propose a deep generative modelling approach for a particular spatiotemporal setting.
We approximate a class of GP priors through prior sampling and subsequent fitting of a variational autoencoder (VAE).
The trained VAE decoder makes inference highly efficient thanks to its low-dimensional, independently distributed latent Gaussian representation.
We demonstrate the utility of our VAE two-stage approach on Bayesian small-area estimation tasks.
- Score: 2.4783465852664324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian processes (GPs), implemented through multivariate Gaussian
distributions for a finite collection of data, are the most popular approach in
small-area spatiotemporal statistical modelling. In this context they are used
to encode correlation structures over space and time and can generalise well in
interpolation tasks. Despite their flexibility, off-the-shelf GPs present
serious computational challenges which limit their scalability and practical
usefulness in applied settings. Here, we propose a novel, deep generative
modelling approach to tackle this challenge: for a particular spatiotemporal
setting, we approximate a class of GP priors through prior sampling and
subsequent fitting of a variational autoencoder (VAE). Given a trained VAE, the
resultant decoder allows spatiotemporal inference to become incredibly
efficient due to the low dimensional, independently distributed latent Gaussian
space representation of the VAE. Once trained, inference using the VAE decoder
replaces the GP within a Bayesian sampling framework. This approach provides
tractable and easy-to-implement means of approximately encoding spatiotemporal
priors and facilitates efficient statistical inference. We demonstrate the
utility of our VAE two-stage approach on Bayesian small-area estimation tasks.
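The first stage of the approach above needs draws from a GP prior to serve as VAE training data. The sketch below is a minimal, illustrative version of that step only: it assumes a 1-D grid and an RBF kernel (the paper targets richer spatiotemporal settings), and all function names and parameter values here are chosen for illustration, not taken from the paper.

```python
import numpy as np

def rbf_kernel(x, lengthscale=0.2, variance=1.0):
    """Squared-exponential (RBF) covariance matrix over 1-D locations x."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_prior(n_samples, n_locations, lengthscale=0.2, seed=0, jitter=1e-6):
    """Draw samples from a zero-mean GP prior on a regular grid.

    Each row is one prior draw; together they form the training set
    for the VAE in stage 1 of the two-stage approach.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_locations)
    # Jitter on the diagonal keeps the Cholesky factorisation stable.
    K = rbf_kernel(x, lengthscale) + jitter * np.eye(n_locations)
    L = np.linalg.cholesky(K)
    z = rng.standard_normal((n_samples, n_locations))
    return z @ L.T

samples = sample_gp_prior(n_samples=1000, n_locations=50)
print(samples.shape)  # (1000, 50)
```

A VAE fitted to such draws then gives a cheap, low-dimensional approximation to the GP prior; the decoder replaces the GP in downstream inference.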
Related papers
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at near 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains [7.936841911281107]
We propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate variational inference (CVI).
We are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods.
arXiv Detail & Related papers (2023-06-01T16:31:36Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Revisiting Active Sets for Gaussian Process Decoders [0.0]
We develop a new estimate of the log-marginal likelihood based on recently discovered links to cross-validation.
We demonstrate that the resulting stochastic active sets (SAS) approximation significantly improves the robustness of GP decoder training.
arXiv Detail & Related papers (2022-09-10T10:49:31Z)
- Markovian Gaussian Process Variational Autoencoders [19.686719654642392]
We leverage the equivalent discrete state space representation of Markovian GPs to enable linear time GPVAE training via Kalman filtering and smoothing.
For our model, Markovian GPVAE (MGPVAE), we show on a variety of high-dimensional temporal tasks that our method performs favourably compared to existing approaches.
arXiv Detail & Related papers (2022-07-12T14:10:01Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Gaussian Processes to speed up MCMC with automatic exploratory-exploitation effect [1.0742675209112622]
We present a two-stage Metropolis-Hastings algorithm for sampling probabilistic models.
The key feature of the approach is the ability to learn the target distribution from scratch while sampling.
arXiv Detail & Related papers (2021-09-28T17:43:25Z)
- Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach [38.45568741734893]
Inference in our GP model is done by a single feed forward pass through the network, significantly faster than semi-amortized methods.
We show that our approach attains higher test data likelihood than state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-02-05T13:01:12Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
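Returning to the main paper: once a decoder is trained, stage 2 runs standard Bayesian sampling over the VAE's low-dimensional latent space instead of over the full GP field. The sketch below substitutes a fixed random linear map for a trained VAE decoder purely so it runs end-to-end, and uses a random-walk Metropolis sampler on synthetic data; the decoder stand-in, noise level, and dimensions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "decoder": a fixed linear map from a low-dimensional latent
# space to the spatial field.  In the two-stage approach this would be the
# trained VAE decoder; a linear map is used here only so the sketch runs.
latent_dim, n_locations = 4, 50
W = rng.standard_normal((n_locations, latent_dim)) / np.sqrt(latent_dim)
decode = lambda z: W @ z

# Synthetic observations of the field with Gaussian noise.
z_true = rng.standard_normal(latent_dim)
y = decode(z_true) + 0.1 * rng.standard_normal(n_locations)

def log_post(z, sigma=0.1):
    # Standard-normal prior over the latent space (the VAE latent
    # distribution) plus a Gaussian observation likelihood.
    return -0.5 * z @ z - 0.5 * np.sum((y - decode(z)) ** 2) / sigma**2

# Random-walk Metropolis over the latent space: each iteration evaluates
# the decoder once, avoiding any O(n^3) GP covariance operations.
z = np.zeros(latent_dim)
lp = log_post(z)
accepted = 0
for _ in range(5000):
    prop = z + 0.1 * rng.standard_normal(latent_dim)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        z, lp = prop, lp_prop
        accepted += 1
print(accepted / 5000)
```

The point of the construction is visible in `log_post`: the prior term is an isotropic Gaussian over a handful of latents, so the sampler never touches an n-by-n covariance matrix.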
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.