Linear-time inference for Gaussian Processes on one dimension
- URL: http://arxiv.org/abs/2003.05554v5
- Date: Tue, 12 Oct 2021 18:04:14 GMT
- Title: Linear-time inference for Gaussian Processes on one dimension
- Authors: Jackson Loper, David Blei, John P. Cunningham, and Liam Paninski
- Abstract summary: We investigate data sampled on one dimension, for which state-space models are popular due to their linearly-scaling computational costs.
We provide the first general proof of the conjecture that state-space models are general, able to approximate any one-dimensional Gaussian Process.
We develop parallelized algorithms for performing inference and learning in the LEG model, test the algorithm on real and synthetic data, and demonstrate scaling to datasets with billions of samples.
- Score: 17.77516394591124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian Processes (GPs) provide powerful probabilistic frameworks for
interpolation, forecasting, and smoothing, but have been hampered by
computational scaling issues. Here we investigate data sampled on one dimension
(e.g., a scalar or vector time series sampled at arbitrarily-spaced intervals),
for which state-space models are popular due to their linearly-scaling
computational costs. It has long been conjectured that state-space models are
general, able to approximate any one-dimensional GP. We provide the first
general proof of this conjecture, showing that any stationary GP on one
dimension with vector-valued observations governed by a Lebesgue-integrable
continuous kernel can be approximated to any desired precision using a
specifically-chosen state-space model: the Latent Exponentially Generated (LEG)
family. This new family offers several advantages compared to the general
state-space model: it is always stable (no unbounded growth), the covariance
can be computed in closed form, and its parameter space is unconstrained
(allowing straightforward estimation via gradient descent). The theorem's proof
also draws connections to Spectral Mixture Kernels, providing insight about
this popular family of kernels. We develop parallelized algorithms for
performing inference and learning in the LEG model, test the algorithm on real
and synthetic data, and demonstrate scaling to datasets with billions of
samples.
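The abstract's three selling points for the LEG family (always stable, closed-form covariance, unconstrained parameters) can be made concrete in a few lines. Below is a minimal sketch, assuming the parametrization reported in the paper, in which G = N Nᵀ + R − Rᵀ for unconstrained real matrices N and R and the observation cross-covariance at lag τ is B exp(−G|τ|/2) Bᵀ; all names and shapes here are illustrative, not the authors' reference implementation.

```python
import numpy as np
from scipy.linalg import expm

def leg_kernel(tau, N, R, B):
    """Closed-form LEG cross-covariance at lag tau (a sketch).

    Assumed parametrization: G = N N^T + R - R^T. Its symmetric part
    is N N^T >= 0, so the latent SDE dz = -(G/2) z dt + N dW is stable
    (no unbounded growth) for *any* real matrices N and R, which is
    what makes the parameter space unconstrained.
    """
    G = N @ N.T + R - R.T
    C = B @ expm(-0.5 * abs(tau) * G) @ B.T
    return C if tau >= 0 else C.T   # C(-tau) = C(tau)^T

# Any random draw of (N, R, B) already defines a valid, stable kernel.
rng = np.random.default_rng(0)
ell, n = 3, 2                        # latent / observed dimensions
N, R = rng.normal(size=(ell, ell)), rng.normal(size=(ell, ell))
B = rng.normal(size=(n, ell))
print(leg_kernel(1.5, N, R, B))      # one matrix exponential, closed form
```

The unconstrained parameter space is visible at the bottom: a random draw of (N, R, B) needs no positivity or stability projections, so learning can proceed by plain gradient descent on the raw matrix entries.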
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Implicit Manifold Gaussian Process Regression [49.0787777751317]
Gaussian process regression is widely used to provide well-calibrated uncertainty estimates.
It struggles with high-dimensional data, however, which often actually lies on an implicit low-dimensional manifold.
In this paper we propose a technique capable of inferring implicit structure directly from data (labeled and unlabeled) in a fully differentiable way.
arXiv Detail & Related papers (2023-10-30T09:52:48Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
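The probabilistic-unrolling entry above names two ingredients: Monte Carlo sampling and iterative linear solvers in place of explicit matrix inversion. The snippet below is a generic sketch of that combination, not the authors' algorithm: a Hutchinson estimator of tr(K⁻¹ ∂K/∂θ), the inverse-dependent term in maximum-likelihood gradients for latent Gaussian models, with each solve against K done by conjugate gradients so that K is touched only through matrix-vector products. The toy kernel and derivative in the demo are illustrative stand-ins.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def hutchinson_trace_inv(K_matvec, dK_matvec, n, num_probes=20, seed=0):
    """Estimate tr(K^{-1} dK) without ever forming K^{-1}.

    Rademacher probes z (with E[z z^T] = I) give the Monte Carlo
    identity tr(K^{-1} dK) = E[z^T K^{-1} dK z]; each K^{-1} v is
    computed by conjugate gradients, so K is only accessed via matvecs.
    """
    rng = np.random.default_rng(seed)
    K_op = LinearOperator((n, n), matvec=K_matvec)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        u, info = cg(K_op, dK_matvec(z))      # u = K^{-1} dK z, inverse-free
        total += z @ u
    return total / num_probes

# Small illustrative check against the exact trace (toy kernel, toy dK).
n = 200
t = np.linspace(0.0, 1.0, n)
d2 = np.subtract.outer(t, t) ** 2
K = np.exp(-0.5 * d2 / 0.1**2) + 1e-2 * np.eye(n)  # PSD kernel + jitter
dK = d2 * K                                        # stand-in derivative
est = hutchinson_trace_inv(lambda v: K @ v, lambda v: dK @ v, n)
print(est, np.trace(np.linalg.solve(K, dK)))       # rough agreement
```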
- Random Smoothing Regularization in Kernel Gradient Descent Learning [24.383121157277007]
We present a framework for random smoothing regularization that can adaptively learn a wide range of ground truth functions belonging to the classical Sobolev spaces.
Our estimator can adapt to the structural assumptions of the underlying data and avoid the curse of dimensionality.
arXiv Detail & Related papers (2023-05-05T13:37:34Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Intrinsic Gaussian Process on Unknown Manifolds with Probabilistic Metrics [5.582101184758529]
This article presents a novel approach to constructing Intrinsic Gaussian Processes for regression on unknown manifolds with probabilistic metrics in point clouds.
The geometry of a manifold is, in general, different from the usual Euclidean geometry.
The applications of GPUM are illustrated in simulation studies on the Swiss roll, high-dimensional real datasets of WiFi signals, and image data examples.
arXiv Detail & Related papers (2023-01-16T17:42:40Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point process inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to improved estimation of pattern latency compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Gaussian Process Subspace Regression for Model Reduction [7.41244589428771]
Subspace-valued functions arise in a wide range of problems, including parametric reduced order modeling (PROM).
In PROM, each parameter point can be associated with a subspace, which is used for Petrov-Galerkin projections of large system matrices.
We propose a novel Bayesian nonparametric model for subspace prediction: the Gaussian Process Subspace regression (GPS) model.
arXiv Detail & Related papers (2021-07-09T20:41:23Z)
- Combining Pseudo-Point and State Space Approximations for Sum-Separable Gaussian Processes [48.64129867897491]
We show that there is a simple and elegant way to combine pseudo-point methods with the state space GP approximation framework to get the best of both worlds.
We demonstrate that the combined approach is more scalable and applicable to a greater range of spatio-temporal problems than either method on its own.
arXiv Detail & Related papers (2021-06-18T16:30:09Z)
- Sparse Algorithms for Markovian Gaussian Processes [18.999495374836584]
Sparse Markovian processes combine the use of inducing variables with efficient Kalman filter-like recursions.
We derive a general site-based approach to approximate the non-Gaussian likelihood with local Gaussian terms, called sites.
Our approach results in a suite of novel sparse extensions to algorithms from both the machine learning and signal processing literatures, including variational inference, expectation propagation, and the classical nonlinear Kalman smoothers.
The derived methods are suited to spatio-temporal data, where the model has separate inducing points in both time and space.
arXiv Detail & Related papers (2021-03-19T09:50:53Z)
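A common thread running from the LEG paper above through these last entries is the Kalman-style recursion that makes state-space GP inference linear in the number of samples. As a hypothetical illustration, simplified and not code from any of the papers listed, here is that recursion for a LEG-style model: each arbitrarily spaced gap between samples enters through a single matrix exponential, and one forward sweep yields the marginal log-likelihood at O(T) cost.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import multivariate_normal

def kalman_loglik(ts, xs, N, R, B, Lam):
    """O(T) marginal log-likelihood via Kalman recursions (a sketch).

    Assumed LEG-style state-space form: latent OU process with
    G = N N^T + R - R^T and stationary covariance I, observed as
    x_k = B z_k + Lam eps_k at arbitrarily spaced times ts.
    """
    G = N @ N.T + R - R.T
    ell = G.shape[0]
    m, P = np.zeros(ell), np.eye(ell)              # stationary prior
    obs_noise = Lam @ Lam.T
    loglik, t_prev = 0.0, ts[0]
    for t, x in zip(ts, xs):
        A = expm(-0.5 * (t - t_prev) * G)          # transition over the gap
        m, P = A @ m, A @ P @ A.T + np.eye(ell) - A @ A.T   # predict
        S = B @ P @ B.T + obs_noise                # innovation covariance
        loglik += multivariate_normal.logpdf(x, mean=B @ m, cov=S)
        K = np.linalg.solve(S, B @ P).T            # Kalman gain P B^T S^{-1}
        m, P = m + K @ (x - B @ m), P - K @ B @ P  # measurement update
        t_prev = t
    return loglik

# Illustrative usage on irregularly sampled, 2-dimensional observations.
rng = np.random.default_rng(1)
T, ell, n = 1000, 3, 2
ts = np.cumsum(rng.exponential(0.1, size=T))       # arbitrary spacings
xs = rng.normal(size=(T, n))                       # placeholder data
N, R = rng.normal(size=(ell, ell)), rng.normal(size=(ell, ell))
B, Lam = rng.normal(size=(n, ell)), 0.5 * np.eye(n)
print(kalman_loglik(ts, xs, N, R, B, Lam))
```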