Variational sparse inverse Cholesky approximation for latent Gaussian
processes via double Kullback-Leibler minimization
- URL: http://arxiv.org/abs/2301.13303v2
- Date: Fri, 26 May 2023 19:48:13 GMT
- Title: Variational sparse inverse Cholesky approximation for latent Gaussian
processes via double Kullback-Leibler minimization
- Authors: Jian Cao, Myeongjong Kang, Felix Jimenez, Huiyan Sang, Florian
Schafer, Matthias Katzfuss
- Abstract summary: We combine a variational approximation of the posterior with a similar and efficient SIC-restricted Kullback-Leibler-optimal approximation of the prior.
For this setting, our variational approximation can be computed via stochastic gradient descent in polylogarithmic time per iteration.
We provide numerical comparisons showing that the proposed double-Kullback-Leibler-optimal Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate for stationary kernels than alternative approaches.
- Score: 6.012173616364571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To achieve scalable and accurate inference for latent Gaussian processes, we
propose a variational approximation based on a family of Gaussian distributions
whose covariance matrices have sparse inverse Cholesky (SIC) factors. We
combine this variational approximation of the posterior with a similar and
efficient SIC-restricted Kullback-Leibler-optimal approximation of the prior.
We then focus on a particular SIC ordering and nearest-neighbor-based sparsity
pattern resulting in highly accurate prior and posterior approximations. For
this setting, our variational approximation can be computed via stochastic
gradient descent in polylogarithmic time per iteration. We provide numerical
comparisons showing that the proposed double-Kullback-Leibler-optimal
Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate
for stationary kernels than alternative approaches such as inducing-point and
mean-field approximations at similar computational complexity.
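To make the SIC construction concrete, below is a minimal sketch (not the authors' implementation) of the prior-side ingredient: a Vecchia-style sparse inverse Cholesky factor U under a nearest-neighbor sparsity pattern, so that the prior precision K^{-1} is approximated by U U^T. The exponential kernel, left-to-right ordering, and m = 5 neighbors are illustrative assumptions; the SIC-parameterized variational posterior, which DKLGP additionally optimizes by stochastic gradient descent, is not shown.

```python
import numpy as np

# Hedged sketch (not the authors' code): a Vecchia-style sparse inverse
# Cholesky (SIC) factor U with a nearest-neighbor sparsity pattern, so that
# the prior precision K^{-1} is approximated by U @ U.T. Kernel, ordering,
# and the number of neighbors m are illustrative choices.

def expo_kernel(X, Y, ls=0.3):
    return np.exp(-np.abs(X[:, None] - Y[None, :]) / ls)

def sic_factor(X, m=5, kernel=expo_kernel):
    n = len(X)
    order = np.argsort(X)              # illustrative ordering for 1-D inputs
    Xo = X[order]
    U = np.zeros((n, n))
    for i in range(n):
        prev = np.arange(i)
        nn = prev[np.argsort(np.abs(Xo[prev] - Xo[i]))[:m]]   # nearest earlier points
        Kcc = kernel(Xo[nn], Xo[nn])
        kci = kernel(Xo[nn], Xo[[i]])[:, 0]
        b = np.linalg.solve(Kcc, kci) if len(nn) else kci
        d = kernel(Xo[[i]], Xo[[i]])[0, 0] - kci @ b          # conditional variance
        U[i, i] = 1.0 / np.sqrt(d)                            # column i of the SIC factor
        U[nn, i] = -b / np.sqrt(d)
    return U, order

rng = np.random.default_rng(1)
X = rng.uniform(size=200)
U, order = sic_factor(X)
K = expo_kernel(X[order], X[order])
# The exponential kernel in ordered 1-D is Markov, so this Vecchia-type
# approximation is exact up to numerical error:
print(np.max(np.abs(np.linalg.inv(K) - U @ U.T)))
```

Each column of U only involves an m-by-m solve over its neighbor set, which is the source of the scalability of SIC-based approximations.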
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
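To make the dual viewpoint concrete, here is a hedged sketch of randomized-block gradient steps on the GP-regression dual objective; it is a simplification, not the paper's stochastic dual descent recipe, and the kernel, step size, and batch size are illustrative.

```python
import numpy as np

# Hedged sketch, not the paper's exact algorithm: stochastic (randomized-block)
# gradient descent on the GP-regression dual objective
#     L(a) = 0.5 * a^T (K + s2*I) a - y^T a,
# whose minimizer a* solves (K + s2*I) a = y and gives the posterior mean
# m(x) = k(x, X) @ a*. Step sizes, momentum, and averaging -- the paper's
# actual design choices -- are not reproduced here.

def rbf(X, Y, ls=1.0):
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
n, s2, batch, step = 500, 0.1, 64, 1.0 / 65
X = rng.uniform(-3, 3, size=n)
y = np.sin(X) + np.sqrt(s2) * rng.normal(size=n)
A = rbf(X, X) + s2 * np.eye(n)

def dual_obj(a):
    return 0.5 * a @ (A @ a) - y @ a

alpha = np.zeros(n)
for _ in range(5000):
    S = rng.choice(n, size=batch, replace=False)  # random coordinate block
    g = A[S] @ alpha - y[S]                       # that block of the full gradient
    alpha[S] -= step * g                          # cheap O(n * batch) update

# The dual objective decreases monotonically toward the value at the exact solve:
print(dual_obj(alpha), dual_obj(np.linalg.solve(A, y)))
```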
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Iterative Methods for Vecchia-Laplace Approximations for Latent Gaussian Process Models [11.141688859736805]
We introduce and analyze several preconditioners, derive new convergence results, and propose novel methods for accurately approximating predictive variances.
In particular, we obtain a speed-up of an order of magnitude compared to Cholesky-based calculations.
All methods are implemented in a free C++ software library with high-level Python and R packages.
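As a hedged illustration of the kind of building block involved, the sketch below runs a preconditioned conjugate-gradient solve of a sparse symmetric positive-definite system, mimicking a Vecchia-type precision plus a diagonal Laplace-approximation Hessian; the tridiagonal "precision" and the simple Jacobi preconditioner are stand-in assumptions, not the preconditioners developed in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Hedged sketch: preconditioned conjugate gradients for a sparse SPD system
# (B + W) x = r, with B playing the role of a Vecchia-type precision and W a
# diagonal Laplace-approximation Hessian. The Jacobi (diagonal) preconditioner
# is a deliberately simple stand-in.

rng = np.random.default_rng(0)
n = 2000
B = sp.diags([-0.45, 1.0, -0.45], [-1, 0, 1], shape=(n, n), format="csr")  # sparse SPD
W = sp.diags(rng.uniform(0.5, 2.0, size=n))                                # diagonal
A = (B + W).tocsr()
r = rng.normal(size=n)

M = sp.diags(1.0 / A.diagonal())          # Jacobi preconditioner (approximates A^{-1})
x, info = cg(A, r, M=M, maxiter=1000)
print("converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - r))
```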
arXiv Detail & Related papers (2023-10-18T14:31:16Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
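One way to see the idea, as a hedged sketch only: after a few iterations of a Krylov-style linear solver, the inverse of the (regularized) kernel matrix is known only on the explored subspace, and plugging that partial inverse into the GP equations inflates the posterior variance by exactly the unexplored part. The random solver actions and the single test point below are illustrative assumptions; the paper's estimator and action policies are richer than this.

```python
import numpy as np

# Hedged illustration of combined (mathematical + computational) uncertainty,
# assuming generic solver actions S. After i actions on Khat = K + s2*I, the
# solver "knows" C_i = S (S^T Khat S)^{-1} S^T, which satisfies C_i <= Khat^{-1};
# using C_i in place of Khat^{-1} widens the predictive variance.

def rbf(X, Y, ls=0.3):
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
n, s2 = 300, 0.05
X = rng.uniform(0, 1, size=n)
Khat = rbf(X, X) + s2 * np.eye(n)
xs = np.array([0.5])
ks = rbf(xs, X)                                   # 1 x n cross-covariance

def combined_var(i):
    S = rng.normal(size=(n, i))                   # i illustrative solver actions
    C = S @ np.linalg.solve(S.T @ Khat @ S, S.T)  # low-rank approximation of Khat^{-1}
    return (rbf(xs, xs) - ks @ C @ ks.T).item()

for i in [5, 50, 300]:                            # i = n recovers the exact posterior
    print(i, combined_var(i))
print("exact posterior variance:",
      (rbf(xs, xs) - ks @ np.linalg.solve(Khat, ks.T)).item())
```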
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- How Good are Low-Rank Approximations in Gaussian Process Regression? [28.392890577684657]
We provide guarantees for approximate Gaussian Process (GP) regression resulting from two common low-rank kernel approximations.
We provide experiments on both simulated data and standard benchmarks to evaluate the effectiveness of our theoretical bounds.
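For concreteness, here is a hedged sketch of one common low-rank kernel approximation, the Nystrom construction from m landmark inputs; the paper's guarantees concern how such approximations propagate into GP regression, which this toy example does not reproduce.

```python
import numpy as np

# Hedged sketch of the Nystrom low-rank kernel approximation
# K ~= K_nm Kmm^{-1} K_mn built from m landmark inputs; kernel, data, and m
# are illustrative.

def rbf(X, Y, ls=0.5):
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=300)
K = rbf(X, X)

m = 30
Z = X[rng.choice(len(X), size=m, replace=False)]   # landmark (inducing) inputs
Knm = rbf(X, Z)
Kmm = rbf(Z, Z) + 1e-8 * np.eye(m)                 # jitter for numerical stability
K_nystrom = Knm @ np.linalg.solve(Kmm, Knm.T)

print("relative Frobenius error:",
      np.linalg.norm(K - K_nystrom) / np.linalg.norm(K))
```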
arXiv Detail & Related papers (2021-12-13T04:04:08Z)
- Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond [63.59034509960994]
We study shuffling-based variants: minibatch and local Random Reshuffling, which draw gradients without replacement.
For smooth functions satisfying the Polyak-Lojasiewicz condition, we obtain convergence bounds which show that these shuffling-based variants converge faster than their with-replacement counterparts.
We propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings.
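The basic contrast being analyzed can be sketched as follows; this is a toy least-squares example, not the paper's minibatch or local (distributed) setting, and the constants are illustrative.

```python
import numpy as np

# Hedged sketch: plain SGD with i.i.d. with-replacement sampling versus Random
# Reshuffling, which draws a fresh permutation each epoch so every example is
# used exactly once per pass over the data.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def grad_i(x, i):                        # gradient of 0.5*(A[i] @ x - b[i])^2
    return (A[i] @ x - b[i]) * A[i]

def run(sampler, epochs=50, lr=0.01):
    x = np.zeros(d)
    for _ in range(epochs):
        for i in sampler():
            x -= lr * grad_i(x, i)
    return np.linalg.norm(x - x_true)    # distance to the minimizer

with_replacement = lambda: rng.integers(0, n, size=n)   # i.i.d. indices
random_reshuffling = lambda: rng.permutation(n)         # without replacement

print("with replacement  :", run(with_replacement))
print("random reshuffling:", run(random_reshuffling))
```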
arXiv Detail & Related papers (2021-10-20T02:25:25Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
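The sketch below illustrates the core mechanism in a heavily simplified form: annealed importance sampling with unadjusted Langevin transitions and no Metropolis-Hastings correction, which makes the estimator a smooth function of its parameters at the cost of some bias. The 1-D Gaussian prior and target and the linear temperature schedule are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hedged sketch of AIS with *unadjusted* Langevin transitions (no
# Metropolis-Hastings accept/reject step), estimating the ratio of
# normalizing constants between a simple prior and target.

def log_prior(x):                    # N(0, 1), unnormalized
    return -0.5 * x**2

def log_target(x):                   # N(2, 0.5^2), unnormalized
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def log_gamma(x, b):                 # geometric annealing path
    return (1 - b) * log_prior(x) + b * log_target(x)

def grad_log_gamma(x, b):
    return (1 - b) * (-x) + b * (-(x - 2.0) / 0.25)

def ais(n_particles=2000, n_steps=100, step=0.02, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.normal(size=n_particles)             # exact draws from the prior
    log_w = np.zeros(n_particles)
    for t in range(1, n_steps + 1):
        # weight increment from moving one rung up the temperature ladder
        log_w += log_gamma(x, betas[t]) - log_gamma(x, betas[t - 1])
        # unadjusted Langevin move targeting gamma_t (differentiable, slightly biased)
        x = (x + step * grad_log_gamma(x, betas[t])
             + np.sqrt(2 * step) * rng.normal(size=n_particles))
    return np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Estimates log(Z_target / Z_prior) = log(0.5) here, up to Monte Carlo and
# discretization error:
print(ais(), np.log(0.5))
```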
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- How Good are Low-Rank Approximations in Gaussian Process Regression? [24.09582049403961]
We provide guarantees for approximate Gaussian Process (GP) regression resulting from two common low-rank kernel approximations.
We provide experiments on both simulated data and standard benchmarks to evaluate the effectiveness of our theoretical bounds.
arXiv Detail & Related papers (2020-04-03T14:15:10Z)
- Sparse Orthogonal Variational Inference for Gaussian Processes [34.476453597078894]
We introduce a new interpretation of sparse variational approximations for Gaussian processes using inducing points.
We show that this formulation recovers existing approximations and at the same time allows to obtain tighter lower bounds on the marginal likelihood and new variational inference algorithms.
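For reference, here is a hedged sketch of the standard collapsed inducing-point variational GP predictive equations (the Titsias-style construction that this line of work reinterprets and tightens); the paper's orthogonal decomposition is not reproduced, and the inducing inputs are simply fixed on a grid rather than optimized.

```python
import numpy as np

# Hedged sketch: standard inducing-point variational GP regression with m
# inducing inputs Z summarizing n >> m observations. Kernel, data, and the
# fixed inducing locations are illustrative.

def rbf(X, Y, ls=0.2):
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
n, m, s2 = 500, 25, 0.05
X = rng.uniform(0, 1, size=n)
y = np.sin(6 * X) + np.sqrt(s2) * rng.normal(size=n)
Z = np.linspace(0, 1, m)                 # inducing inputs (usually optimized)
Xs = np.linspace(0, 1, 200)              # test inputs

Kmm = rbf(Z, Z) + 1e-8 * np.eye(m)
Kmn = rbf(Z, X)
Ksm = rbf(Xs, Z)
Lam = Kmm + Kmn @ Kmn.T / s2             # K_mm + K_mn K_nm / sigma^2

mean = Ksm @ np.linalg.solve(Lam, Kmn @ y) / s2
var = (np.ones(len(Xs))                                          # k(x, x) = 1 for this kernel
       - np.einsum("ij,ji->i", Ksm, np.linalg.solve(Kmm, Ksm.T))
       + np.einsum("ij,ji->i", Ksm, np.linalg.solve(Lam, Ksm.T)))

print("max |mean - sin(6x)| on the test grid:", np.max(np.abs(mean - np.sin(6 * Xs))))
print("predictive sd range:", np.sqrt(var.min()), np.sqrt(var.max()))
```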
arXiv Detail & Related papers (2019-10-23T15:01:28Z)