When are Iterative Gaussian Processes Reliably Accurate?
- URL: http://arxiv.org/abs/2112.15246v1
- Date: Fri, 31 Dec 2021 00:02:18 GMT
- Title: When are Iterative Gaussian Processes Reliably Accurate?
- Authors: Wesley J. Maddox, Sanyam Kapoor, Andrew Gordon Wilson
- Abstract summary: Lanczos decompositions have achieved scalable Gaussian process inference with highly accurate point predictions.
We investigate CG tolerance, preconditioner rank, and Lanczos decomposition rank.
We show that L-BFGS-B is a compelling optimizer for Iterative GPs, achieving convergence with fewer gradient updates.
- Score: 38.523693700243975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While recent work on conjugate gradient methods and Lanczos decompositions
has achieved scalable Gaussian process inference with highly accurate point
predictions, in several implementations these iterative methods appear to
struggle with numerical instabilities in learning kernel hyperparameters, and
poor test likelihoods. By investigating CG tolerance, preconditioner rank, and
Lanczos decomposition rank, we provide a particularly simple prescription to
correct these issues: we recommend that one should use a small CG tolerance
($\epsilon \leq 0.01$) and a large root decomposition size ($r \geq 5000$).
Moreover, we show that L-BFGS-B is a compelling optimizer for Iterative GPs,
achieving convergence with fewer gradient updates.
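As a concrete illustration, here is a minimal sketch of the abstract's prescription written in GPyTorch-style code. The model class, toy data, learning rate, and iteration budget are assumptions added for illustration, and `torch.optim.LBFGS` stands in for L-BFGS-B (it implements L-BFGS without box constraints).

```python
# Minimal sketch (assumed GPyTorch workflow): small CG tolerance and a large
# root decomposition size, with an L-BFGS-style optimizer for hyperparameters.
# torch.optim.LBFGS is a stand-in for L-BFGS-B; it has no box constraints.
import torch
import gpytorch


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


train_x = torch.randn(2000, 3)                        # toy data (assumption)
train_y = torch.sin(train_x.sum(-1)) + 0.1 * torch.randn(2000)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
model.train()
likelihood.train()

optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1, max_iter=50)


def closure():
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    return loss


# Small CG tolerance (<= 0.01) and a large root decomposition size (>= 5000),
# following the abstract's recommendation.
with gpytorch.settings.cg_tolerance(0.01), \
        gpytorch.settings.max_root_decomposition_size(5000):
    optimizer.step(closure)
```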
Related papers
- From Gradient Clipping to Normalization for Heavy Tailed SGD [19.369399536643773]
Recent empirical evidence indicates that machine learning applications involve heavy-tailed noise, which challenges the standard assumptions of bounded variance in practice.
In this paper, we show that tight convergence guarantees can be achieved under heavy-tailed, gradient-dependent noise.
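For context, a minimal sketch of the two update rules the title contrasts; the clipping threshold, step size, and use of a single global gradient norm are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: clipped SGD vs. normalized SGD updates under heavy-tailed gradient
# noise. The clipping threshold c and learning rate lr are assumptions.
import torch


def clipped_sgd_step(params, lr, c, eps=1e-12):
    norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
    scale = torch.clamp(c / (norm + eps), max=1.0)    # shrink only if norm > c
    with torch.no_grad():
        for p in params:
            p -= lr * scale * p.grad


def normalized_sgd_step(params, lr, eps=1e-12):
    norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
    with torch.no_grad():
        for p in params:
            p -= lr * p.grad / (norm + eps)           # always take a unit-norm step
```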
arXiv Detail & Related papers (2024-10-17T17:59:01Z)
- Low-Precision Arithmetic for Fast Gaussian Processes [39.720581185327816]
Low-precision arithmetic has had a transformative effect on the training of neural networks.
We propose a multi-faceted approach involving conjugate gradients with re-orthogonalization, mixed precision, and preconditioning.
Our approach significantly improves the numerical stability and practical performance of conjugate gradients in low-precision over a wide range of settings.
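A minimal sketch of one ingredient, conjugate gradients with explicit re-orthogonalization of the residuals (the kind of stabilization that becomes important once matrix-vector products are computed in low precision); the tolerance, iteration budget, and generic matvec interface are assumptions.

```python
# Sketch: conjugate gradients with re-orthogonalization of residuals. In
# practice `matvec` would be a (possibly low-precision, preconditioned) kernel
# matrix-vector product; tolerances and iteration counts are assumptions.
import torch


def cg_reorthogonalized(matvec, b, max_iters=100, tol=1e-4):
    x = torch.zeros_like(b)
    r = b.clone()
    p = r.clone()
    basis = [r / r.norm()]                            # past residual directions
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        for q in basis:                               # re-orthogonalize the residual
            r_new = r_new - (r_new @ q) * q
        if r_new.norm() < tol * b.norm():
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        basis.append(r_new / r_new.norm())
        r = r_new
    return x
```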
arXiv Detail & Related papers (2022-07-14T12:20:46Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
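As a rough sketch of the idea, one fresh sample per iteration refreshes a SAG-style gradient table that drives a Frank-Wolfe / conditional-gradient step; the least-squares objective, l1-ball constraint, and step-size rule are assumptions for illustration.

```python
# Sketch: conditional gradient (Frank-Wolfe) step driven by a SAG gradient
# table refreshed with a single sample per iteration. Objective, constraint
# set, and step size are illustrative assumptions.
import torch


def lmo_l1_ball(grad, radius):
    # Linear minimization oracle over the l1 ball: a signed, scaled basis vector.
    i = grad.abs().argmax()
    s = torch.zeros_like(grad)
    s[i] = -radius * torch.sign(grad[i])
    return s


def one_sample_sag_fw(X, y, radius=10.0, iters=500):
    n, d = X.shape
    w = torch.zeros(d)
    grad_table = torch.zeros(n, d)                    # per-sample gradient memory
    for t in range(1, iters + 1):
        i = torch.randint(n, (1,)).item()             # one sample per iteration
        grad_table[i] = (X[i] @ w - y[i]) * X[i]      # refresh its stored gradient
        g = grad_table.mean(0)                        # SAG estimate of full gradient
        s = lmo_l1_ball(g, radius)
        gamma = 2.0 / (t + 2)                         # standard Frank-Wolfe step
        w = (1 - gamma) * w + gamma * s
    return w
```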
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Gaussian Process Inference Using Mini-batch Stochastic Gradient Descent: Convergence Guarantees and Empirical Benefits [21.353189917487512]
Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for machine learning problems.
We take a step forward by proving minibatch SGD converges to a critical point of the full log-likelihood loss function.
Our theoretical guarantees hold provided that the kernel functions exhibit exponential or polynomial eigendecay.
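To make the setting concrete, a minimal sketch of minibatch SGD on a GP negative marginal log-likelihood over kernel hyperparameters; the RBF kernel, toy data, batch size, and learning rate are illustrative assumptions.

```python
# Sketch: minibatch SGD on an exact-GP negative marginal log-likelihood,
# optimizing log-hyperparameters. Kernel, data, and step sizes are assumptions.
import torch


def rbf_kernel(x, lengthscale, outputscale):
    d2 = torch.cdist(x, x).pow(2)
    return outputscale * torch.exp(-0.5 * d2 / lengthscale ** 2)


def gp_nll(x, y, log_ls, log_os, log_noise):
    # Negative marginal log-likelihood of an exact GP on the given (mini)batch.
    K = rbf_kernel(x, log_ls.exp(), log_os.exp()) + log_noise.exp() * torch.eye(len(x))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L).squeeze(-1)
    return 0.5 * (y @ alpha) + torch.log(torch.diagonal(L)).sum() \
        + 0.5 * len(x) * torch.log(torch.tensor(2.0 * torch.pi))


x_all = torch.randn(5000, 2)                          # toy data (assumption)
y_all = torch.sin(x_all.sum(-1)) + 0.1 * torch.randn(5000)

log_ls = torch.zeros((), requires_grad=True)          # log lengthscale
log_os = torch.zeros((), requires_grad=True)          # log outputscale
log_noise = torch.tensor(-2.0, requires_grad=True)    # log noise variance
opt = torch.optim.SGD([log_ls, log_os, log_noise], lr=1e-3)

for step in range(200):
    idx = torch.randperm(len(x_all))[:256]            # random minibatch of points
    opt.zero_grad()
    loss = gp_nll(x_all[idx], y_all[idx], log_ls, log_os, log_noise)
    loss.backward()
    opt.step()
```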
arXiv Detail & Related papers (2021-11-19T22:28:47Z)
- Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond [63.59034509960994]
We study shuffling-based variants: minibatch and local Random Reshuffling, which draw gradients without replacement.
For smooth functions satisfying the Polyak-Lojasiewicz condition, we obtain convergence bounds which show that these shuffling-based variants converge faster than their with-replacement counterparts.
We propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings.
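A minimal sketch of the without-replacement sampling pattern (Random Reshuffling): each epoch draws a fresh permutation and visits every example exactly once; the model, loss, and step size are assumptions.

```python
# Sketch: one epoch of Random Reshuffling. A fresh permutation is drawn per
# epoch and minibatches are taken without replacement from it.
import torch


def random_reshuffling_epoch(params, data_x, data_y, loss_fn, lr, batch_size):
    perm = torch.randperm(len(data_x))                # fresh shuffle for this epoch
    for start in range(0, len(data_x), batch_size):
        idx = perm[start:start + batch_size]          # without-replacement minibatch
        loss = loss_fn(params, data_x[idx], data_y[idx])
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= lr * g
```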
arXiv Detail & Related papers (2021-10-20T02:25:25Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
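A rough sketch of that idea: AIS whose transitions are unadjusted (Metropolis-free) Langevin steps, so the estimate stays differentiable in the target's parameters. The Gaussian target, annealing schedule, and step size are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: annealed importance sampling with unadjusted Langevin transitions
# (no Metropolis-Hastings accept/reject), keeping the log-marginal-likelihood
# estimate differentiable w.r.t. the target parameter `mu`. All specifics
# (Gaussian target, schedule, step size) are assumptions for illustration.
import torch


def log_prior(x):                                     # standard normal base
    return -0.5 * (x ** 2).sum(-1)


def log_target(x, mu):                                # unnormalized target
    return -0.5 * ((x - mu) ** 2).sum(-1)


def log_f(x, beta, mu):                               # geometric annealing path
    return (1 - beta) * log_prior(x) + beta * log_target(x, mu)


def ais_log_ml(mu, n_particles=128, n_steps=20, step_size=0.05):
    betas = torch.linspace(0.0, 1.0, n_steps + 1)
    x = torch.randn(n_particles, 2, requires_grad=True)
    log_w = torch.zeros(n_particles)
    for t in range(1, n_steps + 1):
        # incremental importance weight at the current particles
        log_w = log_w + log_f(x, betas[t], mu) - log_f(x, betas[t - 1], mu)
        # unadjusted Langevin move targeting f_t; create_graph keeps gradients
        grad = torch.autograd.grad(log_f(x, betas[t], mu).sum(), x, create_graph=True)[0]
        x = x + step_size * grad + (2 * step_size) ** 0.5 * torch.randn_like(x)
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(n_particles)))


mu = torch.tensor([1.0, -1.0], requires_grad=True)
ais_log_ml(mu).backward()                             # mu.grad holds the estimator's gradient
```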
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples [67.11669996924671]
We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm.
When updating the generator parameters, we zero out the gradient contributions from the elements of the batch that the critic scores as 'least realistic'.
We show that this 'top-k update' procedure is a generally applicable improvement.
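For reference, a minimal sketch of that one-line modification to the generator update; the critic interface, batch handling, and the choice of k and loss form are illustrative assumptions.

```python
# Sketch: 'top-k update' for the generator. Only the k fake samples the critic
# scores as most realistic contribute to the loss, so gradients from the
# least-realistic samples are zeroed out. k and the loss form are assumptions.
import torch


def generator_loss_topk(critic, fake_batch, k):
    scores = critic(fake_batch).squeeze(-1)           # critic realism scores
    topk_scores, _ = torch.topk(scores, k)            # keep the k most realistic
    return -topk_scores.mean()                        # e.g. non-saturating GAN loss
```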
arXiv Detail & Related papers (2020-02-14T19:27:50Z)