Sparse Kernel Gaussian Processes through Iterative Charted Refinement (ICR)
- URL: http://arxiv.org/abs/2206.10634v1
- Date: Tue, 21 Jun 2022 18:00:01 GMT
- Title: Sparse Kernel Gaussian Processes through Iterative Charted Refinement (ICR)
- Authors: Gordian Edenhofer and Reimar H. Leike and Philipp Frank and Torsten A. Enßlin
- Abstract summary: We present a new, generative method named Iterative Charted Refinement (ICR) to model Gaussian Processes.
ICR represents long- and short-range correlations by combining views of the modeled locations at varying resolutions with a user-provided coordinate chart.
ICR outperforms existing methods in terms of computational speed by one order of magnitude on the CPU and GPU.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian Processes (GPs) are highly expressive, probabilistic models. A major
limitation is their computational complexity. Naively, exact GP inference
requires $\mathcal{O}(N^3)$ computations with $N$ denoting the number of
modeled points. Current approaches to overcome this limitation either rely on
sparse, structured or stochastic representations of data or kernel respectively
and usually involve nested optimizations to evaluate a GP. We present a new,
generative method named Iterative Charted Refinement (ICR) to model GPs on
nearly arbitrarily spaced points in $\mathcal{O}(N)$ time for decaying kernels
without nested optimizations. ICR represents long- as well as short-range
correlations by combining views of the modeled locations at varying resolutions
with a user-provided coordinate chart. In our experiment with points whose
spacings vary over two orders of magnitude, ICR's accuracy is comparable to
state-of-the-art GP methods. ICR outperforms existing methods in terms of
computational speed by one order of magnitude on the CPU and GPU and has
already been successfully applied to model a GP with $122$ billion parameters.
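The abstract contrasts the $\mathcal{O}(N^3)$ cost of naive exact inference with ICR's $\mathcal{O}(N)$ generative construction. Below is a minimal, illustrative sketch of the coarse-to-fine idea only, not the authors' algorithm: a coarse random view of a regular 1-D grid is repeatedly interpolated to twice the resolution and perturbed by fine-scale excitations of decaying amplitude, so a correlated field over N points is generated with work proportional to N. All function names, constants, and the decay schedule are placeholders.

```python
import numpy as np

def exact_gp_sample(cov, rng):
    """Baseline: exact GP sample via Cholesky, O(N^3) time and O(N^2) memory."""
    return np.linalg.cholesky(cov + 1e-10 * np.eye(len(cov))) @ rng.standard_normal(len(cov))

def charted_refinement_sketch(rng, n_levels=8, n_coarse=4, fine_std=0.5):
    """Toy coarse-to-fine generative model on a regular 1-D grid, O(N) total.

    Each level interpolates the previous, coarser view onto a grid of twice
    the resolution and adds independent fine-scale excitations whose
    amplitude decays with depth, mimicking a decaying kernel spectrum.
    """
    field = rng.standard_normal(n_coarse)                   # coarsest view
    for level in range(n_levels):
        x_coarse = np.linspace(0.0, 1.0, field.size)
        x_fine = np.linspace(0.0, 1.0, 2 * field.size)
        field = np.interp(x_fine, x_coarse, field)           # upsample the view
        field = field + fine_std * 0.5**level * rng.standard_normal(field.size)
    return field

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 300)
cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)
baseline = exact_gp_sample(cov, rng)                          # cubic-cost reference
sample = charted_refinement_sketch(rng)                       # linear-cost sketch
print(baseline.shape, sample.shape)
```

In ICR itself the refinement weights are derived from the kernel and the user-provided coordinate chart lets the same construction handle highly non-uniform point spacings; the toy version above only conveys the multi-resolution structure and the linear scaling.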
Related papers
- Exploiting Hankel-Toeplitz Structures for Fast Computation of Kernel Precision Matrices [14.25435308779899]
The Hilbert-space Gaussian Process (HGP) approach offers a hyperparameter-independent basis function approximation for speeding up GP inference.
In this paper, we lower this dominating computational complexity to $\mathcal{O}(NM)$ with no additional approximations.
Our contribution provides a pure speed-up of several existing, widely used GP approximations, without introducing further approximations.
arXiv Detail & Related papers (2024-08-05T09:45:31Z)
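To make the $\mathcal{O}(NM)$ claim above concrete, here is a minimal reduced-rank (Hilbert-space) GP regression sketch in the style this line of work builds on: $M$ Laplacian eigenfunctions on an interval are weighted by the kernel's spectral density, so the design matrices cost $\mathcal{O}(NM)$ and the remaining solve is only $M \times M$. The squared-exponential kernel, the domain half-width, and all names below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def hgp_regression_sketch(x, y, x_star, M=32, L=2.0,
                          lengthscale=0.3, sigma_f=1.0, sigma_n=0.1):
    """Reduced-rank GP regression with M Laplacian eigenfunctions on [-L, L].

    Building the design matrices costs O(NM); solving for the M basis
    weights costs O(M^3), independent of N.
    """
    j = np.arange(1, M + 1)
    sqrt_lam = np.pi * j / (2.0 * L)                 # square roots of eigenvalues
    # spectral density of the squared-exponential kernel in 1-D
    S = (sigma_f**2 * np.sqrt(2.0 * np.pi) * lengthscale
         * np.exp(-0.5 * (lengthscale * sqrt_lam) ** 2))

    def phi(t):
        # eigenfunctions of the Laplacian with Dirichlet boundaries on [-L, L]
        return np.sin(np.outer(t + L, sqrt_lam)) / np.sqrt(L)

    Phi, Phi_star = phi(x), phi(x_star)
    A = Phi.T @ Phi + sigma_n**2 * np.diag(1.0 / S)  # M x M system
    w = np.linalg.solve(A, Phi.T @ y)
    return Phi_star @ w                              # posterior mean

rng = np.random.default_rng(0)
x = rng.uniform(-1.5, 1.5, 200)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.size)
print(hgp_regression_sketch(x, y, np.array([0.0, 1.0])))
```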
- Shallow and Deep Nonparametric Convolutions for Gaussian Processes [0.0]
We introduce a nonparametric process convolution formulation for GPs that alleviates weaknesses by using a functional sampling approach.
We propose a composition of these nonparametric convolutions that serves as an alternative to classic deep GP models.
arXiv Detail & Related papers (2022-06-17T19:03:04Z)
- Matching Pursuit Based Scheduling for Over-the-Air Federated Learning [67.59503935237676]
This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning via the method of matching pursuit.
Compared to the state-of-the-art scheme, the proposed scheme has a drastically lower computational complexity.
The efficiency of the proposed scheme is confirmed via experiments on the CIFAR dataset.
arXiv Detail & Related papers (2022-06-14T08:14:14Z)
- Exact Gaussian Processes for Massive Datasets via Non-Stationary Sparsity-Discovering Kernels [0.0]
We propose to take advantage of naturally-structured sparsity by allowing the kernel to discover -- instead of induce -- sparse structure.
The principle of ultra-flexible, compactly-supported, and non-stationary kernels, combined with HPC and constrained optimization, lets us scale exact GPs well beyond 5 million data points.
arXiv Detail & Related papers (2022-05-18T16:56:53Z)
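As a toy illustration of how a compactly supported kernel yields a naturally sparse covariance matrix (the structural ingredient the entry above exploits, here without the non-stationarity, constrained optimization, or HPC machinery), the sketch below uses a Wendland-type kernel that is exactly zero beyond a cutoff radius and stores only the nonzeros; the kernel choice, cutoff, and dense distance evaluation are assumptions made for illustration only.

```python
import numpy as np
from scipy.sparse import csr_matrix

def wendland_c2(r, cutoff):
    """Compactly supported Wendland C2 kernel: exactly zero for r >= cutoff."""
    q = np.clip(r / cutoff, 0.0, 1.0)
    return (1.0 - q) ** 4 * (4.0 * q + 1.0)

def sparse_covariance(x, cutoff=0.02, variance=1.0):
    """Dense pairwise distances for clarity; only nonzeros are stored.

    At scale one would evaluate only pairs within the cutoff (e.g. via a
    spatial tree) and never materialise the dense N x N matrix.
    """
    r = np.abs(x[:, None] - x[None, :])
    K = variance * wendland_c2(r, cutoff)
    K[r >= cutoff] = 0.0
    return csr_matrix(K)

x = np.sort(np.random.default_rng(1).uniform(0.0, 1.0, 2000))
K = sparse_covariance(x)
print(f"stored nonzeros: {K.nnz} of {K.shape[0] * K.shape[1]} entries")
```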
- Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to adapt rules from batched GP-Opt.
arXiv Detail & Related papers (2022-01-30T20:42:14Z)
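As a rough illustration of the "evaluate a few unique candidates multiple times" idea from the entry above, here is a toy GP-UCB loop that re-queries each selected candidate several times before refitting the surrogate. The RBF kernel, the fixed repetition count, and the acquisition constant are placeholder choices; this is not the paper's modified GP-UCB/GP-EI rules.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-2):
    """Exact GP posterior mean/std on a grid (fine for small observation sets)."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_grid)
    mean = Ks.T @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def ucb_with_repeats(f, x_grid, rounds=10, repeats=3, beta=2.0, seed=0):
    """GP-UCB that sticks to each chosen candidate for `repeats` noisy evaluations."""
    rng = np.random.default_rng(seed)
    x_obs = np.array([x_grid[0]])
    y_obs = np.array([f(x_grid[0], rng)])
    for _ in range(rounds):
        mean, std = gp_posterior(x_obs, y_obs, x_grid)
        x_next = x_grid[np.argmax(mean + beta * std)]       # UCB acquisition
        for _ in range(repeats):                            # repeated evaluations
            x_obs = np.append(x_obs, x_next)
            y_obs = np.append(y_obs, f(x_next, rng))
    return x_obs[np.argmax(y_obs)]

noisy_objective = lambda x, rng: -((x - 1.3) ** 2) + 0.1 * rng.standard_normal()
print(ucb_with_repeats(noisy_objective, np.linspace(0.0, 2.0, 200)))
```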
- Fast and Scalable Spike and Slab Variable Selection in High-Dimensional Gaussian Processes [12.667478571732449]
We develop a fast and scalable variational inference algorithm for the spike and slab GP that is tractable with arbitrary differentiable kernels.
In experiments our method consistently outperforms vanilla and sparse variational GPs whilst retaining similar runtimes.
arXiv Detail & Related papers (2021-11-08T15:13:24Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model update with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Reducing the Variance of Gaussian Process Hyperparameter Optimization with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional benefit that has been previously unexplored.
It can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z)
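To illustrate why preconditioning matters for the matrix-vector-product-based GP solvers discussed in the entry above, the sketch below hand-rolls preconditioned conjugate gradients for $K v = y$ and compares iteration counts with and without a simple Nyström-style low-rank-plus-diagonal preconditioner (a stand-in for the partial-Cholesky preconditioners commonly used). The variance-reduction result the paper proves is not reproduced here, and all kernel settings are illustrative assumptions.

```python
import numpy as np

def pcg(matvec, b, precond, tol=1e-8, maxiter=1000):
    """Preconditioned conjugate gradients; the kernel matrix only enters
    through matrix-vector products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = precond(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, maxiter

# toy GP linear system K v = y with an RBF kernel and small noise
rng = np.random.default_rng(0)
pts = np.sort(rng.uniform(0.0, 1.0, 600))
noise = 1e-2
K = np.exp(-0.5 * ((pts[:, None] - pts[None, :]) / 0.1) ** 2) + noise * np.eye(600)
y = np.sin(6.0 * pts) + 0.1 * rng.standard_normal(600)

# Nystrom-style low-rank-plus-diagonal preconditioner, applied via Woodbury
m = 40
idx = rng.choice(600, size=m, replace=False)
L_mm = np.linalg.cholesky(K[np.ix_(idx, idx)] + 1e-8 * np.eye(m))
U = np.linalg.solve(L_mm, K[:, idx].T).T          # K_nm @ L_mm^{-T}
inner = noise * np.eye(m) + U.T @ U
precond = lambda r: (r - U @ np.linalg.solve(inner, U.T @ r)) / noise

_, iters_plain = pcg(lambda v: K @ v, y, lambda r: r)
_, iters_pre = pcg(lambda v: K @ v, y, precond)
print(f"CG iterations: {iters_plain} unpreconditioned vs {iters_pre} preconditioned")
```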
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- MuyGPs: Scalable Gaussian Process Hyperparameter Estimation Using Local Cross-Validation [1.2233362977312945]
We present MuyGPs, a novel efficient GP hyperparameter estimation method.
MuyGPs builds upon prior methods that take advantage of the nearest neighbors structure of the data.
We show that our method outperforms all known competitors both in terms of time-to-solution and the root mean squared error of the predictions.
arXiv Detail & Related papers (2021-04-29T18:10:21Z)
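As a minimal sketch of the nearest-neighbors structure the MuyGPs entry above builds on, the code below predicts at each test point using only its k nearest training points, replacing the global N x N solve with many small k x k solves. The leave-one-out cross-validation objective that MuyGPs uses to estimate hyperparameters is omitted, and the kernel and its settings are placeholder assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.15):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def local_gp_predict(x_train, y_train, x_test, k=30, noise=1e-2, ell=0.15):
    """Nearest-neighbor GP prediction: one k x k solve per test point."""
    preds = np.empty(len(x_test))
    for i, xs in enumerate(x_test):
        nn = np.argsort(np.abs(x_train - xs))[:k]     # k nearest training points
        K = rbf(x_train[nn], x_train[nn], ell) + noise * np.eye(k)
        k_star = rbf(np.array([xs]), x_train[nn], ell)[0]
        preds[i] = k_star @ np.linalg.solve(K, y_train[nn])
    return preds

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 5000)
y_train = np.sin(8.0 * x_train) + 0.1 * rng.standard_normal(5000)
x_test = np.linspace(0.0, 1.0, 5)
print(local_gp_predict(x_train, y_train, x_test))
```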