Sparse Gaussian Process Based On Hat Basis Functions
- URL: http://arxiv.org/abs/2006.08117v1
- Date: Mon, 15 Jun 2020 03:55:38 GMT
- Title: Sparse Gaussian Process Based On Hat Basis Functions
- Authors: Wenqi Fang, Huiyun Li, Hui Huang, Shaobo Dang, Zhejun Huang, Zheng
Wang
- Abstract summary: We propose a new sparse Gaussian process method to solve the unconstrained regression problem.
The proposed method reduces the overall computational complexity from $O(n^{3})$ in the exact Gaussian process to $O(nm^{2})$ with $m$ hat basis functions and $n$ training data points.
- Score: 14.33021332215823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Gaussian process is one of the most popular non-parametric Bayesian
methodologies for modeling regression problems. It is completely determined by its
mean and covariance functions, and its linearity makes the prediction problem
relatively straightforward to solve. Although Gaussian processes have been applied
successfully in many fields, they are still not sufficient for physical systems
that must satisfy inequality constraints. This issue has been addressed in recent
years by the so-called constrained Gaussian process. In this paper, we extend the
core ideas of the constrained Gaussian process: based on the range of the training
or test data, we redefine the hat basis functions introduced there, and on top of
these basis functions we propose a new sparse Gaussian process method for the
unconstrained regression problem. Like the exact Gaussian process and the Gaussian
process with the Fully Independent Training Conditional (FITC) approximation, our
method obtains satisfactory approximate results on open-source datasets and
analytical functions. In terms of performance, the proposed method reduces the
overall computational complexity from $O(n^{3})$ in the exact Gaussian process to
$O(nm^{2})$, where $m$ is the number of hat basis functions and $n$ the number of
training data points.
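For intuition, here is a minimal weight-space sketch of the kind of construction the abstract describes: piecewise-linear hat basis functions defined over the data range, combined with Bayesian linear regression over their weights, so the dominant cost is forming an $m \times m$ Gram matrix in $O(nm^{2})$. The knot placement, the unit-variance weight prior, and `noise_var` below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def hat_features(x, knots):
    """Evaluate m hat (piecewise-linear, compactly supported) basis
    functions at the 1-D inputs x; returns an (n, m) design matrix."""
    delta = knots[1] - knots[0]  # spacing of the evenly placed knots
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / delta)

def fit_predict(x_train, y_train, x_test, m=20, noise_var=0.1):
    # Knots span the range of the training and test inputs, echoing the
    # abstract's "range of training or test data".
    lo = min(x_train.min(), x_test.min())
    hi = max(x_train.max(), x_test.max())
    knots = np.linspace(lo, hi, m)

    Phi = hat_features(x_train, knots)  # (n, m), built in O(nm)
    # Weight-space posterior for f(x) = phi(x) @ w with prior w ~ N(0, I):
    # forming A = Phi^T Phi / noise_var + I costs O(n m^2), which dominates
    # and matches the claimed overall complexity.
    A = Phi.T @ Phi / noise_var + np.eye(m)
    w_mean = np.linalg.solve(A, Phi.T @ y_train) / noise_var  # O(m^3) solve
    return hat_features(x_test, knots) @ w_mean  # posterior predictive mean

# Toy usage on an analytical function:
rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)
print(fit_predict(x, y, np.linspace(-3.0, 3.0, 5)))
```

Because the hat functions have compact support, each input activates at most two basis functions, so `Phi` is sparse in practice; the dense sketch above keeps the complexity argument visible rather than exploiting that structure.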
Related papers
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation
Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method that achieved the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Scale invariant process regression [0.0]
We propose a novel regression method that does not require specification of a kernel, length scale, variance, or prior mean.
Experiments show that it is possible to derive a working machine learning method by assuming nothing but regularity and scale- and translation invariance.
arXiv Detail & Related papers (2022-08-22T17:32:33Z)
- On the inability of Gaussian process regression to optimally learn compositional functions [3.6525095710982916]
Deep Gaussian process priors can outperform Gaussian process priors if the target function has a compositional structure.
We show that if the true function is a generalized additive function, then the posterior based on any mean-zero Gaussian process can only recover the truth at a rate that is strictly slower than the minimax rate.
arXiv Detail & Related papers (2022-05-16T15:42:25Z)
- Gaussian Processes and Statistical Decision-making in Non-Euclidean Spaces [96.53463532832939]
We develop techniques for broadening the applicability of Gaussian processes.
We introduce a wide class of efficient approximations built from this viewpoint.
We develop a collection of Gaussian process models over non-Euclidean spaces.
arXiv Detail & Related papers (2022-02-22T01:42:57Z)
- Gauss-Legendre Features for Gaussian Process Regression [7.37712470421917]
We present a Gauss-Legendre quadrature based approach for scaling up Gaussian process regression via a low-rank approximation of the kernel matrix.
Our method is inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration; a minimal sketch of this idea follows this entry.
arXiv Detail & Related papers (2021-01-04T18:09:25Z)
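As a rough illustration of quadrature-based feature expansions (not this paper's exact construction): for a 1-D squared-exponential kernel, Bochner's theorem writes the kernel as an integral of its spectral density, and Gauss-Legendre nodes turn that integral into a deterministic finite feature map. The truncation bound `L`, lengthscale, and node count below are assumptions chosen for the sketch.

```python
import numpy as np

def gauss_legendre_features(x, num_nodes=32, lengthscale=1.0, L=None):
    """Deterministic feature map phi with k(x, y) ~= phi(x) @ phi(y) for the
    1-D squared-exponential kernel, by Gauss-Legendre quadrature of the
    spectral integral k(r) = int S(w) cos(w r) dw (Bochner's theorem)."""
    if L is None:
        L = 5.0 / lengthscale  # truncation bound; the spectrum decays fast
    u, v = np.polynomial.legendre.leggauss(num_nodes)  # nodes/weights on [-1, 1]
    omega = L * u  # quadrature frequencies
    w = L * v      # rescaled quadrature weights
    # Spectral density of k(r) = exp(-r^2 / (2 l^2)):
    S = lengthscale / np.sqrt(2.0 * np.pi) * np.exp(-(lengthscale * omega) ** 2 / 2.0)
    scale = np.sqrt(w * S)
    # cos(a - b) = cos a cos b + sin a sin b turns the sum into an inner product.
    return np.hstack([scale * np.cos(np.outer(x, omega)),
                      scale * np.sin(np.outer(x, omega))])

x = np.linspace(-2.0, 2.0, 5)
Phi = gauss_legendre_features(x)
K_exact = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2.0)
print(np.max(np.abs(Phi @ Phi.T - K_exact)))  # small approximation error
```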
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors; a toy illustration follows this entry.
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
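A toy illustration of the pathwise view via Matheron's update, which converts a joint prior sample into a posterior sample by adding a data-dependent correction. The RBF kernel, jitter, and noise level are assumptions of this sketch; note the naive version below still uses exact solves, whereas the paper's efficiency comes from pairing the update with cheap approximate prior samples.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * lengthscale ** 2))

def pathwise_posterior_sample(x_train, y_train, x_test, noise_var=0.01, seed=None):
    """Draw one posterior sample via Matheron's update: condition a joint
    prior sample on the data instead of sampling posterior marginals."""
    rng = np.random.default_rng(seed)
    x_all = np.concatenate([x_train, x_test])
    K_all = rbf(x_all, x_all) + 1e-8 * np.eye(len(x_all))  # jitter for Cholesky
    f_prior = np.linalg.cholesky(K_all) @ rng.standard_normal(len(x_all))
    f_tr, f_te = f_prior[:len(x_train)], f_prior[len(x_train):]
    eps = np.sqrt(noise_var) * rng.standard_normal(len(x_train))  # noise draw
    K_y = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))
    # Matheron: f_post(*) = f_prior(*) + K(*, X) K_y^{-1} (y - f_prior(X) - eps)
    return f_te + rbf(x_test, x_train) @ np.linalg.solve(K_y, y_train - f_tr - eps)

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x) + 0.1 * np.random.default_rng(1).standard_normal(50)
sample = pathwise_posterior_sample(x, y, np.linspace(0.0, 1.0, 100), seed=2)
```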
- Disentangling the Gauss-Newton Method and Approximate Inference for Neural Networks [96.87076679064499]
We disentangle the generalized Gauss-Newton method and approximate inference for Bayesian deep learning.
We find that the Gauss-Newton method simplifies the underlying probabilistic model significantly.
The connection to Gaussian processes enables new function-space inference algorithms.
arXiv Detail & Related papers (2020-07-21T17:42:58Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic, and exponentially fast decaying error bounds that apply to both the approximated kernel and the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.