Improved Convergence Rates for Sparse Approximation Methods in
Kernel-Based Learning
- URL: http://arxiv.org/abs/2202.04005v1
- Date: Tue, 8 Feb 2022 17:22:09 GMT
- Title: Improved Convergence Rates for Sparse Approximation Methods in
Kernel-Based Learning
- Authors: Sattar Vakili, Jonathan Scarlett, Da-shan Shiu, Alberto Bernacchia
- Abstract summary: Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications.
Existing sparse approximation methods can yield a significant reduction in the computational cost.
We provide novel confidence intervals for the Nyström method and the sparse variational Gaussian processes approximation method.
- Score: 48.08663378234329
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Kernel-based models such as kernel ridge regression and Gaussian processes
are ubiquitous in machine learning applications for regression and
optimization. It is well known that a serious downside for kernel-based models
is the high computational cost; given a dataset of $n$ samples, the cost grows
as $\mathcal{O}(n^3)$. Existing sparse approximation methods can yield a
significant reduction in the computational cost, effectively reducing the real
world cost down to as low as $\mathcal{O}(n)$ in certain cases. Despite this
remarkable empirical success, significant gaps remain in the existing results
for the analytical confidence bounds on the error due to approximation. In this
work, we provide novel confidence intervals for the Nyström method and the
sparse variational Gaussian processes approximation method. Our confidence
intervals lead to improved error bounds in both regression and optimization. We
establish these confidence intervals using novel interpretations of the
approximate (surrogate) posterior variance of the models.
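For concreteness, here is a minimal numpy sketch of the classical Nyström / subset-of-regressors construction that the abstract refers to, including the approximate (surrogate) posterior mean and variance whose reinterpretation underlies the paper's confidence intervals. The RBF kernel, inducing-point placement, noise level, and jitter below are illustrative assumptions, not the paper's exact setup, and the confidence intervals themselves are not reproduced.

    import numpy as np

    def rbf(A, B, lengthscale=1.0):
        # Squared-exponential kernel matrix between row sets A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    def nystrom_gp(X, y, Z, Xs, noise=0.1):
        # Nystrom (subset-of-regressors) GP posterior at test inputs Xs.
        # Cost is O(n m^2) for n training points and m inducing points,
        # versus O(n^3) for the exact GP.
        Kmm = rbf(Z, Z)                                   # m x m
        Kmn = rbf(Z, X)                                   # m x n
        Ksm = rbf(Xs, Z)                                  # n* x m
        A = Kmm + (Kmn @ Kmn.T) / noise**2                # m x m system
        A += 1e-8 * np.eye(len(Z))                        # jitter for stability
        mean = Ksm @ np.linalg.solve(A, Kmn @ y) / noise**2
        var = np.einsum("ij,ji->i", Ksm, np.linalg.solve(A, Ksm.T))
        return mean, var                                  # surrogate posterior

    # Toy usage: 1-D regression, 500 samples, 20 inducing points.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(500)
    Z = np.linspace(-3, 3, 20)[:, None]
    Xs = np.linspace(-3, 3, 100)[:, None]
    mu, var = nystrom_gp(X, y, Z, Xs)

The surrogate variance computed above is the kind of quantity the paper reinterprets; the plain subset-of-regressors variance is known to understate uncertainty away from the inducing points, which is part of why careful confidence bounds for approximate models are needed.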
Related papers
- Linear cost and exponentially convergent approximation of Gaussian Matérn processes [43.341057405337295]
The computational cost for inference and prediction with statistical models based on Gaussian processes scales cubically with the number of observations.
We develop a method with linear cost and with a covariance error that decreases exponentially fast in the order $m$ of the proposed approximation.
The method is based on an optimal rational approximation of the spectral density and results in an approximation that can be represented as a sum of $m$ independent Markov processes.
arXiv Detail & Related papers (2024-10-16T19:57:15Z)
- Iterative Methods for Full-Scale Gaussian Process Approximations for Large Spatial Data [9.913418444556486]
We show how iterative methods can be used to reduce the computational costs for calculating likelihoods, gradients, and predictive distributions with full-scale approximations (FSAs).
We also present a novel, accurate, and fast way to calculate predictive variances relying on estimations and iterative methods.
All methods are implemented in a free C++ software library with high-level Python and R packages.
arXiv Detail & Related papers (2024-05-23T12:25:22Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Linear Time Kernel Matrix Approximation via Hyperspherical Harmonics [3.24890820102255]
We propose a new technique for constructing low-rank approximations of matrices that arise in kernel methods for machine learning.
Our approach pairs a novel automatically constructed analytic expansion of the underlying kernel function with a data-dependent compression step to further optimize the approximation.
Experimental results show our approach compares favorably to the commonly used Nyström method with respect to both accuracy for a given rank and computational time for a given accuracy across a variety of kernels, dimensions, and datasets.
arXiv Detail & Related papers (2022-02-08T05:19:39Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Gauss-Legendre Features for Gaussian Process Regression [7.37712470421917]
We present a Gauss-Legendre quadrature based approach for scaling up Gaussian process regression via a low rank approximation of the kernel matrix.
Our method is very much inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration (a generic quadrature-features sketch appears after this list).
arXiv Detail & Related papers (2021-01-04T18:09:25Z)
- Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties [12.068153197381575]
We propose a novel variational family that allows for retaining covariances between latent processes while achieving fast convergence.
We provide an efficient implementation of our new approach and apply it to several benchmark datasets.
It yields excellent results and strikes a better balance between accuracy and calibrated uncertainty estimates than its state-of-the-art alternatives.
arXiv Detail & Related papers (2020-05-22T11:10:59Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation for Gaussian processes trained on few data points.
The approach also leads to a significantly smaller and computationally cheaper sub-solver for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
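Two entries above (Gauss-Legendre Features and SLEIPNIR) build low-rank kernel approximations by replacing the Monte Carlo frequencies of random Fourier features with deterministic quadrature nodes. As a hedged illustration of that general idea, and not either paper's algorithm, the sketch below uses Gauss-Hermite nodes for the 1-D RBF kernel, whose Gaussian spectral density makes that rule the natural choice; the cited papers use different quadrature constructions and handle broader settings.

    import numpy as np

    def quadrature_fourier_features(x, m=32, lengthscale=1.0):
        # Deterministic Fourier feature map for the 1-D RBF kernel:
        #   k(x, y) = exp(-0.5 (x - y)^2 / l^2) = E_w[cos(w (x - y))]
        # with w ~ N(0, 1/l^2); the expectation is discretized with
        # m-node Gauss-Hermite quadrature instead of random sampling.
        t, h = np.polynomial.hermite.hermgauss(m)   # nodes/weights for e^{-t^2}
        freqs = np.sqrt(2.0) * t / lengthscale      # quadrature frequencies
        coeff = np.sqrt(h / np.sqrt(np.pi))         # square roots of weights
        ang = np.outer(x, freqs)                    # (n, m) phase angles
        return np.concatenate([coeff * np.cos(ang),
                               coeff * np.sin(ang)], axis=1)

    # Sanity check against the exact kernel on a modest input range.
    x = np.linspace(-2.0, 2.0, 50)
    Phi = quadrature_fourier_features(x)            # (50, 64) feature matrix
    K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
    print(np.abs(Phi @ Phi.T - K_exact).max())      # tiny for m = 32 here

Because the nodes are deterministic, the approximation error on a bounded input range decays exponentially in the number of features, which is the qualitative behavior the deterministic bounds mentioned in the SLEIPNIR entry make precise.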
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.