Certified and fast computations with shallow covariance kernels
- URL: http://arxiv.org/abs/2001.09187v4
- Date: Thu, 12 Nov 2020 18:48:41 GMT
- Title: Certified and fast computations with shallow covariance kernels
- Authors: Daniel Kressner, Jonas Latz, Stefano Massei, Elisabeth Ullmann
- Abstract summary: We introduce and analyze a new and certified algorithm for the low-rank approximation of a parameterized family of covariance operators.
The proposed algorithm provides the basis of a fast sampling procedure for parameter dependent random fields.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many techniques for data science and uncertainty quantification demand
efficient tools to handle Gaussian random fields, which are defined in terms of
their mean functions and covariance operators. Recently, parameterized Gaussian
random fields have gained increased attention, due to their higher degree of
flexibility. However, especially if the random field is parameterized through
its covariance operator, classical random field discretization techniques fail
or become inefficient. In this work we introduce and analyze a new and
certified algorithm for the low-rank approximation of a parameterized family of
covariance operators which represents an extension of the adaptive cross
approximation method for symmetric positive definite matrices. The algorithm
relies on an affine linear expansion of the covariance operator with respect to
the parameters, which needs to be computed in a preprocessing step using, e.g.,
the empirical interpolation method. We discuss and test our new approach for
isotropic covariance kernels, such as Matérn kernels. The numerical results
demonstrate the advantages of our approach in terms of computational time and
confirm that the proposed algorithm provides the basis of a fast sampling
procedure for parameter dependent Gaussian random fields.
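As a rough illustration of the building blocks described above (not the paper's certified, parameter-dependent algorithm), the following Python sketch runs a pivoted-Cholesky-style adaptive cross approximation on a single SPD covariance matrix built from an assumed Matérn-1/2 (exponential) kernel on [0, 1], and then draws an approximate sample of the corresponding zero-mean Gaussian random field from the low-rank factor. The kernel, grid, tolerance, and helper names are illustrative assumptions.

```python
import numpy as np

def exp_kernel(x, y, ell=0.2, sigma2=1.0):
    """Isotropic Matern-1/2 (exponential) covariance kernel on 1-D points (assumed example)."""
    return sigma2 * np.exp(-np.abs(x[:, None] - y[None, :]) / ell)

def aca_spd(get_column, diag, tol=1e-6, max_rank=100):
    """Adaptive cross approximation (pivoted Cholesky) for an SPD matrix.

    get_column(j) returns column j of the matrix; diag is its diagonal.
    Returns L with K ~= L @ L.T, stopping once the residual trace (an
    upper bound on the approximation error) drops below tol.
    """
    d = diag.astype(float).copy()
    L = np.zeros((d.size, 0))
    while d.sum() > tol and L.shape[1] < max_rank:
        j = int(np.argmax(d))                    # greedy pivot on the residual diagonal
        col = get_column(j) - L @ L[j, :]        # residual column at the pivot
        newcol = col / np.sqrt(max(d[j], 1e-15))
        L = np.column_stack([L, newcol])
        d = np.maximum(d - newcol ** 2, 0.0)     # update residual diagonal
    return L

# Illustrative setup (assumed, not the paper's test case): 500 points on [0, 1].
x = np.linspace(0.0, 1.0, 500)
diag = np.ones_like(x)                           # sigma2 = 1 on the diagonal
L = aca_spd(lambda j: exp_kernel(x, x[j:j + 1])[:, 0], diag, tol=1e-4 * diag.sum())

# Fast approximate sampling of the zero-mean field: f ~= L @ z with z standard normal.
z = np.random.default_rng(0).standard_normal(L.shape[1])
sample = L @ z
print("rank:", L.shape[1], "residual trace:", diag.sum() - (L ** 2).sum())
```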
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Gaussian Processes Sampling with Sparse Grids under Additive Schwarz Preconditioner [6.408773096179187]
We propose a scalable algorithm for sampling random realizations of the prior and posterior of GP models.
The proposed algorithm leverages inducing points approximation with sparse grids, as well as additive Schwarz preconditioners.
arXiv Detail & Related papers (2024-08-01T00:19:36Z)
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
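The couplings studied in this paper are not reproduced here; as background, the sketch below implements plain i.i.d. random Fourier features for the Gaussian (RBF) kernel, the baseline that variance-reducing couplings aim to improve. Feature count, lengthscale, and data are illustrative assumptions.

```python
import numpy as np

def gaussian_rff(X, num_features=256, lengthscale=1.0, rng=None):
    """i.i.d. random Fourier features for the Gaussian (RBF) kernel.

    Returns Phi such that Phi @ Phi.T ~= exp(-||x - x'||^2 / (2 * lengthscale^2)).
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], num_features)) / lengthscale  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, num_features)                    # random phases
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Quick check against the exact kernel on random data (illustrative).
X = np.random.default_rng(1).standard_normal((50, 3))
Phi = gaussian_rff(X, num_features=4096, lengthscale=1.0, rng=2)
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print("max abs error:", np.abs(Phi @ Phi.T - K_exact).max())
```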
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
- A general error analysis for randomized low-rank approximation with application to data assimilation [42.57210316104905]
We propose a framework for the analysis of the low-rank approximation error in Frobenius norm for centered and non-standard matrices.
Under minimal assumptions, we derive accurate bounds in expectation and probability.
Our bounds have clear interpretations that enable us to derive properties and motivate practical choices.
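The bounds themselves are theoretical; as a reminder of the object they describe, here is a basic randomized range-finder / SVD sketch of a matrix (the standard Halko-Martinsson-Tropp recipe, not the paper's centered or non-standard setting). Sketch size and test matrix are illustrative assumptions.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, rng=None):
    """Basic randomized range finder + SVD: A ~= U @ np.diag(s) @ Vt."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the sketched range
    B = Q.T @ A                               # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Illustrative matrix with decaying singular values.
A = np.random.default_rng(0).standard_normal((300, 200)) @ np.diag(0.5 ** np.arange(200))
U, s, Vt = randomized_lowrank(A, rank=20, rng=1)
print("Frobenius error:", np.linalg.norm(A - U @ np.diag(s) @ Vt))
```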
arXiv Detail & Related papers (2024-05-08T04:51:56Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- FAVOR#: Sharp Attention Kernel Approximations via New Classes of Positive Random Features [39.282051468586666]
We propose parameterized, positive, non-trigonometric RFs which approximate Gaussian and softmax kernels.
We show that our methods lead to variance reduction in practice and outperform previous methods in a kernel regression task.
We also present FAVOR#, a method for self-attention approximation in Transformers.
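FAVOR# itself is not sketched here; for context, the snippet below shows the earlier FAVOR+-style positive random features for the softmax kernel exp(x·y), the family such methods refine. The feature count, data scaling, and seed-sharing trick for reusing one projection matrix are illustrative assumptions.

```python
import numpy as np

def positive_softmax_features(X, num_features=256, rng=None):
    """FAVOR+-style positive random features for the softmax kernel exp(x . y).

    In expectation Phi(X) @ Phi(Y).T = exp(X @ Y.T), and every feature is
    positive, which keeps approximate attention weights non-negative.
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], num_features))   # shared Gaussian projections
    sq = 0.5 * (X ** 2).sum(axis=1, keepdims=True)
    return np.exp(X @ W - sq) / np.sqrt(num_features)

X = np.random.default_rng(0).standard_normal((10, 8)) * 0.3
Y = np.random.default_rng(1).standard_normal((12, 8)) * 0.3
Phi_x = positive_softmax_features(X, num_features=20000, rng=2)
Phi_y = positive_softmax_features(Y, num_features=20000, rng=2)  # same seed -> same W
print("max relative error:", np.abs(Phi_x @ Phi_y.T / np.exp(X @ Y.T) - 1).max())
```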
arXiv Detail & Related papers (2023-02-01T22:43:29Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
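The paper's extragradient scheme is not reproduced; as a baseline for the same entropy-regularized optimal transport problem, here is a minimal Sinkhorn iteration. Cost matrix, histograms, and regularization strength are illustrative assumptions.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, num_iters=500):
    """Entropy-regularized OT between histograms mu, nu with cost matrix C.

    Returns a transport plan P minimizing <P, C> plus an entropic penalty of
    strength eps, via Sinkhorn's alternating marginal-matching updates.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(num_iters):
        v = nu / (K.T @ u)               # match column marginals
        u = mu / (K @ v)                 # match row marginals
    return u[:, None] * K * v[None, :]

# Two histograms on a 1-D grid with squared-distance cost (illustrative).
x = np.linspace(0, 1, 60)
mu = np.exp(-(x - 0.3) ** 2 / 0.01); mu /= mu.sum()
nu = np.exp(-(x - 0.7) ** 2 / 0.02); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(mu, nu, C)
print("transport cost:", (P * C).sum())
```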
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Linear Time Kernel Matrix Approximation via Hyperspherical Harmonics [3.24890820102255]
We propose a new technique for constructing low-rank approximations of matrices that arise in kernel methods for machine learning.
Our approach pairs a novel automatically constructed analytic expansion of the underlying kernel function with a data-dependent compression step to further optimize the approximation.
Experimental results show our approach compares favorably to the commonly used Nystrom method with respect to both accuracy for a given rank and computational time for a given accuracy across a variety of kernels, dimensions, and datasets.
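The hyperspherical-harmonics expansion is not sketched here; below is the Nyström baseline it is compared against, with uniformly subsampled landmarks as an illustrative (assumed) selection rule.

```python
import numpy as np

def rbf(X, Y, ell=0.5):
    """Gaussian (RBF) kernel matrix between point sets X and Y (assumed example kernel)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def nystrom_factor(X, kernel, m=50, rng=None, jitter=1e-10):
    """Rank-m Nystrom approximation K ~= F @ F.T from m uniformly sampled landmarks."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    K_nm = kernel(X, X[idx])                           # n x m cross-covariances
    K_mm = kernel(X[idx], X[idx]) + jitter * np.eye(m)
    L = np.linalg.cholesky(K_mm)
    return np.linalg.solve(L, K_nm.T).T                # F = K_nm @ K_mm^{-1/2}

X = np.random.default_rng(0).uniform(size=(400, 2))
F = nystrom_factor(X, rbf, m=80, rng=1)
K = rbf(X, X)
print("relative Frobenius error:", np.linalg.norm(K - F @ F.T) / np.linalg.norm(K))
```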
arXiv Detail & Related papers (2022-02-08T05:19:39Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Sparse Algorithms for Markovian Gaussian Processes [18.999495374836584]
Sparse Markovian Gaussian processes combine the use of inducing variables with efficient Kalman-filter-like recursions.
We derive a general site-based approach to approximate the non-Gaussian likelihood with local Gaussian terms, called sites.
Our approach results in a suite of novel sparse extensions to algorithms from both the machine learning and signal processing literatures, including variational inference, expectation propagation, and the classical nonlinear Kalman smoothers.
The derived methods are suited to spatio-temporal data, where the model has separate inducing points in both time and space.
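The site-based sparse approximations are not reproduced; as background for the Markovian-GP / Kalman connection, the sketch below runs an exact O(n) Kalman filter for Gaussian process regression with the Matérn-1/2 (exponential) kernel, whose one-dimensional state-space form is exact. Lengthscale, variance, and noise level are illustrative assumptions.

```python
import numpy as np

def matern12_kalman_filter(t, y, ell=0.5, sigma2=1.0, noise=0.1):
    """Exact O(n) filtering for a GP with k(t, t') = sigma2 * exp(-|t - t'| / ell).

    The Matern-1/2 GP is an Ornstein-Uhlenbeck process, so propagating a
    scalar state forward in time reproduces the GP filtering mean/variance.
    """
    m, P = 0.0, sigma2                    # stationary prior at the first point
    means, variances = [], []
    for k in range(len(t)):
        if k > 0:                         # predict across the gap t[k] - t[k-1]
            a = np.exp(-(t[k] - t[k - 1]) / ell)
            m, P = a * m, a * a * P + sigma2 * (1 - a * a)
        gain = P / (P + noise)            # update with observation y[k]
        m += gain * (y[k] - m)
        P *= 1 - gain
        means.append(m); variances.append(P)
    return np.array(means), np.array(variances)

t = np.sort(np.random.default_rng(0).uniform(size=200))
y = np.sin(6 * t) + 0.1 * np.random.default_rng(1).standard_normal(200)
mean, var = matern12_kalman_filter(t, y)
print("last filtered mean/var:", mean[-1], var[-1])
```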
arXiv Detail & Related papers (2021-03-19T09:50:53Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
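The paper's optimal first-order methods are not reproduced; the snippet below shows the simplest use of one of the embeddings named above, a Gaussian projection, in a sketch-and-solve least-squares baseline. Problem size and sketch dimension are illustrative assumptions.

```python
import numpy as np

def sketch_and_solve(A, b, sketch_rows, rng=None):
    """Approximate argmin_x ||A x - b|| by solving the Gaussian-sketched problem."""
    rng = np.random.default_rng(rng)
    S = rng.standard_normal((sketch_rows, A.shape[0])) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((10000, 50))
b = rng.standard_normal(10000)
x_sketch = sketch_and_solve(A, b, sketch_rows=400, rng=1)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print("relative residual gap:",
      (np.linalg.norm(A @ x_sketch - b) - np.linalg.norm(A @ x_exact - b))
      / np.linalg.norm(A @ x_exact - b))
```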
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.