Adaptive finite element type decomposition of Gaussian processes
- URL: http://arxiv.org/abs/2505.24066v1
- Date: Thu, 29 May 2025 23:18:33 GMT
- Title: Adaptive finite element type decomposition of Gaussian processes
- Authors: Jaehoan Kim, Anirban Bhattacharya, Debdeep Pati
- Abstract summary: We investigate a class of approximate Gaussian processes (GP) obtained by taking a linear combination of compactly supported basis functions. We show that the SPDE-based approach with a fixed smoothness parameter leads to a suboptimal rate regardless of how the number of basis functions and the bandwidth are chosen. On the flip side, the lattice-based alternative is adaptively rate-optimal over all smoothness levels of the underlying true function if an appropriate prior is placed on the number of basis functions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we investigate a class of approximate Gaussian processes (GP) obtained by taking a linear combination of compactly supported basis functions with the basis coefficients endowed with a dependent Gaussian prior distribution. This general class includes a popular approach that uses a finite element approximation of the stochastic partial differential equation (SPDE) associated with the Matérn GP. We explore another scalable alternative, popular in the computer emulation literature, where the basis coefficients at a lattice are drawn from a Gaussian process with an inverse-Gamma bandwidth. For both approaches, we study concentration rates of the posterior distribution. We demonstrate that the SPDE-based approach with a fixed smoothness parameter leads to a suboptimal rate, regardless of how the number of basis functions and the bandwidth are chosen, when the underlying true function is sufficiently smooth. On the flip side, we show that the lattice-based approach is adaptively rate-optimal over all smoothness levels of the underlying true function if an appropriate prior is placed on the number of basis functions. Efficient computational strategies are developed and numerics are provided to illustrate the theoretical results.
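As a rough illustration of the construction the abstract describes, the sketch below draws one sample path from an approximate GP built from compactly supported piecewise-linear (finite element) hat functions on a lattice, with the basis coefficients given a dependent Gaussian prior. The squared-exponential coefficient covariance and the fixed bandwidth are placeholder choices, not the paper's exact specification (the lattice approach studied in the paper puts an inverse-Gamma prior on the bandwidth).

```python
import numpy as np

def hat_basis(x, knots):
    """Piecewise-linear hat functions: phi_j(x) = max(0, 1 - |x - t_j|/h),
    supported only on [t_j - h, t_j + h]."""
    h = knots[1] - knots[0]                        # uniform lattice spacing
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

def coeff_cov(knots, bandwidth=0.2):
    """Dependent Gaussian prior on the coefficients: squared-exponential
    covariance between lattice nodes (placeholder choice)."""
    d = knots[:, None] - knots[None, :]
    return np.exp(-0.5 * (d / bandwidth) ** 2)

rng = np.random.default_rng(0)
J = 30                                             # number of basis functions
knots = np.linspace(0.0, 1.0, J)
x = np.linspace(0.0, 1.0, 400)

Phi = hat_basis(x, knots)                          # (400, J) design matrix
K = coeff_cov(knots) + 1e-8 * np.eye(J)            # jitter for numerical stability
w = rng.multivariate_normal(np.zeros(J), K)        # dependent Gaussian coefficients
f = Phi @ w                                        # one draw from the approximate GP
print(f.shape)                                     # (400,)
```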
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
We obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
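For orientation only, here is the standard Gaussian-mechanism DP-SGD step that work in this area builds on; the paper's sparsity-adapted algorithms and their near-optimal rates are not reproduced here, and all parameter names are illustrative.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0,
                rng=np.random.default_rng(0)):
    """One DP-SGD step: clip each per-example gradient to L2 norm <= clip,
    average, and add Gaussian noise calibrated to the clipping norm."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    g = clipped.mean(axis=0)
    n = per_example_grads.shape[0]
    g = g + rng.normal(0.0, noise_mult * clip / n, size=g.shape)
    return params - lr * g

# Toy usage: 8 examples, 5 parameters.
rng = np.random.default_rng(1)
theta = np.zeros(5)
grads = rng.normal(size=(8, 5))
theta = dp_sgd_step(theta, grads, rng=rng)
```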
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right, by which we mean using specific insights from the optimisation and kernel communities, gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
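A minimal sketch of the underlying idea, assuming standard GP regression: run (block-)stochastic gradient descent on the convex dual objective L(alpha) = 0.5 * alpha^T (K + lam*I) alpha - alpha^T y, whose minimizer gives the usual posterior mean; the paper's momentum, averaging, and step-size insights are omitted.

```python
import numpy as np

def rbf_kernel(X, Y, ls=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)
K, lam = rbf_kernel(X, X), 0.1

# Randomized block-coordinate descent on the dual: each step touches only a
# mini-batch of coordinates of alpha, so no full kernel solve is needed.
alpha = np.zeros(200)
for t in range(3000):
    B = rng.choice(200, size=32, replace=False)
    grad_B = K[B] @ alpha + lam * alpha[B] - y[B]   # block of (K + lam*I) alpha - y
    alpha[B] -= 0.02 * grad_B
print(np.linalg.norm(K @ alpha + lam * alpha - y))  # residual; small at the optimum
```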
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
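To make the LLA concrete, here is a hedged toy sketch for 1-D regression: linearize a two-parameter "network" around assumed trained weights, form the Gauss-Newton Laplace covariance over weights, and read off GP-style predictive variances. The model, weights, and noise levels are made up for illustration.

```python
import numpy as np

def f(x, w):                       # tiny "network": w0 * tanh(w1 * x)
    return w[0] * np.tanh(w[1] * x)

def jac(x, w):                     # analytic Jacobian of f with respect to w
    return np.stack([np.tanh(w[1] * x),
                     w[0] * x / np.cosh(w[1] * x) ** 2], axis=-1)

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, 50)
y = np.tanh(2 * X) + 0.1 * rng.normal(size=50)
w_map = np.array([1.0, 2.0])       # pretend these are the trained (MAP) weights
sigma2, prior_prec = 0.01, 1.0

J = jac(X, w_map)                                  # (50, 2)
H = J.T @ J / sigma2 + prior_prec * np.eye(2)      # GGN + prior precision
Sigma = np.linalg.inv(H)                           # Laplace weight covariance

x_star = np.linspace(-3, 3, 5)
j_star = jac(x_star, w_map)
mean = f(x_star, w_map)                            # linearized predictive mean
var = np.einsum('id,de,ie->i', j_star, Sigma, j_star) + sigma2
print(mean, var)
```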
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Variational sparse inverse Cholesky approximation for latent Gaussian processes via double Kullback-Leibler minimization [6.012173616364571]
We combine a variational approximation of the posterior with a similar, efficient sparse inverse Cholesky (SIC)-restricted Kullback-Leibler-optimal approximation of the prior.
For this setting, our variational approximation can be computed via gradient descent in polylogarithmic time per iteration.
We provide numerical comparisons showing that the proposed double-Kullback-Leibler-optimal Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate for stationary kernels than alternative approaches.
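A hedged sketch of the prior-side ingredient: a Vecchia-type sparse inverse Cholesky factor L with L @ L.T approximating K^{-1}, each column solved from a small conditioning set of nearby, later-ordered points via the known KL-optimal column formula. The variational posterior factor and the double-KL construction of the paper are not shown, and all sizes are illustrative.

```python
import numpy as np

def rbf(X, ls=0.1):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(3)
n, m = 100, 8                                  # m = conditioning-set size
X = np.sort(rng.uniform(0, 1, n))[:, None]
K = rbf(X) + 1e-4 * np.eye(n)

L = np.zeros((n, n))
for i in range(n):
    # conditioning set: i plus up to m nearest later-ordered points
    later = np.arange(i, n)
    s = np.sort(later[np.argsort(np.abs(X[later, 0] - X[i, 0]))[:m + 1]])
    e = np.zeros(len(s)); e[0] = 1.0           # s[0] == i by construction
    c = np.linalg.solve(K[np.ix_(s, s)], e)    # KL-optimal column, unnormalized
    L[s, i] = c / np.sqrt(c[0])                # normalize; L is lower triangular

# Diagnostic: L @ L.T should approximate K^{-1}, so L @ L.T @ K is near identity.
print(np.linalg.norm(L @ L.T @ K - np.eye(n)))
```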
arXiv Detail & Related papers (2023-01-30T21:50:08Z)
- Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization [0.0]
This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling.
The method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in ranges of interest.
It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization.
arXiv Detail & Related papers (2022-06-07T06:26:46Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed ensemble-GP Thompson sampling (EGP-TS) method to the global optimum, analysis is conducted based on the notion of Bayesian regret.
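The random-feature kernel approximation mentioned above can be sketched as follows: random Fourier features for an RBF kernel (Rahimi-Recht style), so that k(x, y) is approximated by an inner product of finite feature vectors. The ensemble and Thompson-sampling machinery of the paper is omitted, and all sizes are illustrative.

```python
import numpy as np

def rff_features(X, D=500, ls=0.5, rng=np.random.default_rng(4)):
    """Random Fourier features for the RBF kernel exp(-||x-y||^2 / (2 ls^2))."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / ls, size=(d, D))   # frequencies from the spectral density
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(6, 2))
Phi = rff_features(X)
K_true = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1) / 0.5**2)
print(np.abs(Phi @ Phi.T - K_true).max())        # small for moderately large D
```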
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design of the coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost.
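As a hedged illustration of the main primitive: an unbiased zeroth-order gradient estimator that samples coordinates from an importance distribution p and spends two function queries per sampled coordinate. The uniform p below is a placeholder, not the paper's designed sampling scheme.

```python
import numpy as np

def zo_gradient(f, x, p, num_samples=10, mu=1e-4, rng=np.random.default_rng(6)):
    """Importance-weighted coordinate-wise finite differences; unbiased for
    smooth f as mu -> 0, since E[g] = sum_j p_j * (df/dx_j / p_j) e_j."""
    g = np.zeros(x.size)
    for _ in range(num_samples):
        j = rng.choice(x.size, p=p)              # coordinate drawn from p
        e = np.zeros(x.size); e[j] = 1.0
        fd = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)   # two queries
        g[j] += fd / (p[j] * num_samples)        # importance weight 1/p_j
    return g

f = lambda x: (x ** 2).sum()
x0 = np.ones(5)
p = np.full(5, 0.2)                              # uniform placeholder weights
print(zo_gradient(f, x0, p))                     # noisy, unbiased estimate of 2*x0
```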
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- SGB: Stochastic Gradient Bound Method for Optimizing Partition Functions [15.33098084159285]
This paper addresses the problem of optimizing partition functions in a learning setting.
We propose a variant of the bound majorization algorithm that relies on upper-bounding the partition function with a quadratic surrogate.
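A hedged sketch of the generic quadratic-surrogate idea, not the paper's tighter bound or its stochastic variant: the log-partition A(theta) = log sum_y exp(theta^T f(y)) has Hessian Cov[f] bounded above by (max_y ||f(y)||^2) I, so repeatedly minimizing the resulting quadratic upper bound is a simple majorize-minimize scheme. The features, data, and ridge term are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
F = rng.normal(size=(20, 3))                   # feature vectors f(y), 20 outcomes
L = (np.linalg.norm(F, axis=1) ** 2).max()     # curvature bound: Cov[f] <= L * I
lam = 0.1                                      # ridge term keeps the optimum finite

def grad_A(theta):
    """Gradient of A(theta) is E_p[f] under p(y) proportional to exp(theta^T f(y))."""
    logits = F @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return F.T @ p

# Minimize A(theta) - theta^T f_obs + 0.5 * lam * ||theta||^2 by minimizing the
# quadratic surrogate at each step, i.e. damped gradient steps of size 1/(L+lam).
f_obs = F[0]
theta = np.zeros(3)
for _ in range(500):
    theta -= (grad_A(theta) - f_obs + lam * theta) / (L + lam)
print(np.linalg.norm(grad_A(theta) - f_obs + lam * theta))   # near zero at optimum
```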
arXiv Detail & Related papers (2020-11-03T04:42:51Z)
- Preferential Bayesian optimisation with Skew Gaussian Processes [0.225596179391365]
We show that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP).
We derive an efficient method to compute the exact SkewGP posterior and use it as a surrogate model for preferential Bayesian optimisation (PBO) with standard acquisition functions.
We also show that our framework can be extended to deal with mixed preferential-categorical BO.
arXiv Detail & Related papers (2020-08-15T08:23:17Z)