Uncertainty quantification using martingales for misspecified Gaussian
processes
- URL: http://arxiv.org/abs/2006.07368v2
- Date: Tue, 2 Mar 2021 11:35:13 GMT
- Title: Uncertainty quantification using martingales for misspecified Gaussian
processes
- Authors: Willie Neiswanger, Aaditya Ramdas
- Abstract summary: We address uncertainty quantification for Gaussian processes (GPs) under misspecified priors.
We construct a confidence sequence (CS) for the unknown function using martingale techniques.
Our CS is statistically valid and empirically outperforms standard GP methods.
- Score: 52.22233158357913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address uncertainty quantification for Gaussian processes (GPs) under
misspecified priors, with an eye towards Bayesian Optimization (BO). GPs are
widely used in BO because they easily enable exploration based on posterior
uncertainty bands. However, this convenience comes at the cost of robustness: a
typical function encountered in practice is unlikely to have been drawn from
the data scientist's prior, in which case uncertainty estimates can be
misleading, and the resulting exploration can be suboptimal. We present a
frequentist approach to GP/BO uncertainty quantification. We utilize the GP
framework as a working model, but do not assume correctness of the prior. We
instead construct a confidence sequence (CS) for the unknown function using
martingale techniques. There is a necessary cost to achieving robustness: if
the prior was correct, posterior GP bands are narrower than our CS.
Nevertheless, when the prior is wrong, our CS is statistically valid and
empirically outperforms standard GP methods, in terms of both coverage and
utility for BO. Additionally, we demonstrate that powered likelihoods provide
robustness against model misspecification.
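A minimal sketch of the powered-likelihood idea mentioned above, under the standard assumption of a Gaussian observation model: raising a Gaussian likelihood to a power $\eta < 1$ is equivalent (up to normalization) to inflating the noise variance from $\sigma^2$ to $\sigma^2/\eta$, which widens the posterior bands and makes them more conservative. The kernel, hyperparameters, and $\eta$ below are arbitrary illustrative choices; this is not the paper's martingale confidence-sequence construction itself.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise_var):
    """Exact GP regression posterior mean and marginal std at test points Xs."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 15)
f_true = lambda x: np.sin(6 * x) + 0.5 * x            # "true" function, not drawn from the prior
y = f_true(X) + 0.1 * rng.normal(size=X.shape)
Xs = np.linspace(0, 1, 200)

eta = 0.25                                             # likelihood power (temperature), a free choice here
mu_std, sd_std = gp_posterior(X, y, Xs, noise_var=0.1**2)          # standard GP posterior
mu_pow, sd_pow = gp_posterior(X, y, Xs, noise_var=0.1**2 / eta)    # powered likelihood: noise variance inflated by 1/eta

print("max band half-width (standard):", (2 * sd_std).max())
print("max band half-width (powered): ", (2 * sd_pow).max())       # wider, hence more conservative
```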
Related papers
- Information-Theoretic Safe Bayesian Optimization [59.758009422067005]
We consider a sequential decision making task, where the goal is to optimize an unknown function without evaluating parameters that violate an unknown (safety) constraint.
Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case.
We propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate.
arXiv Detail & Related papers (2024-02-23T14:31:10Z) - Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
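A minimal sketch of what such a differentiable calibration term can look like, assuming the hard indicator "observation falls inside the predicted interval" is relaxed with sigmoids. This is an illustrative relaxation, not necessarily the paper's exact formulation; in a real pipeline it would be written in an autodiff framework (e.g. PyTorch or JAX) so the penalty can be backpropagated into the neural model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relaxed_coverage_penalty(y, lower, upper, target=0.9, temperature=0.05):
    """Smooth surrogate for (empirical coverage - target)^2.

    The hard indicator 1[lower <= y <= upper] is replaced by
    sigmoid((y - lower)/t) * sigmoid((upper - y)/t), which is
    differentiable in the interval endpoints.
    """
    soft_inside = sigmoid((y - lower) / temperature) * sigmoid((upper - y) / temperature)
    return (soft_inside.mean() - target) ** 2

# Toy check: intervals covering ~50% of standard-normal draws incur a larger
# penalty than intervals tuned to the 90% target.
rng = np.random.default_rng(1)
y = rng.normal(size=1000)
print(relaxed_coverage_penalty(y, lower=-0.67, upper=0.67))   # ~50% coverage -> large penalty
print(relaxed_coverage_penalty(y, lower=-1.64, upper=1.64))   # ~90% coverage -> small penalty
```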
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Leveraging Locality and Robustness to Achieve Massively Scalable
Gaussian Process Regression [1.3518297878940662]
We introduce a new perspective by exploring robustness properties and limiting behaviour of GP nearest-neighbour (GPnn) prediction.
As the data size n increases, the accuracy of the estimated parameters and the validity of the GP model assumptions become increasingly irrelevant to GPnn predictive accuracy.
We show that this source of inaccuracy can be corrected for, thereby achieving both well-calibrated uncertainty measures and accurate predictions at remarkably low computational cost.
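For context, the nearest-neighbour GP (GPnn) baseline that this work builds on conditions an exact GP only on the k training points closest to each test input, so a prediction never needs more than a k x k solve. A minimal sketch with a fixed RBF kernel and arbitrary hyperparameters (not the paper's corrected, calibrated variant):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gpnn_predict(X, y, x_star, k=20, noise_var=0.01, ls=0.3):
    """Nearest-neighbour GP: condition only on the k closest training points."""
    idx = np.argsort(np.abs(X - x_star))[:k]           # k nearest neighbours (1-D inputs)
    Xn, yn = X[idx], y[idx]
    K = rbf(Xn, Xn, ls) + noise_var * np.eye(k)
    ks = rbf(Xn, np.array([x_star]), ls).ravel()
    mean = ks @ np.linalg.solve(K, yn)
    var = 1.0 - ks @ np.linalg.solve(K, ks) + noise_var
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 5000)                            # large n: only a k x k solve is ever needed
y = np.sin(8 * X) + 0.1 * rng.normal(size=X.shape)
print(gpnn_predict(X, y, x_star=0.5))
```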
arXiv Detail & Related papers (2023-06-26T14:32:46Z) - Accounting for Gaussian Process Imprecision in Bayesian Optimization [0.0]
We study the effect of the Gaussian processes' prior specifications on classical BO's convergence.
We introduce PROBO as a generalization of BO that aims to make the method more robust to misspecification of the prior mean parameters.
We test our approach against classical BO on a real-world problem from material science and observe PROBO to converge faster.
arXiv Detail & Related papers (2021-11-16T08:45:39Z) - Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of the misspecification.
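Stated in symbols, the misspecified setting above assumes only that the unknown target $f$ is uniformly $\epsilon$-close to some function of bounded RKHS norm (with $B$ denoting the norm bound and $\mathcal{X}$ the domain):

```latex
\exists\, \tilde{f} \in \mathcal{H}_k \ \text{with}\ \|\tilde{f}\|_{\mathcal{H}_k} \le B
\quad \text{such that} \quad
\sup_{x \in \mathcal{X}} \bigl| f(x) - \tilde{f}(x) \bigr| \le \epsilon .
```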
arXiv Detail & Related papers (2021-11-09T09:00:02Z) - Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for
Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
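One simple reading of the construction above: take a standard pointwise GP error bound of the form $|f(x) - \mu_\theta(x)| \le \beta\,\sigma_\theta(x)$ and make it robust by taking its worst case over a confidence region $\Theta$ of hyperparameters. The sketch below does this for a small lengthscale grid; $\beta$ and $\Theta$ are placeholders rather than the paper's calibrated choices.

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ls, noise_var=0.01):
    """GP posterior mean and std at Xs for a fixed RBF lengthscale ls."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(X, X) + noise_var * np.eye(len(X))
    Ks = k(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 30)
y = np.sin(6 * X) + 0.1 * rng.normal(size=X.shape)
Xs = np.linspace(0, 1, 100)

beta = 2.0                            # placeholder scaling, not the paper's calibrated value
Theta = np.linspace(0.1, 0.5, 9)      # placeholder hyperparameter confidence region (lengthscales)

preds = [gp_fit_predict(X, y, Xs, ls) for ls in Theta]
upper = np.max([m + beta * s for m, s in preds], axis=0)   # robust upper envelope over Theta
lower = np.min([m - beta * s for m, s in preds], axis=0)   # robust lower envelope over Theta
print("average robust band width:", (upper - lower).mean())
```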
arXiv Detail & Related papers (2021-09-06T17:10:01Z) - MuyGPs: Scalable Gaussian Process Hyperparameter Estimation Using Local
Cross-Validation [1.2233362977312945]
We present MuyGPs, a novel, efficient GP hyperparameter estimation method.
MuyGPs builds upon prior methods that take advantage of the nearest neighbors structure of the data.
We show that our method outperforms all known competitors in terms of both time-to-solution and the root-mean-squared error of the predictions.
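A generic sketch of local (nearest-neighbour) cross-validation for hyperparameter selection, in the spirit of the summary above but not the MuyGPs algorithm itself: each training point is predicted from its k nearest neighbours under a candidate lengthscale, and the candidate with the smallest leave-one-out error is kept.

```python
import numpy as np

def rbf(a, b, ls):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def loo_nn_mse(X, y, ls, k=15, noise_var=0.01):
    """Mean squared leave-one-out error using only each point's k nearest neighbours."""
    err = 0.0
    for i in range(len(X)):
        d = np.abs(X - X[i])
        d[i] = np.inf                                   # exclude the point itself
        idx = np.argsort(d)[:k]
        K = rbf(X[idx], X[idx], ls) + noise_var * np.eye(k)
        ks = rbf(X[idx], X[i:i + 1], ls).ravel()
        err += (y[i] - ks @ np.linalg.solve(K, y[idx])) ** 2
    return err / len(X)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 400)
y = np.sin(8 * X) + 0.1 * rng.normal(size=X.shape)

candidates = [0.05, 0.1, 0.2, 0.4]
best = min(candidates, key=lambda ls: loo_nn_mse(X, y, ls))
print("selected lengthscale:", best)
```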
arXiv Detail & Related papers (2021-04-29T18:10:21Z) - Adversarial Robustness Guarantees for Gaussian Processes [22.403365399119107]
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.
We present a framework to analyse adversarial robustness of GPs, defined as invariance of the model's decision to bounded perturbations.
We develop a branch-and-bound scheme to refine the bounds and show, for any $\epsilon > 0$, that our algorithm is guaranteed to converge to values $\epsilon$-close to the actual values in finitely many iterations.
arXiv Detail & Related papers (2021-04-07T15:14:56Z) - Near-linear Time Gaussian Process Optimization with Adaptive Batching
and Resparsification [119.41129787351092]
We introduce BBKB, the first no-regret GP optimization algorithm that provably runs in near-linear time and selects candidates in batches.
We show that the same bound can be used to adaptively delay costly updates to the sparse GP approximation, achieving a near-constant per-step amortized cost.
arXiv Detail & Related papers (2020-02-23T17:43:29Z)