Bayesian Probabilistic Numerical Integration with Tree-Based Models
- URL: http://arxiv.org/abs/2006.05371v3
- Date: Thu, 2 Dec 2021 13:35:54 GMT
- Title: Bayesian Probabilistic Numerical Integration with Tree-Based Models
- Authors: Harrison Zhu, Xing Liu, Ruya Kang, Zhichao Shen, Seth Flaxman and
François-Xavier Briol
- Abstract summary: BQ is a method for solving numerical integration problems in a Bayesian manner.
This paper proposes BART-Int, a Bayesian numerical integration algorithm based on BART priors, which are easy to tune and well-suited for discontinuous functions.
- Score: 5.353941016039247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian quadrature (BQ) is a method for solving numerical integration
problems in a Bayesian manner, which allows users to quantify their uncertainty
about the solution. The standard approach to BQ is based on a Gaussian process
(GP) approximation of the integrand. As a result, BQ is inherently limited to
cases where GP approximations can be done in an efficient manner, thus often
prohibiting very high-dimensional or non-smooth target functions. This paper
proposes to tackle this issue with a new Bayesian numerical integration
algorithm based on Bayesian Additive Regression Trees (BART) priors, which we
call BART-Int. BART priors are easy to tune and well-suited for discontinuous
functions. We demonstrate that they also lend themselves naturally to a
sequential design setting and that explicit convergence rates can be obtained
in a variety of settings. The advantages and disadvantages of this new
methodology are highlighted on a set of benchmark tests including the Genz
functions, and on a Bayesian survey design problem.
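To make the GP-based approach concrete, the following is a minimal numpy sketch of standard Bayesian quadrature (not the paper's BART-Int method): fit a GP to integrand evaluations and read off a Gaussian posterior over the integral via the kernel mean embedding. The squared-exponential kernel, lengthscale, toy integrand, and Monte Carlo approximation of the embedding are all illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel with lengthscale ls
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)    # integrand; true integral over [0, 1] is 0

X = rng.uniform(0, 1, 12)              # design points
y = f(X)
K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability

# Kernel mean embedding z_i = \int k(x, X_i) dx, approximated by Monte Carlo
U = rng.uniform(0, 1, 2000)
z = rbf(X, U).mean(axis=1)

post_mean = z @ np.linalg.solve(K, y)  # BQ posterior mean of the integral
prior_var = rbf(U, U).mean()           # \int\int k(x, x') dx dx', also by MC
post_var = prior_var - z @ np.linalg.solve(K, z)
print(post_mean, post_var)
```

The posterior variance shrinks as the design points cover the domain, which is the uncertainty quantification the abstract refers to; when the integrand is discontinuous or high-dimensional, this GP surrogate is exactly where the approach struggles and a BART prior is proposed instead.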
Related papers
- BatchGFN: Generative Flow Networks for Batch Active Learning [80.73649229919454]
BatchGFN is a novel approach for pool-based active learning that uses generative flow networks to sample sets of data points proportional to a batch reward.
We show our approach enables principled sampling of near-optimal-utility batches at inference time, with a single forward pass per point in the batch, on toy regression problems.
arXiv Detail & Related papers (2023-06-26T20:41:36Z)
- Bayesian Numerical Integration with Neural Networks [27.807370932294326]
We propose an alternative approach based on Bayesian neural networks which we call Bayesian Stein networks.
The key ingredients are a neural network architecture based on Stein operators, and an approximation of the Bayesian posterior based on the Laplace approximation.
We show that this leads to orders of magnitude speed-ups on the popular Genz functions benchmark, and on challenging problems arising in the Bayesian analysis of dynamical systems.
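The Laplace approximation mentioned above can be illustrated in one dimension: find the MAP estimate of a parameter, then approximate the posterior by a Gaussian whose variance is the inverse Hessian of the negative log-posterior at that point. The toy negative log-posterior below is a hypothetical stand-in, not the paper's Stein-network posterior.

```python
# Hypothetical 1-D negative log-posterior (toy stand-in for a network weight)
def nlp(th):
    return 0.5 * (th - 1.0) ** 2 + 0.1 * th ** 4

# Newton iterations with finite differences to find the MAP estimate
eps = 1e-4
th = 0.0
for _ in range(50):
    g = (nlp(th + eps) - nlp(th - eps)) / (2 * eps)
    h = (nlp(th + eps) - 2 * nlp(th) + nlp(th - eps)) / eps ** 2
    th -= g / h

map_est = th
# Laplace approximation: Gaussian centred at the MAP, variance = 1 / Hessian
hess = (nlp(map_est + eps) - 2 * nlp(map_est) + nlp(map_est - eps)) / eps ** 2
laplace_var = 1.0 / hess
print(map_est, laplace_var)
```

In the Bayesian Stein networks setting, the same idea is applied to the network weights, which is what makes the posterior over the integral cheap to compute.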
arXiv Detail & Related papers (2023-05-22T17:19:09Z)
- Batch Bayesian optimisation via density-ratio estimation with guarantees [26.052368583196426]
We present a theoretical analysis of BORE's regret and an extension of the algorithm with improved uncertainty estimates.
We also show that BORE can be naturally extended to a batch optimisation setting by recasting the problem as approximate Bayesian inference.
arXiv Detail & Related papers (2022-09-22T00:42:18Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- GP-BART: a novel Bayesian additive regression trees approach using Gaussian processes [1.03590082373586]
The GP-BART model is an extension of BART which addresses the limitation by assuming GP priors for the predictions of each terminal node among all trees.
The model's effectiveness is demonstrated through applications to simulated and real-world data, surpassing the performance of traditional modeling approaches in various scenarios.
arXiv Detail & Related papers (2022-04-05T11:18:44Z)
- Generalized Bayesian Additive Regression Trees Models: Beyond Conditional Conjugacy [2.969705152497174]
In this article, we greatly expand the domain of applicability of BART to arbitrary generalized BART models.
Our algorithm requires only that the user be able to compute the likelihood and (optionally) its gradient and Fisher information.
The potential applications are very broad; we consider examples in survival analysis, structured heteroskedastic regression, and gamma shape regression.
arXiv Detail & Related papers (2022-02-20T22:52:07Z)
- Bayesian decision-making under misspecified priors with applications to meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
arXiv Detail & Related papers (2021-07-03T23:17:26Z)
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret.
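The idea of sampling the trade-off parameter can be sketched in a few lines of numpy: at each step, draw beta from a distribution instead of using a fixed schedule, then maximise the usual UCB acquisition mu + sqrt(beta) * sigma. The toy objective, kernel, grid, and exponential sampling distribution are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xq, X, y, noise=1e-6):
    # GP posterior mean and std at query points Xq given data (X, y)
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mu, np.sqrt(np.clip(var, 0, None))

rng = np.random.default_rng(1)
obj = lambda x: -(x - 0.7) ** 2        # toy objective to maximise
X = rng.uniform(0, 1, 3)               # initial design
y = obj(X)
grid = np.linspace(0, 1, 200)

for t in range(15):
    mu, sd = gp_posterior(grid, X, y)
    beta = rng.exponential(scale=2.0)  # sampled trade-off parameter (key idea)
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]
    X = np.append(X, x_next)
    y = np.append(y, obj(x_next))

best_x = X[np.argmax(y)]
print(best_x)
```

Because beta is random, occasional large draws force exploration while typical draws exploit the posterior mean, which is the mechanism behind the regret bound the summary describes.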
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
- Best Arm Identification for Cascading Bandits in the Fixed Confidence Setting [81.70513857417106]
We design and analyze CascadeBAI, an algorithm for finding the best set of $K$ items.
An upper bound on the time complexity of CascadeBAI is derived by overcoming a crucial analytical challenge.
We show, through the derivation of a lower bound on the time complexity, that the performance of CascadeBAI is optimal in some practical regimes.
arXiv Detail & Related papers (2020-01-23T16:47:52Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
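The standard (non-robust) baseline that DRBQO improves on can be sketched as follows: given a fixed i.i.d. sample set of the unknown context distribution, maximise the Monte Carlo estimate of the expected objective. The quadratic objective and Gaussian context samples are hypothetical illustrations; the DRBQO algorithm itself replaces this with distributionally robust posterior sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Objective depends on a decision x and a random context w
f = lambda x, w: -(x - w) ** 2

# Fixed i.i.d. samples standing in for the unknown context distribution
W = rng.normal(loc=0.5, scale=0.1, size=100)

def mc_objective(x):
    # Monte Carlo estimate of E_w[f(x, w)] under the fixed sample set
    return f(x, W).mean()

grid = np.linspace(0, 1, 201)
x_star = grid[np.argmax([mc_objective(x) for x in grid])]
print(x_star)   # maximiser of the empirical expected objective
```

The weakness the paper targets is visible here: the solution is tied to the particular empirical sample set W, so it can perform poorly under distributions that deviate from those samples.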
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
- Variational Bayesian Methods for Stochastically Constrained System Design Problems [7.347989843033034]
We study system design problems stated as parameterized programs with a chance-constraint set.
We propose a variational Bayes-based method to approximately compute the posterior predictive integral.
arXiv Detail & Related papers (2020-01-06T05:21:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.