Adaptive quadrature schemes for Bayesian inference via active learning
- URL: http://arxiv.org/abs/2006.00535v3
- Date: Tue, 19 Jan 2021 16:44:59 GMT
- Title: Adaptive quadrature schemes for Bayesian inference via active learning
- Authors: F. Llorente, L. Martino, V. Elvira, D. Delgado, J. López-Santiago
- Abstract summary: We propose novel adaptive quadrature schemes based on an active learning procedure.
We consider an interpolative approach for building a surrogate density, combining it with Monte Carlo sampling methods and other quadrature rules.
Numerical results show the advantage of the proposed approach, including a challenging inference problem in an astronomical model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical integration and emulation are fundamental topics across scientific
fields. We propose novel adaptive quadrature schemes based on an active
learning procedure. We consider an interpolative approach for building a
surrogate posterior density, combining it with Monte Carlo sampling methods and
other quadrature rules. The nodes of the quadrature are sequentially chosen by
maximizing a suitable acquisition function, which takes into account the
current approximation of the posterior and the positions of the nodes. This
maximization does not require additional evaluations of the true posterior. We
introduce two specific schemes based on Gaussian and Nearest Neighbors (NN)
bases. For the Gaussian case, we also provide a novel procedure for fitting the
bandwidth parameter, in order to build a suitable emulator of a density
function. With both techniques, we always obtain a positive estimate of the
marginal likelihood (a.k.a. Bayesian evidence). An equivalent importance
sampling interpretation is also described, which allows the design of extended
schemes. Several theoretical results are provided and discussed. Numerical
results show the advantage of the proposed approach, including a challenging
inference problem in an astronomical dynamical model, with the goal of revealing
the number of planets orbiting a star.
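For concreteness, here is a minimal one-dimensional NumPy sketch of the idea: a Gaussian (RBF) interpolant of the unnormalized target is refitted as nodes are added, each new node maximizes a simple acquisition built only from the current surrogate and the node positions (no extra target evaluations), and the evidence estimate follows from the closed-form integrals of the basis functions. The toy target, the fixed bandwidth and the particular acquisition are illustrative assumptions; in particular, the paper's bandwidth-fitting procedure that guarantees a positive evidence estimate is not reproduced here.
```python
import numpy as np

# Minimal 1-D sketch of adaptive quadrature with a Gaussian (RBF) interpolative
# surrogate of an unnormalised posterior.  The toy target, the fixed bandwidth
# and the particular acquisition are illustrative assumptions only.

def target(x):
    # Unnormalised toy posterior: a two-component Gaussian mixture.
    return 0.6 * np.exp(-0.5 * (x - 1.0) ** 2) + 0.4 * np.exp(-0.5 * ((x + 2.0) / 0.7) ** 2)

def rbf(x, nodes, h):
    return np.exp(-0.5 * ((x[:, None] - nodes[None, :]) / h) ** 2)

def fit_surrogate(nodes, vals, h):
    # Interpolation coefficients beta with sum_i beta_i k(x_j, x_i) = pi(x_j).
    K = rbf(nodes, nodes, h) + 1e-10 * np.eye(len(nodes))
    return np.linalg.solve(K, vals)

def acquisition(x, nodes, beta, h):
    # Heuristic: surrogate value times distance to the nearest node, so new nodes
    # land where the emulated density is large and the design is still sparse.
    surrogate = rbf(x, nodes, h) @ beta
    dist = np.min(np.abs(x[:, None] - nodes[None, :]), axis=1)
    return np.abs(surrogate) * dist

nodes = np.array([-4.0, 0.0, 4.0])       # initial design
h = 0.8                                  # kernel bandwidth (fixed here for simplicity)
grid = np.linspace(-8.0, 8.0, 2001)      # cheap grid on which the acquisition is maximised

for _ in range(15):
    vals = target(nodes)                 # the true target is only evaluated at the nodes
    beta = fit_surrogate(nodes, vals, h)
    nodes = np.append(nodes, grid[np.argmax(acquisition(grid, nodes, beta, h))])

beta = fit_surrogate(nodes, target(nodes), h)
# Each Gaussian basis integrates to h * sqrt(2 * pi), which yields the evidence estimate.
Z_hat = h * np.sqrt(2.0 * np.pi) * beta.sum()
print("estimated evidence:", Z_hat)
```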
Related papers
- Maximum a Posteriori Estimation for Linear Structural Dynamics Models Using Bayesian Optimization with Rational Polynomial Chaos Expansions [0.01578888899297715]
We propose an extension to an existing sparse Bayesian learning approach for MAP estimation.
We introduce a Bayesian optimization approach, which adaptively enriches the experimental design.
By combining the sparsity-inducing learning procedure with the experimental design, we effectively reduce the number of model evaluations.
arXiv Detail & Related papers (2024-08-07T06:11:37Z)
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
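As a rough illustration of the random-feature ingredient, the sketch below draws approximate posterior functions from a single GP via random Fourier features and uses them for Thompson sampling on a made-up 1-D objective; the ensemble of GPs and the Bayesian-regret analysis of EGP-TS are not reproduced, and the objective, length-scale and feature count are assumptions.
```python
import numpy as np

# Sketch of Thompson sampling from a GP surrogate using random Fourier features
# (one ingredient of scalable function sampling).  The objective, kernel
# length-scale and feature count below are illustrative assumptions.

rng = np.random.default_rng(1)
D, ell, noise = 200, 0.5, 0.05                   # feature count, length-scale, noise std

w = rng.normal(0.0, 1.0 / ell, size=D)           # spectral frequencies of an RBF kernel
b = rng.uniform(0.0, 2 * np.pi, size=D)

def features(x):
    # 1-D inputs -> (n, D) random-feature map approximating the RBF kernel.
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, w) + b)

def objective(x):
    return np.sin(3 * x) + 0.5 * np.cos(7 * x) + noise * rng.normal(size=np.shape(x))

X = rng.uniform(-1, 1, size=3)                   # initial evaluations
y = objective(X)
cand = np.linspace(-1, 1, 512)                   # candidate set for the acquisition step

for _ in range(20):
    Phi = features(X)
    A = Phi.T @ Phi / noise**2 + np.eye(D)       # Bayesian linear model on the features
    mean = np.linalg.solve(A, Phi.T @ y) / noise**2
    theta = rng.multivariate_normal(mean, np.linalg.inv(A))   # one posterior function sample
    x_next = cand[np.argmax(features(cand) @ theta)]          # Thompson-sampling choice
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print("best observed value:", y.max())
```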
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
The characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
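For intuition only, here is a bare-bones rejection-ABC sketch of the likelihood-free idea, with a hypothetical exponential-decay simulator standing in for the proposed system of equations; modern LFI tools replace the rejection step with learned neural surrogates, which is not shown.
```python
import numpy as np

# Basic rejection-ABC sketch of likelihood-free inversion: simulate from a forward
# model and keep the parameters whose outputs land close to the observation.  The
# exponential-decay simulator below is a stand-in, not the paper's dMRI equations.

rng = np.random.default_rng(2)
b_values = np.array([0.5, 1.0, 2.0, 3.0])          # sparse "b-shell" stand-ins

def forward(theta):
    # Hypothetical simulator with parameters theta = (amplitude, decay rate).
    return theta[0] * np.exp(-theta[1] * b_values) + 0.01 * rng.normal(size=b_values.size)

theta_true = np.array([0.8, 1.5])
observed = forward(theta_true)

n_sim, eps = 50_000, 0.05
prior_draws = rng.uniform([0.0, 0.0], [2.0, 3.0], size=(n_sim, 2))   # uniform prior box
accepted = np.array([theta for theta in prior_draws
                     if np.linalg.norm(forward(theta) - observed) < eps])
print("accepted draws:", len(accepted))
print("posterior mean estimate:", accepted.mean(axis=0))
```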
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Sensing Cox Processes via Posterior Sampling and Positive Bases [56.82162768921196]
We study adaptive sensing of point processes, a widely used model from spatial statistics.
We model the intensity function as a sample from a truncated Gaussian process, represented in a specially constructed positive basis.
Our adaptive sensing algorithms use Langevin dynamics and are based on posterior sampling (Cox-Thompson) and top-two posterior sampling (Top2) principles.
arXiv Detail & Related papers (2021-10-21T14:47:06Z)
- Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in complex systems.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
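A loose sketch of that workflow is shown below, with plain PCA and a cubic polynomial fit standing in for the paper's manifold-learning and polynomial chaos expansion steps; the toy solver, the single random input and all dimensions are assumptions.
```python
import numpy as np

# Sketch of a manifold-learning + polynomial surrogate: compress high-dimensional
# model outputs to a few latent coordinates, fit a polynomial map from the
# uncertain input to those coordinates, then decode.  PCA and a cubic polynomial
# stand in for the paper's manifold-learning and chaos-expansion steps.

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 200)

def expensive_model(xi):
    # Hypothetical solver: a 200-dimensional field depending on one random input xi.
    return np.sin(2 * np.pi * t * (1 + 0.3 * xi)) * np.exp(-xi * t)

xi_train = rng.uniform(-1, 1, size=40)
Y = np.stack([expensive_model(x) for x in xi_train])       # (40, 200) snapshots

# "Manifold" step: linear PCA onto r latent coordinates.
Y_mean = Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
r = 5
Z = (Y - Y_mean) @ Vt[:r].T                                 # latent training targets

# Surrogate step: cubic polynomial regression from xi to each latent coordinate.
P = np.vander(xi_train, 4)                                  # (40, 4) polynomial features
coef, *_ = np.linalg.lstsq(P, Z, rcond=None)

def surrogate(xi):
    return np.vander(np.atleast_1d(xi), 4) @ coef @ Vt[:r] + Y_mean

# Cheap UQ: push many prior samples through the surrogate instead of the solver.
fields = surrogate(rng.uniform(-1, 1, size=5000))
print("surrogate mean field (first 5 entries):", np.round(fields.mean(axis=0)[:5], 3))
```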
arXiv Detail & Related papers (2021-07-21T00:24:15Z)
- Distributed Variational Bayesian Algorithms Over Sensor Networks [6.572330981878818]
We propose two novel distributed VB algorithms for general Bayesian inference problems.
The proposed algorithms perform almost as well as the corresponding centralized VB algorithm, which relies on all data being available in a fusion center.
arXiv Detail & Related papers (2020-11-27T08:12:18Z)
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
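The pathwise view is compact in code: a prior draw is turned into a posterior draw via Matheron's update f_post(x) = f_prior(x) + k(x, X) (K(X, X) + sigma^2 I)^{-1} (y - f_prior(X) - eps). The sketch below uses an exact prior draw on a small grid; the efficiency gains come from replacing that draw with cheap approximate prior samples, which is not shown here.
```python
import numpy as np

# Pathwise (Matheron) update: convert a Gaussian process prior sample into a
# posterior sample by adding a data-driven correction.  The exact grid-based
# prior draw below is for illustration; scalable variants use approximate priors.

rng = np.random.default_rng(3)

def k(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

X = np.array([-0.8, -0.2, 0.3, 0.9])             # training inputs
y = np.sin(4 * X) + 0.05 * rng.normal(size=X.size)
grid = np.linspace(-1.5, 1.5, 300)               # test locations
sigma2 = 0.05 ** 2

# One joint prior sample over [grid, X].
joint = np.concatenate([grid, X])
L = np.linalg.cholesky(k(joint, joint) + 1e-8 * np.eye(joint.size))
f = L @ rng.normal(size=joint.size)
f_grid, f_X = f[:grid.size], f[grid.size:]

# Matheron's rule: f_post = f_prior + k(., X) (K + sigma^2 I)^{-1} (y - f_prior(X) - eps).
eps = np.sqrt(sigma2) * rng.normal(size=X.size)
correction = np.linalg.solve(k(X, X) + sigma2 * np.eye(X.size), y - f_X - eps)
posterior_sample = f_grid + k(grid, X) @ correction
print("posterior sample at first five grid points:", np.round(posterior_sample[:5], 3))
```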
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
- Deep Importance Sampling based on Regression for Model Inversion and Emulation [0.0]
We present an adaptive importance sampling (AIS) framework called Regression-based Adaptive Deep Importance Sampling (RADIS).
RADIS is based on a deep architecture of two (or more) nested IS schemes, in order to draw samples from the constructed emulator.
A real-world application in remote sensing model inversion and emulation confirms the validity of the approach.
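As a much-simplified illustration of the regression-based AIS idea (not the nested deep architecture of RADIS), the sketch below interpolates past target evaluations to form a proposal, samples it through a gridded inverse CDF, reweights by target over proposal, and adds the highest-weight sample as a new interpolation node; the toy target and the sampler are assumptions.
```python
import numpy as np

# Simplified sketch of regression-based adaptive importance sampling: an emulator
# interpolating past target evaluations serves as the proposal, samples are
# reweighted by target/proposal, and the highest-weight point becomes a new node.
# The gridded inverse-CDF sampler, linear interpolant and toy target are assumptions.

rng = np.random.default_rng(4)

def target(x):                                   # unnormalised toy posterior
    return np.exp(-0.5 * ((x - 1.0) / 0.6) ** 2) + 0.5 * np.exp(-0.5 * ((x + 2.0) / 0.4) ** 2)

grid = np.linspace(-6.0, 6.0, 4001)
dx = grid[1] - grid[0]
nodes = np.array([-4.0, 0.0, 4.0])

for it in range(5):
    order = np.argsort(nodes)
    emulator = np.interp(grid, nodes[order], target(nodes)[order]) + 1e-12
    q = emulator / (emulator.sum() * dx)         # normalised proposal density on the grid
    cdf = np.cumsum(q) * dx
    cdf = cdf / cdf[-1]
    x = np.interp(rng.uniform(size=2000), cdf, grid)   # inverse-CDF draws from the emulator
    w = target(x) / np.interp(x, grid, q)        # importance weights
    nodes = np.append(nodes, x[np.argmax(w)])    # refine the emulator where it underestimates
    print(f"iteration {it}: evidence estimate {w.mean():.4f}")
```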
arXiv Detail & Related papers (2020-10-20T15:12:30Z)
- Mean-Field Approximation to Gaussian-Softmax Integral with Application to Uncertainty Estimation [23.38076756988258]
We propose a new single-model based approach to quantify uncertainty in deep neural networks.
We use a mean-field approximation formula to compute an analytically intractable integral.
Empirically, the proposed approach performs competitively when compared to state-of-the-art methods.
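To see the flavour of such approximations, the sketch below compares a Monte Carlo estimate of E[softmax(z)] under a diagonal Gaussian with a common mean-field-style closed form that shrinks each logit by its variance (the classic lambda = pi/8, probit-inspired choice); this stand-in formula is an assumption and is not the paper's derivation.
```python
import numpy as np

# Compare a Monte Carlo estimate of E_{z ~ N(mu, diag(s2))}[softmax(z)] with a
# common mean-field-style closed form that rescales each logit by its variance.
# The lambda = pi/8 scaling is the classic probit-inspired choice, used here
# purely for illustration.

rng = np.random.default_rng(5)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

mu = np.array([1.0, 0.2, -0.5])          # logit means
s2 = np.array([0.5, 2.0, 1.0])           # logit variances (diagonal covariance)

# Monte Carlo reference.
z = mu + np.sqrt(s2) * rng.normal(size=(200_000, 3))
mc = softmax(z).mean(axis=0)

# Mean-field-style approximation: shrink each logit by its own uncertainty.
lam = np.pi / 8.0
approx = softmax(mu / np.sqrt(1.0 + lam * s2))

print("Monte Carlo  :", np.round(mc, 4))
print("approximation:", np.round(approx, 4))
```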
arXiv Detail & Related papers (2020-06-13T07:32:38Z)
- Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
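A bare-bones sketch of the setting (not the paper's unbiased weighting) is given below: each worker forms a Hessian from a random subsample of a ridge-regression problem, solves for a local Newton direction against the shared full gradient, and the server averages the directions; plain averaging like this is biased, which is exactly what the paper's debiasing addresses.
```python
import numpy as np

# Sketch of distributed randomized second-order optimization for ridge regression:
# workers build Hessians from random subsamples, solve for local Newton directions
# against the full gradient, and the server averages the directions.  Plain
# averaging is biased (E[H_i^{-1}] != H^{-1}); no debiasing is applied here.

rng = np.random.default_rng(7)
n, d, lam = 5000, 50, 1e-2
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

x = np.zeros(d)
n_workers, m = 8, 600                         # number of workers, per-worker subsample size

for it in range(10):
    grad = A.T @ (A @ x - b) / n + lam * x    # full gradient (cheap to aggregate)
    steps = []
    for _ in range(n_workers):
        idx = rng.choice(n, size=m, replace=False)
        H_i = A[idx].T @ A[idx] / m + lam * np.eye(d)   # subsampled local Hessian
        steps.append(np.linalg.solve(H_i, grad))        # local Newton direction
    x = x - np.mean(steps, axis=0)            # server averages the directions
    print(f"iter {it}: gradient norm {np.linalg.norm(grad):.3e}")
```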
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.