Scalable method for Bayesian experimental design without integrating
over posterior distribution
- URL: http://arxiv.org/abs/2306.17615v2
- Date: Fri, 11 Aug 2023 08:09:17 GMT
- Title: Scalable method for Bayesian experimental design without integrating
over posterior distribution
- Authors: Vinh Hoang, Luis Espath, Sebastian Krumscheid, Raúl Tempone
- Abstract summary: We address the computational efficiency of solving A-optimal Bayesian design of experiments problems.
A-optimality is a widely used and easy-to-interpret criterion for Bayesian experimental design.
This study presents a novel likelihood-free approach to A-optimal experimental design.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the computational efficiency of solving A-optimal Bayesian
design of experiments problems for which the observational map is based on
partial differential equations and, consequently, is computationally expensive
to evaluate. A-optimality is a widely used and easy-to-interpret criterion for
Bayesian experimental design. This criterion seeks the optimal experimental
design by minimizing the expected conditional variance, which is also known as
the expected posterior variance. This study presents a novel likelihood-free
approach to A-optimal experimental design that does not require sampling from or
integrating over the Bayesian posterior distribution. The expected conditional
variance is obtained via the variance of the conditional expectation using the
law of total variance, and we take advantage of the orthogonal projection
property to approximate the conditional expectation. We derive an asymptotic
error estimation for the proposed estimator of the expected conditional
variance and show that the intractability of the posterior distribution does
not affect the performance of our approach. We use an artificial neural network
(ANN) to approximate the nonlinear conditional expectation in the
implementation of our method. We then extend our approach to the case in which
the domain of experimental design parameters is continuous by integrating the
ANN training process into the minimization of the expected
conditional variance. Through numerical experiments, we demonstrate that our
method greatly reduces the number of observation model evaluations compared
with widely used importance sampling-based approaches. This reduction is
crucial, considering the high computational cost of the observational models.
Code is available at https://github.com/vinh-tr-hoang/DOEviaPACE.
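The key identity above is the law of total variance: Var(θ) = E[Var(θ|y)] + Var(E[θ|y]), so the A-optimality criterion E[Var(θ|y)] can be computed as Var(θ) − Var(E[θ|y]), and E[θ|y] is the L2-orthogonal projection of θ onto functions of y, i.e. the minimizer of E[(θ − f(y))²] over regressors f. A minimal sketch on a toy linear-Gaussian model (the model, the linear regressor standing in for the paper's ANN, and all variable names are illustrative assumptions, not code from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experiment": theta ~ N(0, 1), y = theta + sigma * noise.
# For this linear-Gaussian model the exact expected posterior
# variance is sigma^2 / (1 + sigma^2).
sigma = 0.5
n = 200_000
theta = rng.standard_normal(n)
y = theta + sigma * rng.standard_normal(n)

# Law of total variance: Var(theta) = E[Var(theta|y)] + Var(E[theta|y]),
# so E[Var(theta|y)] = Var(theta) - Var(E[theta|y]) -- no posterior
# sampling or integration is needed.

# E[theta|y] is the L2 projection of theta onto functions of y; here a
# linear least-squares fit stands in for the paper's ANN regressor
# (exact in this toy model, since the true conditional mean is linear).
X = np.column_stack([np.ones(n), y])
coef, *_ = np.linalg.lstsq(X, theta, rcond=None)
cond_mean = X @ coef

expected_post_var = theta.var() - cond_mean.var()
exact = sigma**2 / (1 + sigma**2)
print(expected_post_var, exact)  # the two agree to Monte Carlo accuracy
```

Each design candidate would repeat this fit with its own simulated (θ, y) pairs; the cost is one observation-model evaluation per training sample, with no inner posterior loop.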
Related papers
- Variational Bayesian Optimal Experimental Design with Normalizing Flows [0.837622912636323]
Variational OED estimates a lower bound of the EIG without likelihood evaluations.
We introduce the use of normalizing flows for representing variational distributions in vOED.
We show that a composition of 4–5 layers is able to achieve lower EIG estimation bias.
arXiv Detail & Related papers (2024-04-08T14:44:21Z)
- Variational Sequential Optimal Experimental Design using Reinforcement Learning [0.0]
We introduce variational sequential Optimal Experimental Design (vsOED), a new method for optimally designing a finite sequence of experiments under a Bayesian framework and with information-gain utilities.
Our vsOED results indicate substantially improved sample efficiency and reduced number of forward model simulations compared to previous sequential design algorithms.
arXiv Detail & Related papers (2023-06-17T21:47:19Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) formulates the objective as the logistic loss of real data against artificial noise.
In this paper, we study a direct approach to optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Online Bootstrap Inference with Nonconvex Stochastic Gradient Descent Estimator [0.0]
In this paper, we investigate the theoretical properties of stochastic gradient descent (SGD) for statistical inference in the context of nonconvex problems.
We propose two inferential procedures for problems that may contain multiple local minima.
arXiv Detail & Related papers (2023-06-03T22:08:10Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss–Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work is on zeroth-order (ZO) optimization, which does not require first-order information.
We show that, with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for the development of a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This novel technique enables the derivation of a reasonable quality response surface for the targeted objective function with a computational budget comparable to several single-point evaluations.
arXiv Detail & Related papers (2020-08-10T09:42:59Z)
- Unbiased MLMC stochastic gradient-based optimization of Bayesian experimental designs [4.112293524466434]
The gradient of the expected information gain with respect to experimental design parameters is given by a nested expectation.
We introduce an unbiased Monte Carlo estimator for the gradient of the expected information gain with finite expected squared $\ell_2$-norm and finite expected computational cost per sample.
arXiv Detail & Related papers (2020-05-18T01:02:31Z) - On Low-rank Trace Regression under General Sampling Distribution [9.699586426043885]
We show that cross-validated estimators satisfy near-optimal error bounds under general assumptions.
We also show that the cross-validated estimator outperforms the theory-inspired approach of selecting the parameter.
arXiv Detail & Related papers (2019-04-18T02:56:00Z)
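The main abstract above reports a large reduction in observation-model evaluations compared with importance-sampling-based approaches, and several of the related papers build on such estimators. For contrast, a minimal sketch of the nested Monte Carlo baseline with self-normalized importance sampling, on the same toy linear-Gaussian model as before (an illustrative assumption, not code from any of the papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Nested Monte Carlo baseline for E[Var(theta | y)] on a toy model:
# theta ~ N(0, 1), y = theta + sigma * eps.  Exact value: sigma^2 / (1 + sigma^2).
sigma = 0.5
n_outer, n_inner = 400, 2000

def likelihood(y, theta):
    # Gaussian likelihood up to a constant factor (enough for normalized weights).
    return np.exp(-0.5 * ((y - theta) / sigma) ** 2)

post_vars = []
for _ in range(n_outer):
    # Outer loop: draw a synthetic observation from the marginal of y.
    theta_true = rng.standard_normal()
    y = theta_true + sigma * rng.standard_normal()
    # Inner loop: self-normalized importance sampling with the prior as proposal.
    theta_s = rng.standard_normal(n_inner)
    w = likelihood(y, theta_s)
    w /= w.sum()
    post_mean = w @ theta_s
    post_vars.append(w @ (theta_s - post_mean) ** 2)

estimate = float(np.mean(post_vars))
print(estimate)  # converges to sigma^2 / (1 + sigma^2) = 0.2
```

The cost here is n_outer * n_inner likelihood (and hence observation-model) evaluations, which is the nested expense that the projection-based estimator in the main paper avoids.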
This list is automatically generated from the titles and abstracts of the papers in this site.