Supercharging Simulation-Based Inference for Bayesian Optimal Experimental Design
- URL: http://arxiv.org/abs/2602.06900v1
- Date: Fri, 06 Feb 2026 17:50:00 GMT
- Title: Supercharging Simulation-Based Inference for Bayesian Optimal Experimental Design
- Authors: Samuel Klein, Willie Neiswanger, Daniel Ratner, Michael Kagan, Sean Gasiorowski,
- Abstract summary: Bayesian optimal experimental design (BOED) seeks to maximize the expected information gain of experiments. We show that the EIG admits multiple formulations which can directly leverage modern SBI density estimators. We identify optimization as a key bottleneck of gradient-based EIG maximization and show that a simple multi-start parallel gradient ascent procedure can substantially improve reliability and performance.
- Score: 12.68772511482115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimal experimental design (BOED) seeks to maximize the expected information gain (EIG) of experiments. This requires a likelihood estimate, which in many settings is intractable. Simulation-based inference (SBI) provides powerful tools for this regime. However, existing work explicitly connecting SBI and BOED is restricted to a single contrastive EIG bound. We show that the EIG admits multiple formulations which can directly leverage modern SBI density estimators, encompassing neural posterior, likelihood, and ratio estimation. Building on this perspective, we define a novel EIG estimator using neural likelihood estimation. Further, we identify optimization as a key bottleneck of gradient-based EIG maximization and show that a simple multi-start parallel gradient ascent procedure can substantially improve reliability and performance. With these innovations, our SBI-based BOED methods match or outperform existing state-of-the-art approaches by up to $22\%$ across standard BOED benchmarks.
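The multi-start parallel gradient ascent idea from the abstract can be illustrated on a toy problem. The sketch below uses a hypothetical multimodal surrogate for an EIG surface (not the paper's estimator) and a finite-difference gradient; a real implementation would differentiate an EIG bound with autodiff:

```python
import numpy as np

def eig_surrogate(d):
    # Hypothetical multimodal stand-in for an EIG estimate over a 1-D design d
    return np.sin(3.0 * d) * np.exp(-0.1 * d ** 2)

def num_grad(f, d, eps=1e-5):
    # Central-difference gradient; a real estimator would use autodiff
    return (f(d + eps) - f(d - eps)) / (2.0 * eps)

def multistart_ascent(f, n_starts=32, n_steps=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    designs = rng.uniform(-5.0, 5.0, size=n_starts)  # parallel restarts
    for _ in range(n_steps):
        designs = designs + lr * num_grad(f, designs)  # vectorized ascent step
    return designs[np.argmax(f(designs))]  # keep the best local optimum

best_d = multistart_ascent(eig_surrogate)
```

A single ascent from a poor initialization stalls in a local mode of the surrogate; running many starts as one batched update per step recovers the global mode at roughly the cost of a single vectorized evaluation.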
Related papers
- How to Set the Learning Rate for Large-Scale Pre-training? [73.03133634525635]
We formalize this investigation into two distinct research paradigms: Fitting and Transfer. Within the Fitting Paradigm, we introduce a scaling law for the search factor, effectively reducing the search complexity from O(n^3) to O(n*C_D*C_) via predictive modeling. We extend the principles of Transfer to the Mixture of Experts (MoE) architecture, broadening its applicability to encompass model depth, weight decay, and token horizons.
arXiv Detail & Related papers (2026-01-08T15:55:13Z) - Optimizing Likelihoods via Mutual Information: Bridging Simulation-Based Inference and Bayesian Optimal Experimental Design [0.0]
Gradient-based BOED methods have been proposed as an alternative to Bayesian optimization and other experimental design strategies. We show a link, via mutual information bounds, between SBI and gradient-based variational inference methods that permits BOED to be used in SBI applications as SBI-BOED. We compare this approach on SBI-based models of real-world simulators in epidemiology and biology, showing notable improvements in inference.
arXiv Detail & Related papers (2025-02-11T22:58:18Z) - Bayesian Experimental Design via Contrastive Diffusions [2.2186678387006435]
Bayesian Optimal Experimental Design (BOED) is a powerful tool to reduce the cost of running a sequence of experiments. We introduce a pooled gradient distribution with cost-effective sampling properties and provide tractable access to the EIG contrast posterior via a new EIG expression. By incorporating generative models into the BOED framework, we expand its scope to scenarios that were previously impractical.
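As a concrete reference point for contrastive EIG estimation, here is a minimal nested Monte Carlo sketch of the well-known Prior Contrastive Estimation (PCE) lower bound, on a hypothetical linear-Gaussian toy model (a generic illustration under assumed names, not the method of the paper above):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, d, noise=1.0):
    # Toy linear-Gaussian simulator: y = d * theta + Gaussian noise
    return d * theta + noise * rng.standard_normal(theta.shape)

def log_lik(y, theta, d, noise=1.0):
    return -0.5 * ((y - d * theta) / noise) ** 2 - 0.5 * np.log(2 * np.pi * noise ** 2)

def pce_bound(d, n_outer=2000, n_contrast=64):
    # PCE: contrast the generating theta against fresh prior draws
    theta0 = rng.standard_normal(n_outer)
    y = simulate(theta0, d)
    contrast = rng.standard_normal((n_contrast, n_outer))
    log_num = log_lik(y, theta0, d)
    log_all = np.vstack([log_num[None, :], log_lik(y[None, :], contrast, d)])
    m = log_all.max(axis=0)  # log-sum-exp stabilization of the denominator
    log_denom = m + np.log(np.exp(log_all - m).mean(axis=0))
    return float((log_num - log_denom).mean())

val = pce_bound(2.0)  # in expectation, lower-bounds the analytic EIG 0.5*log(1 + d**2)
```

For this model the EIG has the closed form 0.5*log(1 + d^2), so the bound can be checked directly; it tightens as the number of contrastive samples grows.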
arXiv Detail & Related papers (2024-10-15T17:53:07Z) - On Estimating the Gradient of the Expected Information Gain in Bayesian
Experimental Design [5.874142059884521]
We develop methods for estimating the gradient of the EIG which, combined with gradient descent algorithms, result in efficient optimization of the EIG.
Based on this, we propose two methods for estimating the EIG gradient: UEEG-MCMC, which leverages posterior samples to estimate the EIG gradient, and BEEG-AP, which focuses on achieving high simulation efficiency by repeatedly using parameter samples.
arXiv Detail & Related papers (2023-08-19T02:48:44Z) - Stochastic Gradient Bayesian Optimal Experimental Designs for
Simulation-based Inference [0.0]
We establish a crucial connection between ratio-based SBI inference algorithms and gradient-based variational inference by leveraging mutual information bounds.
This connection allows us to extend the simultaneous optimization of experimental designs and amortized inference functions.
arXiv Detail & Related papers (2023-06-27T18:15:41Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on normalizing flows (NF).
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z) - Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z) - Robust Expected Information Gain for Optimal Bayesian Experimental
Design Using Ambiguity Sets [0.0]
We define and analyze robust expected information gain (REIG).
REIG modifies the EIG objective by minimizing an affine relaxation of the EIG over an ambiguity set of perturbed distributions.
We show that, when combined with a sampling-based approach to estimating the EIG, REIG corresponds to a 'log-sum-exp' stabilization of the samples used to estimate the EIG.
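The 'log-sum-exp' stabilization mentioned above is the standard trick for averaging likelihood samples whose log values are large and negative; a minimal generic sketch (not the REIG implementation itself):

```python
import numpy as np

def log_mean_exp(log_w):
    # Numerically stable log(mean(exp(log_w))): subtract the max before exponentiating
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))

log_w = np.array([-1000.0, -1001.0, -1002.0])  # e.g. log-likelihoods of samples
naive = np.log(np.mean(np.exp(log_w)))   # exp underflows to 0, so this is -inf
stable = log_mean_exp(log_w)             # ~ -1000.691, computed without underflow
```

Without the stabilization every `exp` underflows to zero and the estimate collapses; with it, only relative weights are exponentiated.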
arXiv Detail & Related papers (2022-05-20T01:07:41Z) - Value-Function-based Sequential Minimization for Bi-level Optimization [52.39882976848064]
Gradient-based Bi-Level Optimization (BLO) methods have been widely applied to handle modern learning tasks.
There are almost no gradient-based methods able to solve BLO in challenging scenarios, such as BLO with functional constraints and pessimistic BLO.
We provide Bi-level Value-Function-based Sequential Minimization (BVFSM) to address the above issues.
arXiv Detail & Related papers (2021-10-11T03:13:39Z) - Principled Exploration via Optimistic Bootstrapping and Backward
Induction [84.78836146128238]
We propose a principled exploration method for Deep Reinforcement Learning (DRL) through Optimistic Bootstrapping and Backward Induction (OB2I).
OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL.
We build theoretical connections between the proposed UCB-bonus and the LSVI-UCB in a linear setting.
arXiv Detail & Related papers (2021-05-13T01:15:44Z) - Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
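A common way to make Bernoulli dropout rates trainable, as LBD-style schemes require, is a Concrete (relaxed Bernoulli) reparameterization of the keep-mask. The NumPy sketch below is a generic illustration under assumed names (`relaxed_bernoulli_mask`, `logit_keep` are hypothetical), not the LBD implementation:

```python
import numpy as np

def relaxed_bernoulli_mask(logit_keep, shape, temperature=0.1, rng=None):
    # Concrete (relaxed Bernoulli) sample: differentiable w.r.t. logit_keep,
    # so the keep probability can be optimized jointly with the model weights
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.uniform(1e-8, 1.0 - 1e-8, size=shape)
    logistic_noise = np.log(u) - np.log1p(-u)  # standard logistic noise
    return 1.0 / (1.0 + np.exp(-(logit_keep + logistic_noise) / temperature))

# Keep probability sigmoid(2.0) ~ 0.88; a low temperature makes the mask near-binary
mask = relaxed_bernoulli_mask(logit_keep=2.0, shape=(10000,))
```

At training time the relaxed mask multiplies activations so gradients flow into `logit_keep`; at test time one would instead scale by the deterministic keep probability.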
arXiv Detail & Related papers (2020-02-12T18:57:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.