Stochastic Gradient Bayesian Optimal Experimental Designs for
Simulation-based Inference
- URL: http://arxiv.org/abs/2306.15731v1
- Date: Tue, 27 Jun 2023 18:15:41 GMT
- Authors: Vincent D. Zaballa and Elliot E. Hui
- Abstract summary: We establish a crucial connection between ratio-based SBI inference algorithms and gradient-based variational inference by leveraging mutual information bounds.
This connection allows us to extend BOED to SBI, enabling the simultaneous optimization of experimental designs and amortized inference functions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation-based inference (SBI) methods tackle complex scientific models
with challenging inverse problems. However, SBI models often face a significant
hurdle due to their non-differentiable nature, which hampers the use of
gradient-based optimization techniques. Bayesian Optimal Experimental Design
(BOED) is a powerful approach that aims to make the most efficient use of
experimental resources for improved inferences. While stochastic gradient BOED
methods have shown promising results in high-dimensional design problems, they
have mostly neglected the integration of BOED with SBI due to the difficult
non-differentiable property of many SBI simulators. In this work, we establish
a crucial connection between ratio-based SBI inference algorithms and
stochastic gradient-based variational inference by leveraging mutual
information bounds. This connection allows us to extend BOED to SBI
applications, enabling the simultaneous optimization of experimental designs
and amortized inference functions. We demonstrate our approach on a simple
linear model and offer implementation details for practitioners.
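The connection the abstract describes can be illustrated on its linear-Gaussian example. The sketch below is a minimal illustration, not the authors' implementation: it uses the exact log-likelihood as the NCE-style density-ratio critic (in the paper's setting this would be a learned ratio network shared with the SBI inference model) and evaluates an InfoNCE-type lower bound on the mutual information between parameters and outcomes, i.e. the expected information gain (EIG) of a scalar design `d`. All function names, the noise scale, and the prior are assumptions of this toy sketch.

```python
import numpy as np

SIGMA = 1.0  # observation noise scale (an assumption of this toy model)

def simulate(theta, d, rng):
    """Toy linear simulator y = d * theta + noise; stands in for a
    black-box, possibly non-differentiable SBI simulator."""
    return d * theta + SIGMA * rng.normal(size=theta.shape)

def critic(theta, y, d):
    """Density-ratio critic. Here it is the exact log-likelihood up to a
    constant; in the paper's setting it would be a learned ratio network."""
    return -((y - d * theta) ** 2) / (2.0 * SIGMA**2)

def infonce_eig_bound(d, n=2000, seed=0):
    """InfoNCE-style lower bound on I(theta; y | d), i.e. the EIG of design d."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n)                 # draws from the N(0, 1) prior
    y = simulate(theta, d, rng)                # paired simulations
    f = critic(theta[None, :], y[:, None], d)  # f[i, j] = critic(theta_j, y_i)
    pos = np.diag(f)                           # positive (paired) scores
    m = f.max(axis=1, keepdims=True)           # stabilize the log-mean-exp
    lme = np.log(np.mean(np.exp(f - m), axis=1)) + m[:, 0]
    return float(np.mean(pos - lme))

def analytic_eig(d):
    """Closed-form EIG of the linear-Gaussian model, for comparison:
    0.5 * log(1 + d^2 / sigma^2), monotonically increasing in |d|."""
    return 0.5 * np.log(1.0 + d**2 / SIGMA**2)
```

Because the bound is a differentiable function of the design once the critic is differentiable, `d` can be ascended by stochastic gradient alongside the critic parameters; this toy version only checks that the bound tracks the closed-form EIG, which grows with `|d|`.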
Related papers
- Bayesian Experimental Design via Contrastive Diffusions [2.2186678387006435]
Bayesian Optimal Experimental Design (BOED) is a powerful tool to reduce the cost of running a sequence of experiments.
We introduce an expected posterior distribution with cost-effective properties and provide tractable access to the EIG contrast.
By incorporating generative models into the BOED framework, we expand its scope and its use in scenarios that were previously impractical.
arXiv Detail & Related papers (2024-10-15T17:53:07Z) - A Comprehensive Guide to Simulation-based Inference in Computational Biology [5.333122501732079]
This paper provides comprehensive guidelines for deciding between SBI approaches for complex biological models.
We apply the guidelines to two agent-based models that describe cellular dynamics using real-world data.
Our study unveils a critical insight: while neural SBI methods demand significantly fewer simulations for inference results, they tend to yield biased estimations.
arXiv Detail & Related papers (2024-09-29T12:04:03Z) - Enhanced Bayesian Optimization via Preferential Modeling of Abstract
Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - Consistency Models for Scalable and Fast Simulation-Based Inference [9.27488642055461]
We present consistency models for posterior estimation (CMPE), a new conditional sampler for simulation-based inference (SBI).
CMPE essentially distills a continuous probability flow and enables rapid few-shot inference with an unconstrained architecture.
Our empirical evaluation demonstrates that CMPE not only outperforms current state-of-the-art algorithms on hard low-dimensional benchmarks, but also achieves competitive performance with much faster sampling speed.
arXiv Detail & Related papers (2023-12-09T02:14:12Z) - Flow Matching for Scalable Simulation-Based Inference [20.182658224439688]
Flow matching posterior estimation (FMPE) is a technique for simulation-based inference (SBI) using continuous normalizing flows.
We show that FMPE achieves competitive performance on an established SBI benchmark, and then demonstrate its improved scalability on a challenging scientific problem.
arXiv Detail & Related papers (2023-05-26T18:00:01Z) - Online simulator-based experimental design for cognitive model selection [74.76661199843284]
We propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
In simulated experiments, we demonstrate that the proposed BOSMOS technique can accurately select models in up to 2 orders of magnitude less time than existing LFI alternatives.
arXiv Detail & Related papers (2023-03-03T21:41:01Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z) - When to Update Your Model: Constrained Model-based Reinforcement
Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
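The variational EIG bounds this entry refers to can be made concrete on a linear-Gaussian toy model. Below is a minimal numpy sketch of the Barber-Agakov posterior lower bound, EIG(d) >= E[log q(theta | y, d)] + H(theta); the Gaussian variational family, parameter names, and the toy model itself are assumptions of this sketch, not the paper's neural architecture.

```python
import numpy as np

def ba_bound(d, post_mean_coef, post_var, n=4000, seed=1):
    """Barber-Agakov (posterior) lower bound on the EIG of design d for the
    toy model y = d * theta + noise, theta ~ N(0, 1), noise ~ N(0, 1).
    The variational posterior is q(theta | y) = N(post_mean_coef * y, post_var)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n)             # prior draws
    y = d * theta + rng.normal(size=n)     # simulated outcomes at design d
    mu = post_mean_coef * y
    log_q = (-0.5 * np.log(2 * np.pi * post_var)
             - (theta - mu) ** 2 / (2 * post_var))
    prior_entropy = 0.5 * np.log(2 * np.pi * np.e)  # entropy of the N(0,1) prior
    return float(np.mean(log_q) + prior_entropy)
```

For this model the exact posterior is N(d*y / (1 + d^2), 1 / (1 + d^2)), at which the bound is tight and equals the analytic EIG 0.5 * log(1 + d^2); any other choice of `(post_mean_coef, post_var)` gives a strictly smaller value. An amortized design network in the spirit of the paper would replace these hand-set parameters with the outputs of a single model evaluated across many designs.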
arXiv Detail & Related papers (2022-10-07T02:12:34Z) - USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization
Problems [9.015720257837575]
We consider regression between input and solution spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs.
For learning foundations, we present learning-error analysis under the PAC-Bayesian framework.
We obtain highly encouraging experimental results for several classic problems on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-07-15T17:59:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.