Metrics for Bayesian Optimal Experiment Design under Model
Misspecification
- URL: http://arxiv.org/abs/2304.07949v1
- Date: Mon, 17 Apr 2023 02:13:20 GMT
- Title: Metrics for Bayesian Optimal Experiment Design under Model
Misspecification
- Authors: Tommie A. Catanach and Niladri Das
- Abstract summary: The utility function defines the objective of the experiment; a common choice is the information gain.
This article introduces an expanded framework for this process that goes beyond the traditional Expected Information Gain criterion.
The functionality of the framework is showcased through its application to a scenario involving a linearized spring-mass-damper system and an F-16 model.
- Score: 3.04585143845864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The conventional approach to Bayesian decision-theoretic experiment design
involves searching over possible experiments to select a design that maximizes
the expected value of a specified utility function. The expectation is over the
joint distribution of all unknown variables implied by the statistical model
that will be used to analyze the collected data. The utility function defines
the objective of the experiment; a common choice is the information gain. This
article introduces an expanded framework for this process that goes beyond the
traditional Expected Information Gain criterion, introducing the Expected
General Information Gain, which measures robustness to model discrepancy, and
the Expected Discriminatory Information, which quantifies how well an
experiment can detect model discrepancy. The functionality of the framework is
showcased through its application to a scenario involving a linearized
spring-mass-damper system and an F-16 model, where the model discrepancy is
taken into account while doing Bayesian optimal experiment design.
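To make the Expected Information Gain criterion concrete, here is a minimal nested Monte Carlo sketch on a hypothetical one-parameter linear-Gaussian model (not the paper's spring-mass-damper or F-16 systems); the model, the design variable `d` (an observation gain), and all numeric values are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_p, sigma_n = 1.0, 0.5  # illustrative prior and noise standard deviations

def log_lik(y, theta, d):
    # Gaussian log-likelihood of observation y under model y = d*theta + noise
    return -0.5 * ((y - d * theta) / sigma_n) ** 2 - np.log(sigma_n * np.sqrt(2 * np.pi))

def eig_nmc(d, n_outer=2000, n_inner=2000):
    # Nested Monte Carlo estimate of EIG(d) = E[log p(y|theta) - log p(y)]
    theta = rng.normal(0.0, sigma_p, n_outer)
    y = d * theta + rng.normal(0.0, sigma_n, n_outer)
    inner_theta = rng.normal(0.0, sigma_p, n_inner)
    # log marginal p(y) via a log-sum-exp over fresh prior samples
    ll = log_lik(y[:, None], inner_theta[None, :], d)          # (n_outer, n_inner)
    log_marg = np.logaddexp.reduce(ll, axis=1) - np.log(n_inner)
    return np.mean(log_lik(y, theta, d) - log_marg)

# Search over candidate designs and keep the one with the largest estimated EIG
designs = [0.5, 1.0, 2.0]
eigs = [eig_nmc(d) for d in designs]
best = designs[int(np.argmax(eigs))]
```

For this toy model the EIG is available in closed form, EIG(d) = 0.5 * log(1 + d^2 * sigma_p^2 / sigma_n^2), which makes it easy to check that the nested estimator and the design search behave sensibly.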
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence-functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - A Likelihood-Free Approach to Goal-Oriented Bayesian Optimal Experimental Design [0.0]
We introduce LF-GO-OED (likelihood-free goal-oriented optimal experimental design), a computational method for conducting GO-OED with nonlinear observation and prediction models.
It is specifically designed to accommodate implicit models, where the likelihood is intractable.
The method is validated on benchmark problems with existing methods, and demonstrated on scientific applications in epidemiology and neuroscience.
arXiv Detail & Related papers (2024-08-18T19:45:49Z) - Bayesian Model Selection via Mean-Field Variational Approximation [10.433170683584994]
We study the non-asymptotic properties of mean-field (MF) inference under the Bayesian framework.
We show a Bernstein–von Mises (BvM) theorem for the variational distribution from MF under possible model misspecification.
arXiv Detail & Related papers (2023-12-17T04:48:25Z) - On the Properties and Estimation of Pointwise Mutual Information Profiles [49.877314063833296]
The pointwise mutual information profile, or simply profile, is the distribution of pointwise mutual information for a given pair of random variables.
We introduce a novel family of distributions, Bend and Mix Models, for which the profile can be accurately estimated using Monte Carlo methods.
arXiv Detail & Related papers (2023-10-16T10:02:24Z) - Statistically Efficient Bayesian Sequential Experiment Design via
Reinforcement Learning with Cross-Entropy Estimators [15.461927416747582]
Reinforcement learning can learn amortised design policies for designing sequences of experiments.
We propose the use of an alternative estimator based on the cross-entropy of the joint model distribution and a flexible proposal distribution.
Our method overcomes the exponential sample complexity of previous approaches and provides more accurate estimates of high EIG values.
arXiv Detail & Related papers (2023-05-29T00:35:52Z) - Online simulator-based experimental design for cognitive model selection [74.76661199843284]
We propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
In simulated experiments, we demonstrate that the proposed BOSMOS technique can accurately select models in up to 2 orders of magnitude less time than existing LFI alternatives.
arXiv Detail & Related papers (2023-03-03T21:41:01Z) - Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z) - Sequential Bayesian Experimental Design for Implicit Models via Mutual
Information [12.68659360172393]
A class of models of particular interest for the natural and medical sciences are implicit models.
We devise a novel sequential design framework for parameter estimation that uses the Mutual Information (MI) between model parameters and simulated data as a utility function.
We find that our framework is efficient for the various implicit models tested, yielding accurate parameter estimates after only a few iterations.
arXiv Detail & Related papers (2020-03-20T16:52:10Z) - Bayesian Experimental Design for Implicit Models by Mutual Information
Neural Estimation [16.844481439960663]
Implicit models, where the data-generation distribution is intractable but sampling is possible, are ubiquitous in the natural sciences.
A fundamental question is how to design experiments so that the collected data are most useful.
For implicit models, however, this approach is severely hampered by the high computational cost of computing posteriors.
We show that training a neural network to maximise a lower bound on MI allows us to jointly determine the optimal design and the posterior.
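The mutual-information lower-bound idea behind this line of work can be illustrated with a small sketch. Rather than training a neural critic as the paper does, the snippet below evaluates an InfoNCE-style bound using a hand-coded critic (the known log-likelihood, which is near-optimal here) on a toy Gaussian model; the model and all values are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 512          # number of joint samples (also the number of contrastive candidates)
sigma_n = 0.5    # illustrative observation-noise standard deviation

# Joint samples from the implicit model: theta ~ N(0, 1), y = theta + noise
theta = rng.normal(0.0, 1.0, K)
y = theta + rng.normal(0.0, sigma_n, K)

def critic(t, yy):
    # Critic f(theta, y); the log-likelihood (up to y-only terms, which cancel
    # inside the InfoNCE ratio) is the optimal choice for this toy model.
    return -0.5 * ((yy - t) / sigma_n) ** 2

# scores[i, j] = f(theta_j, y_i): each observation scored against all parameters
scores = critic(theta[None, :], y[:, None])
diag = np.diag(scores)  # matched pairs f(theta_i, y_i)
# InfoNCE lower bound: E[ f(theta_i, y_i) - log mean_j exp(f(theta_j, y_i)) ]
log_ratio = diag - (np.logaddexp.reduce(scores, axis=1) - np.log(K))
bound = log_ratio.mean()

# Closed-form MI for this Gaussian pair, for comparison
true_mi = 0.5 * np.log(1.0 + 1.0 / sigma_n**2)
```

Because the true MI here (about 0.8 nats) is well below the bound's log K saturation ceiling, the estimate lands close to the analytic value; in the neural-estimation setting the critic is instead parameterized by a network and the bound is maximized jointly over the critic and the design.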
arXiv Detail & Related papers (2020-02-19T12:09:42Z) - Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.