Bayesian Optimal Experimental Design for Simulator Models of Cognition
- URL: http://arxiv.org/abs/2110.15632v1
- Date: Fri, 29 Oct 2021 09:04:01 GMT
- Title: Bayesian Optimal Experimental Design for Simulator Models of Cognition
- Authors: Simon Valentin, Steven Kleinegesse, Neil R. Bramley, Michael U.
Gutmann, Christopher G. Lucas
- Abstract summary: We combine recent advances in BOED and approximate inference for intractable models to find optimal experimental designs.
Our simulation experiments on multi-armed bandit tasks show that our method results in improved model discrimination and parameter estimation.
- Score: 14.059933880568908
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimal experimental design (BOED) is a methodology to identify
experiments that are expected to yield informative data. Recent work in
cognitive science considered BOED for computational models of human behavior
with tractable and known likelihood functions. However, tractability often
comes at the cost of realism; simulator models that can capture the richness of
human behavior are often intractable. In this work, we combine recent advances
in BOED and approximate inference for intractable models, using
machine-learning methods to find optimal experimental designs, approximate
sufficient summary statistics and amortized posterior distributions. Our
simulation experiments on multi-armed bandit tasks show that our method results
in improved model discrimination and parameter estimation, as compared to
experimental designs commonly used in the literature.
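For context, the utility that BOED maximizes is conventionally the expected information gain (EIG): the mutual information between the unknown quantity of interest (the model parameters theta, or a model indicator m when the goal is model discrimination) and the experimental outcome y under a candidate design d. The formulation below is the generic textbook criterion, not the paper's specific neural estimator:

    U(d) = I(\theta; y \mid d)
         = \mathbb{E}_{p(\theta)\, p(y \mid \theta, d)}
           \left[ \log \frac{p(\theta \mid y, d)}{p(\theta)} \right],
    \qquad
    d^{\ast} = \operatorname*{arg\,max}_{d} U(d).

For simulator models, p(y | theta, d) can be sampled from but has no tractable density, so both the likelihood and the posterior in this expression are unavailable in closed form. Variational approaches (see also the Design Amortization entry below) therefore maximize a lower bound instead, e.g. the Barber-Agakov bound

    U(d) \ge \mathbb{E}_{p(\theta)\, p(y \mid \theta, d)}
             \left[ \log q_{\phi}(\theta \mid y, d) \right] + H[p(\theta)],

where q_phi is a learned, amortized approximation to the posterior and H denotes entropy.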
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimator (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z)
- Designing Optimal Behavioral Experiments Using Machine Learning [8.759299724881219]
We provide a tutorial on leveraging recent advances in BOED and machine learning to find optimal experiments for any kind of model.
We consider theories of how people balance exploration and exploitation in multi-armed bandit decision-making tasks (a toy simulator of such a task is sketched after this list).
Compared to experimental designs commonly used in the literature, we show that our optimal designs more efficiently determine which of a set of models best accounts for individual human behavior.
arXiv Detail & Related papers (2023-05-12T18:24:30Z)
- Online simulator-based experimental design for cognitive model selection [74.76661199843284]
We propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
In simulated experiments, we demonstrate that the proposed BOSMOS technique can accurately select models in up to two orders of magnitude less time than existing likelihood-free inference (LFI) alternatives.
arXiv Detail & Related papers (2023-03-03T21:41:01Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
- Cognitive simulation models for inertial confinement fusion: Combining simulation and experimental data [0.0]
Researchers rely heavily on computer simulations to explore the design space in search of high-performing implosions.
For more effective design and investigation, simulations require input from past experimental data to better predict future performance.
We describe a cognitive simulation method for combining simulation and experimental data into a common, predictive model.
arXiv Detail & Related papers (2021-03-19T02:00:14Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks will perform the same as the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
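As referenced in the Designing Optimal Behavioral Experiments entry above, the minimal sketch below shows what a simulator model of bandit behavior can look like: two hypothetical agents (epsilon-greedy and Win-Stay-Lose-Shift) generate choice sequences on a Bernoulli bandit whose per-arm reward probabilities play the role of the experimental design. All names, parameter values, and the choice of agents are illustrative assumptions and are not taken from any of the papers listed above.

    # Toy bandit simulator models (illustrative sketch, not code from any paper above).
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_epsilon_greedy(reward_probs, n_trials=50, epsilon=0.1):
        """Choices of an epsilon-greedy agent on a Bernoulli bandit."""
        n_arms = len(reward_probs)
        counts = np.zeros(n_arms)
        values = np.zeros(n_arms)
        choices = np.empty(n_trials, dtype=int)
        for t in range(n_trials):
            if rng.random() < epsilon:
                a = int(rng.integers(n_arms))      # explore: pick a random arm
            else:
                a = int(np.argmax(values))         # exploit current value estimates
            r = float(rng.random() < reward_probs[a])
            counts[a] += 1
            values[a] += (r - values[a]) / counts[a]   # incremental mean update
            choices[t] = a
        return choices

    def simulate_wsls(reward_probs, n_trials=50):
        """Choices of a Win-Stay-Lose-Shift agent on the same bandit."""
        n_arms = len(reward_probs)
        a = int(rng.integers(n_arms))
        choices = np.empty(n_trials, dtype=int)
        for t in range(n_trials):
            choices[t] = a
            r = float(rng.random() < reward_probs[a])
            if r == 0.0:                           # lose: shift to a different arm
                a = (a + int(rng.integers(1, n_arms))) % n_arms
        return choices

    # The "design" here is simply the vector of arm reward probabilities.
    design = np.array([0.8, 0.5, 0.2])
    print(simulate_epsilon_greedy(design))
    print(simulate_wsls(design))

In a BOED pipeline, simulated choice sequences like these would be compressed into summary statistics and used to estimate which reward-probability configuration is most informative for telling the candidate agents apart; that estimation step is what the machine-learning methods discussed in the abstract above are designed to handle.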