Robust multi-stage model-based design of optimal experiments for
nonlinear estimation
- URL: http://arxiv.org/abs/2011.06042v2
- Date: Thu, 2 Sep 2021 15:32:26 GMT
- Title: Robust multi-stage model-based design of optimal experiments for
nonlinear estimation
- Authors: Anwesh Reddy Gottu Mukkula, Michal Mateáš, Miroslav Fikar, Radoslav Paulen
- Abstract summary: We study approaches to robust model-based design of experiments in the context of maximum-likelihood estimation.
We propose a novel methodology based on multi-stage robust optimization.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study approaches to robust model-based design of experiments in the
context of maximum-likelihood estimation. These approaches provide
robustification of model-based methodologies for the design of optimal
experiments by accounting for the effect of parametric uncertainty. We
study the problem of robust optimal design of experiments in the framework of
nonlinear least-squares parameter estimation using linearized confidence
regions. We investigate several well-known robustification frameworks in this
respect and propose a novel methodology based on multi-stage robust
optimization. The proposed methodology targets problems where the experiments
are designed sequentially, with the possibility of re-estimating the parameters
between experiments. The multi-stage formalism aids in identifying experiments that are
better conducted in the early phase of experimentation, where parameter
knowledge is poor. We demonstrate the findings and effectiveness of the
proposed methodology using four case studies of varying complexity.
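As a concrete illustration of the linearized setting, the sketch below computes a D-optimal sampling schedule by maximizing the log-determinant of the Fisher information matrix built from model sensitivities. The exponential-decay model, noise level, and nominal parameter values are hypothetical, chosen only to make the computation runnable; the paper's robust multi-stage formulations go beyond this nominal design.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model: y(t) = theta1 * exp(-theta2 * t).
# The sensitivities dy/dtheta are available in closed form here.
def sensitivities(t, theta):
    th1, th2 = theta
    e = np.exp(-th2 * t)
    return np.column_stack([e, -th1 * t * e])  # shape (n_samples, n_params)

def neg_log_det_fim(t, theta, sigma=0.1):
    """Negative log-determinant of the Fisher information matrix from the
    linearized model (D-optimality, to be minimized)."""
    S = sensitivities(t, theta)
    fim = S.T @ S / sigma**2
    # A tiny ridge guards against a singular FIM during the search.
    return -np.linalg.slogdet(fim + 1e-10 * np.eye(2))[1]

theta_nominal = np.array([1.0, 0.5])  # current best parameter estimate
t0 = np.linspace(0.1, 10.0, 5)        # initial guess for 5 sampling times

res = minimize(neg_log_det_fim, t0, args=(theta_nominal,),
               bounds=[(0.0, 10.0)] * len(t0), method="L-BFGS-B")
print("D-optimal sampling times:", np.sort(res.x))
```

Because the design depends on the nominal parameter guess, poor early estimates yield poor designs; this dependence is exactly what the robustification frameworks studied in the paper address.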
Related papers
- The Power of Adaptivity in Experimental Design [14.088972921434761]
This paper addresses the fundamental question of the optimal accuracy achievable when estimating a treatment effect.
By incorporating the doubly robust method into sequential experimental design, we frame the optimal estimation problem as an online bandit learning problem.
Using tools and ideas from both bandit algorithm design and adaptive statistical estimation, we propose a general low switching adaptive experiment framework.
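A minimal sketch of one bandit-flavored adaptive design in this spirit: two-arm allocation updated only between batches (low switching), with the split set by Neyman allocation estimated from the data so far. The outcome distributions and batch sizes are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown ground truth (treatment and control outcome distributions).
MU = {"treat": 1.0, "ctrl": 0.0}
SD = {"treat": 2.0, "ctrl": 1.0}

def draw(arm, n):
    return rng.normal(MU[arm], SD[arm], size=n)

samples = {"treat": draw("treat", 20), "ctrl": draw("ctrl", 20)}  # pilot batch

# Low-switching design: the allocation is updated only once per batch.
for _ in range(4):
    s_t, s_c = samples["treat"].std(ddof=1), samples["ctrl"].std(ddof=1)
    p_treat = s_t / (s_t + s_c)  # Neyman allocation minimizes ATE variance
    n_t = int(round(100 * p_treat))
    samples["treat"] = np.append(samples["treat"], draw("treat", n_t))
    samples["ctrl"] = np.append(samples["ctrl"], draw("ctrl", 100 - n_t))

ate = samples["treat"].mean() - samples["ctrl"].mean()
print(f"ATE estimate: {ate:.3f}")
```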
arXiv Detail & Related papers (2024-10-07T23:22:51Z)
- Deep Optimal Experimental Design for Parameter Estimation Problems [4.097001355074171]
We investigate a new experimental design methodology that uses deep learning.
We show that training a network as a Likelihood-Free Estimator can be used to significantly simplify the design process.
Deep design improves the quality of the recovery process for parameter estimation problems.
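A minimal sketch of the likelihood-free-estimator idea, assuming a hypothetical exponential-decay simulator: train a small network to regress parameters directly from simulated observations, then score a candidate design by how well the trained network recovers parameters on fresh simulations. The architecture and training settings are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(theta, design):
    """Hypothetical forward model: noisy exponential decay sampled at `design`."""
    return theta[:, :1] * torch.exp(-theta[:, 1:] * design) \
        + 0.05 * torch.randn(theta.shape[0], design.shape[0])

design = torch.tensor([0.5, 1.0, 2.0, 4.0])  # candidate sampling times
theta = torch.rand(4096, 2) * 2.0            # prior draws over parameters
y = simulate(theta, design)

# Likelihood-free estimator: regress parameters directly from observations.
net = nn.Sequential(nn.Linear(len(design), 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(y), theta)
    loss.backward()
    opt.step()

# Recovery error on fresh simulations scores the quality of `design`.
with torch.no_grad():
    theta_test = torch.rand(1024, 2) * 2.0
    err = nn.functional.mse_loss(net(simulate(theta_test, design)), theta_test)
print(f"design score (lower is better): {err.item():.4f}")
```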
arXiv Detail & Related papers (2024-06-20T05:13:33Z)
- Globally-Optimal Greedy Experiment Selection for Active Sequential Estimation [1.1530723302736279]
We study the problem of active sequential estimation, which involves adaptively selecting experiments for sequentially collected data.
The goal is to design experiment selection rules for more accurate model estimation.
We propose a class of greedy experiment selection methods and provide a statistical analysis of the resulting maximum-likelihood estimator.
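One common instantiation of greedy information-based selection, sketched below under illustrative assumptions: experiments are picked one at a time from a discrete candidate pool so as to maximize the log-determinant of the accumulated Fisher information (here, for a linear model). The paper's selection rule and analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.normal(size=(200, 3))  # feature vectors of candidate experiments

def log_det_fim(rows, ridge=1e-6):
    X = candidates[rows]
    # The ridge keeps early (rank-deficient) information matrices invertible.
    fim = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.slogdet(fim)[1]

chosen = []
for _ in range(10):  # select 10 experiments, one at a time
    best = max((i for i in range(len(candidates)) if i not in chosen),
               key=lambda i: log_det_fim(chosen + [i]))
    chosen.append(best)

print("selected experiment indices:", chosen)
```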
arXiv Detail & Related papers (2024-02-13T17:09:29Z)
- Effect Size Estimation for Duration Recommendation in Online Experiments: Leveraging Hierarchical Models and Objective Utility Approaches [13.504353263032359]
The selection of the assumed effect size (AES) critically determines the duration of an experiment, and hence its accuracy and efficiency.
Traditionally, experimenters determine AES based on domain knowledge, but this method becomes impractical for online experimentation services managing numerous experiments.
We propose two solutions for data-driven AES selection for online experimentation services.
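The link between the AES and the experiment's duration comes from a standard power calculation; a minimal sketch for a two-sample test with known outcome variance, using hypothetical traffic and effect-size numbers:

```python
from scipy.stats import norm

def required_n_per_arm(aes, sd, alpha=0.05, power=0.8):
    """Two-sample test: sample size per arm needed to detect effect `aes`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / aes) ** 2

aes = 0.02            # assumed effect size (absolute lift), hypothetical
sd = 0.5              # outcome standard deviation, hypothetical
users_per_day = 5000  # hypothetical traffic, split across two arms

n = required_n_per_arm(aes, sd)
days = 2 * n / users_per_day
print(f"n per arm: {n:,.0f}  ->  duration: {days:.1f} days")
```

Halving the assumed effect size quadruples the required sample, which is why AES selection drives both the duration and the cost of an experiment.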
arXiv Detail & Related papers (2023-12-20T09:34:28Z)
- DiscoBAX: Discovery of Optimal Intervention Sets in Genomic Experiment Design [61.48963555382729]
We propose DiscoBAX as a sample-efficient method for maximizing the rate of significant discoveries per experiment.
We provide theoretical guarantees of approximate optimality under standard assumptions, and conduct a comprehensive experimental evaluation.
arXiv Detail & Related papers (2023-12-07T06:05:39Z)
- Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z)
- A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
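The paper's combined-data estimator is not reproduced here, but the double machine learning ingredient it builds on can be sketched in its generic partialling-out form with cross-fitting; the data-generating process below is synthetic and illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 5))                      # confounders
T = X[:, 0] + rng.normal(size=n)                 # treatment
Y = 0.5 * T + X[:, 0] ** 2 + rng.normal(size=n)  # true effect = 0.5

# Cross-fitted nuisance estimates avoid overfitting bias.
t_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, T, cv=5)
y_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, Y, cv=5)

# Partialling-out: regress outcome residuals on treatment residuals.
t_res, y_res = T - t_hat, Y - y_hat
effect = (t_res @ y_res) / (t_res @ t_res)
print(f"estimated treatment effect: {effect:.3f}")  # should be near 0.5
```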
arXiv Detail & Related papers (2023-07-04T02:53:11Z)
- Online simulator-based experimental design for cognitive model selection [74.76661199843284]
We propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
In simulated experiments, we demonstrate that the proposed BOSMOS technique can accurately select models in up to 2 orders of magnitude less time than existing LFI alternatives.
arXiv Detail & Related papers (2023-03-03T21:41:01Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches that optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
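For reference, the quantity being bounded can be estimated directly (without amortization) by nested Monte Carlo; a minimal sketch for a toy linear-Gaussian model, which is an illustrative assumption rather than the paper's setup:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def eig_nmc(design, n_outer=500, n_inner=500, noise_sd=1.0):
    """Nested Monte Carlo estimate of the expected information gain for
    the toy model y ~ N(design * theta, noise_sd), theta ~ N(0, 1)."""
    theta = rng.normal(size=n_outer)
    y = design * theta + noise_sd * rng.normal(size=n_outer)
    log_lik = norm.logpdf(y, loc=design * theta, scale=noise_sd)
    theta_in = rng.normal(size=(n_outer, n_inner))  # fresh prior draws
    inner = norm.logpdf(y[:, None], loc=design * theta_in, scale=noise_sd)
    log_marginal = np.logaddexp.reduce(inner, axis=1) - np.log(n_inner)
    return np.mean(log_lik - log_marginal)

for d in [0.1, 1.0, 5.0]:
    print(f"design={d}: EIG ~ {eig_nmc(d):.3f}")
```

The nested estimator costs n_outer * n_inner likelihood evaluations per candidate design, which is the expense an amortized variational model is meant to avoid.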
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
- Bayesian Optimal Experimental Design for Simulator Models of Cognition [14.059933880568908]
We combine recent advances in BOED and approximate inference for intractable models to find optimal experimental designs.
Our simulation experiments on multi-armed bandit tasks show that our method results in improved model discrimination and parameter estimation.
arXiv Detail & Related papers (2021-10-29T09:04:01Z)
- Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
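A minimal sketch of the underlying bias and of the standard inverse-propensity-weighting correction, assuming the logging propensities are known for the observed rows; the paper's adaptive approach goes beyond this baseline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 20000
X = rng.normal(size=(n, 2))
y = (X @ np.array([1.0, -1.0]) + rng.logistic(size=n) > 0).astype(int)

# Biased sampling: whether an outcome is observed depends on both the
# features and the label (only one side of past decisions is logged).
p_obs = np.where(y == 1, 1 / (1 + np.exp(-2 * X[:, 0])),
                         1 / (1 + np.exp(+2 * X[:, 0])))
observed = rng.random(n) < p_obs

# Naive fit on the biased sample vs. inverse-propensity-weighted fit.
naive = LogisticRegression().fit(X[observed], y[observed])
ipw = LogisticRegression().fit(X[observed], y[observed],
                               sample_weight=1 / p_obs[observed])
print("true coef ~ [1, -1]")
print("naive:", naive.coef_.round(2), " ipw:", ipw.coef_.round(2))
```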
arXiv Detail & Related papers (2020-06-08T18:20:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.