Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification
- URL: http://arxiv.org/abs/2402.10592v2
- Date: Tue, 30 Jul 2024 08:48:04 GMT
- Title: Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification
- Authors: Chao Qin, Daniel Russo
- Abstract summary: We propose a unified model that simultaneously accounts for within-experiment performance and post-experiment outcomes.
We show that substantial reductions in experiment duration can often be achieved with minimal impact on both within-experiment and post-experiment regret.
- Score: 9.030753181146176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Practitioners conducting adaptive experiments often encounter two competing priorities: maximizing total welfare (or 'reward') through effective treatment assignment and swiftly concluding experiments to implement population-wide treatments. Current literature addresses these priorities separately, with regret minimization studies focusing on the former and best-arm identification research on the latter. This paper bridges this divide by proposing a unified model that simultaneously accounts for within-experiment performance and post-experiment outcomes. We provide a sharp theory of optimal performance in large populations that not only unifies canonical results in the literature but also uncovers novel insights. Our theory reveals that familiar algorithms, such as the recently proposed top-two Thompson sampling algorithm, can optimize a broad class of objectives if a single scalar parameter is appropriately adjusted. In addition, we demonstrate that substantial reductions in experiment duration can often be achieved with minimal impact on both within-experiment and post-experiment regret.
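To make the abstract's claim concrete, here is a minimal sketch of top-two Thompson sampling for Bernoulli rewards, where `beta` is the single scalar parameter the authors refer to (the probability of playing the posterior leader rather than a challenger). This is a generic illustration of the sampling rule, not the paper's tuned variant:

```python
import numpy as np

def top_two_thompson_step(successes, failures, beta=0.5, rng=None, max_resamples=100):
    """Select the next arm via top-two Thompson sampling (Bernoulli arms,
    Beta(1,1) priors). `beta` is the scalar tuning parameter: the
    probability of playing the posterior leader instead of a challenger."""
    rng = rng or np.random.default_rng()
    # Leader: the best arm under one draw from the posterior.
    leader = int(np.argmax(rng.beta(successes + 1, failures + 1)))
    if rng.random() < beta:
        return leader
    # Challenger: resample the posterior until a different arm wins.
    for _ in range(max_resamples):
        challenger = int(np.argmax(rng.beta(successes + 1, failures + 1)))
        if challenger != leader:
            return challenger
    return leader  # fallback when the posterior is very concentrated

# Example loop: two arms with unknown success rates 0.5 and 0.6.
rng = np.random.default_rng(0)
succ, fail = np.zeros(2), np.zeros(2)
for _ in range(1000):
    arm = top_two_thompson_step(succ, fail, beta=0.5, rng=rng)
    reward = rng.random() < (0.5, 0.6)[arm]
    succ[arm] += reward
    fail[arm] += 1 - reward
```

Adjusting `beta` trades off within-experiment reward against identification speed: `beta=1` always plays the leader and reduces to ordinary Thompson sampling, while smaller values allocate more samples to challengers.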
Related papers
- Prediction-Guided Active Experiments [18.494123886098215]
We introduce a new framework for active experimentation, the Prediction-Guided Active Experiment (PGAE).
PGAE leverages predictions from an existing machine learning model to guide sampling and experimentation.
We show that PGAE remains efficient and attains the same semi-parametric bound under certain regularity assumptions.
arXiv Detail & Related papers (2024-11-18T20:16:24Z)
- Optimal Adaptive Experimental Design for Estimating Treatment Effect [14.088972921434761]
This paper addresses the fundamental question of the best achievable accuracy in estimating the treatment effect.
By incorporating doubly robust estimation into sequential experimental design, we frame the optimal estimation problem as an online bandit learning problem.
Using tools and ideas from both bandit algorithm design and adaptive statistical estimation, we propose a general low-switching adaptive experiment framework.
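As a point of reference for the doubly robust idea, here is a minimal sketch of the standard AIPW (augmented inverse-propensity-weighted) ATE estimator; it is a generic textbook construction, not the paper's sequential design, and all argument names are illustrative:

```python
import numpy as np

def aipw_ate(y, t, prop, mu1, mu0):
    """Doubly robust (AIPW) estimate of the average treatment effect.
    y: observed outcomes; t: binary treatment indicators; prop: estimated
    propensity scores P(T=1|X); mu1/mu0: estimated outcome regressions
    E[Y|X,T=1] and E[Y|X,T=0]. The estimate remains consistent if either
    the propensity model or the outcome model is correctly specified."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    prop, mu1, mu0 = (np.asarray(a, float) for a in (prop, mu1, mu0))
    scores = (mu1 - mu0
              + t * (y - mu1) / prop
              - (1 - t) * (y - mu0) / (1 - prop))
    return scores.mean()
```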
arXiv Detail & Related papers (2024-10-07T23:22:51Z)
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices [7.21848268647674]
This study designs an adaptive experiment for efficiently estimating average treatment effects (ATEs).
In each round of our adaptive experiment, an experimenter samples an experimental unit, assigns a treatment, and observes the corresponding outcome immediately.
At the end of the experiment, the experimenter estimates an ATE using the gathered samples.
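That round structure can be sketched as a simple loop. The allocation rule below (Neyman-style sampling toward the noisier arm) and the callback `observe_outcome` are illustrative stand-ins, not the paper's covariate-aware design:

```python
import numpy as np

def adaptive_ate_experiment(observe_outcome, n_rounds, rng=None):
    """Each round: sample a unit, assign a treatment, observe the outcome
    immediately; at the end, estimate the ATE from the gathered samples."""
    rng = rng or np.random.default_rng()
    outcomes = {0: [], 1: []}
    for i in range(n_rounds):
        if min(len(outcomes[0]), len(outcomes[1])) < 2:
            arm = i % 2  # burn-in: visit both arms before adapting
        else:
            s0, s1 = np.std(outcomes[0]), np.std(outcomes[1])
            p_treat = s1 / (s0 + s1) if s0 + s1 > 0 else 0.5
            arm = int(rng.random() < p_treat)  # favor the noisier arm
        outcomes[arm].append(observe_outcome(arm))
    return np.mean(outcomes[1]) - np.mean(outcomes[0])

# Example: true ATE is 1.0, and treated outcomes are noisier.
rng = np.random.default_rng(0)
est = adaptive_ate_experiment(lambda a: rng.normal(a * 1.0, 1 + a), 2000, rng)
```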
arXiv Detail & Related papers (2024-03-06T10:24:44Z)
- Effect Size Estimation for Duration Recommendation in Online Experiments: Leveraging Hierarchical Models and Objective Utility Approaches [13.504353263032359]
The selection of the assumed effect size (AES) critically determines the duration of an experiment, and hence its accuracy and efficiency.
Traditionally, experimenters determine AES based on domain knowledge, but this method becomes impractical for online experimentation services managing numerous experiments.
We propose two solutions for data-driven AES selection for online experimentation services.
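To see why the AES drives duration, the standard fixed-horizon sample-size formula makes the dependence explicit: halving the AES quadruples the required sample. This is a textbook calculation for illustration, not either of the paper's proposed solutions, and `daily_traffic_per_arm` is a hypothetical parameter:

```python
from scipy.stats import norm

def days_needed(aes, sigma, daily_traffic_per_arm, alpha=0.05, power=0.8):
    """Two-sample z-test sample size, converted to experiment duration.
    aes: assumed effect size (difference in means); sigma: outcome
    standard deviation; daily_traffic_per_arm: units per arm per day."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_arm = 2 * (sigma * z / aes) ** 2  # required sample per arm
    return n_per_arm / daily_traffic_per_arm

# e.g. aes=0.1, sigma=1.0, 500 users/arm/day -> roughly 3 days.
```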
arXiv Detail & Related papers (2023-12-20T09:34:28Z)
- Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z)
- Choosing a Proxy Metric from Past Experiments [54.338884612982405]
In many randomized experiments, the treatment effect on the long-term metric is often difficult or infeasible to measure.
A common alternative is to measure several short-term proxy metrics in the hope they closely track the long-term metric.
We introduce a new statistical framework to both define and construct an optimal proxy metric for use in a homogeneous population of randomized experiments.
arXiv Detail & Related papers (2023-09-14T17:43:02Z)
- Adaptive Identification of Populations with Treatment Benefit in Clinical Trials: Machine Learning Challenges and Solutions [78.31410227443102]
We study the problem of adaptively identifying patient subpopulations that benefit from a given treatment during a confirmatory clinical trial.
We propose AdaGGI and AdaGCPI, two meta-algorithms for subpopulation construction.
arXiv Detail & Related papers (2022-08-11T14:27:49Z)
- Robust Sampling in Deep Learning [62.997667081978825]
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization.
We address this problem with a new regularization method based on distributionally robust optimization.
During training, samples are selected according to their accuracy, so that the worst-performing samples are the ones that contribute the most to the optimization.
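One common way to realize this kind of weighting is an exponential tilt of the per-sample losses (a KL-regularized surrogate for distributionally robust optimization); this is a generic sketch, not necessarily the paper's exact scheme:

```python
import numpy as np

def dro_sample_weights(per_sample_losses, temperature=1.0):
    """Weights that make the worst-performing samples contribute most.
    Lower temperature concentrates weight on the hardest samples;
    higher temperature approaches uniform weighting."""
    losses = np.asarray(per_sample_losses, float)
    w = np.exp((losses - losses.max()) / temperature)  # numerically stable
    return w / w.sum()

# A training step then minimizes the weighted loss: (weights * losses).sum()
```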
arXiv Detail & Related papers (2020-06-04T09:46:52Z)
- Incorporating Expert Prior Knowledge into Experimental Design via Posterior Sampling [58.56638141701966]
Experimenters often have prior knowledge about the location of the global optimum.
However, it has been unclear how to incorporate such expert prior knowledge into Bayesian optimization.
An efficient Bayesian optimization approach is proposed via posterior sampling on the posterior distribution of the global optimum.
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
- Optimal Experimental Design for Staggered Rollouts [11.187415608299075]
We study the design and analysis of experiments conducted on a set of units over multiple time periods where the starting time of the treatment may vary by unit.
We propose a new algorithm, the Precision-Guided Adaptive Experiment (PGAE) algorithm, that addresses the challenges at both the design stage and at the stage of estimating treatment effects.
arXiv Detail & Related papers (2019-11-09T19:46:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.