Admissibility of Completely Randomized Trials: A Large-Deviation Approach
- URL: http://arxiv.org/abs/2506.05329v1
- Date: Thu, 05 Jun 2025 17:58:43 GMT
- Title: Admissibility of Completely Randomized Trials: A Large-Deviation Approach
- Authors: Guido Imbens, Chao Qin, Stefan Wager
- Abstract summary: We find that whenever there are at least three treatment arms, there exist simple adaptive designs that universally and strictly dominate non-adaptive completely randomized trials. This dominance is characterized by a notion called efficiency exponent, which quantifies a design's statistical efficiency when the experimental sample is large.
- Score: 4.970364068620608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When an experimenter has the option of running an adaptive trial, is it admissible to ignore this option and run a non-adaptive trial instead? We provide a negative answer to this question in the best-arm identification problem, where the experimenter aims to allocate measurement efforts judiciously to confidently deploy the most effective treatment arm. We find that, whenever there are at least three treatment arms, there exist simple adaptive designs that universally and strictly dominate non-adaptive completely randomized trials. This dominance is characterized by a notion called efficiency exponent, which quantifies a design's statistical efficiency when the experimental sample is large. Our analysis focuses on the class of batched arm elimination designs, which progressively eliminate underperforming arms at pre-specified batch intervals. We characterize simple sufficient conditions under which these designs universally and strictly dominate completely randomized trials. These results resolve the second open problem posed in Qin [2022].
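To make the design class concrete: a batched arm elimination design pulls all surviving arms within each batch and drops the empirically worst arm at each pre-specified batch boundary. The following is a toy sketch of that idea, not the authors' implementation; the Gaussian reward model, batch size, and function name are illustrative assumptions.

```python
import random

def batched_arm_elimination(means, batch_size, seed=0):
    """Toy batched arm elimination: sample every surviving arm batch_size
    times per batch, then eliminate the arm with the lowest empirical mean
    at each pre-specified batch boundary until one arm remains."""
    rng = random.Random(seed)
    active = list(range(len(means)))
    totals = [0.0] * len(means)
    counts = [0] * len(means)
    while len(active) > 1:
        for arm in active:
            for _ in range(batch_size):
                totals[arm] += rng.gauss(means[arm], 1.0)  # Gaussian rewards (assumption)
                counts[arm] += 1
        # batch boundary: drop the empirically worst surviving arm
        worst = min(active, key=lambda a: totals[a] / counts[a])
        active.remove(worst)
    return active[0]

best = batched_arm_elimination([0.1, 0.5, 0.9], batch_size=500)
```

With well-separated means and a moderate batch size, the surviving arm is the true best arm with high probability; the elimination schedule (one arm per batch) is what the efficiency-exponent analysis compares against a completely randomized trial.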
Related papers
- Inference for Batched Adaptive Experiments [0.0]
This note suggests a BOLS test statistic for inference of treatment effects in adaptive experiments. We provide simulation results comparing rejection rates in the typical case with few treatment periods and few (or many) observations per batch.
arXiv Detail & Related papers (2025-12-10T23:33:08Z)
- Minimax and Bayes Optimal Best-arm Identification: Adaptive Experimental Design for Treatment Choice [10.470114319701576]
We consider an adaptive procedure consisting of a treatment-allocation phase followed by a treatment-choice phase. We design an adaptive experiment for this setup to efficiently identify the best treatment arm. We prove that this single design is simultaneously minimax and Bayes optimal for the simple regret.
arXiv Detail & Related papers (2025-06-30T16:11:44Z)
- Simulation-Based Inference for Adaptive Experiments [38.841210420855276]
Multi-arm bandit experimental designs are increasingly being adopted over standard randomized trials. We propose a simulation-based approach for conducting hypothesis tests and constructing confidence intervals for arm-specific means. Our results show that our approach achieves the desired coverage while reducing confidence interval widths by up to 50%, with drastic improvements for arms not targeted by the design.
arXiv Detail & Related papers (2025-06-03T13:46:59Z)
- Optimal Adaptive Experimental Design for Estimating Treatment Effect [14.088972921434761]
This paper addresses the fundamental question of determining the optimal accuracy in estimating the treatment effect.
By incorporating the concept of doubly robust method into sequential experimental design, we frame the optimal estimation problem as an online bandit learning problem.
Using tools and ideas from both bandit algorithm design and adaptive statistical estimation, we propose a general low switching adaptive experiment framework.
arXiv Detail & Related papers (2024-10-07T23:22:51Z)
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification [9.030753181146176]
We propose a unified model that simultaneously accounts for within-experiment performance and post-experiment outcomes.
We show that substantial reductions in experiment duration can often be achieved with minimal impact on both within-experiment and post-experiment regret.
arXiv Detail & Related papers (2024-02-16T11:27:48Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
- MetaRF: Differentiable Random Forest for Reaction Yield Prediction with a Few Trails [58.47364143304643]
In this paper, we focus on the reaction yield prediction problem.
We first put forth MetaRF, an attention-based differentiable random forest model specially designed for the few-shot yield prediction.
To improve the few-shot learning performance, we further introduce a dimension-reduction based sampling method.
arXiv Detail & Related papers (2022-08-22T06:40:13Z)
- Adaptive Experimentation in the Presence of Exogenous Nonstationary Variation [10.66863856524397]
Multi-armed bandit algorithms can enhance efficiency by dynamically allocating measurement effort towards higher performing arms.
We propose deconfounded Thompson sampling (DTS), a more robust variant of the prominent Thompson sampling algorithm.
We show that a deconfounded variant of the popular upper confidence bound algorithm can fail completely.
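For context, deconfounded Thompson sampling (DTS) is a variant of standard Thompson sampling. A minimal Beta-Bernoulli sketch of the vanilla algorithm follows; this setup, and all names and parameters in it, are illustrative assumptions, not the paper's construction.

```python
import random

def thompson_sampling(success_probs, horizon, seed=0):
    """Vanilla Beta-Bernoulli Thompson sampling: draw a mean from each
    arm's Beta posterior, pull the arm with the highest draw, and update
    that arm's posterior with the observed binary reward."""
    rng = random.Random(seed)
    k = len(success_probs)
    alpha = [1] * k  # Beta(1, 1) uniform priors
    beta = [1] * k
    pulls = [0] * k
    for _ in range(horizon):
        draws = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        arm = max(range(k), key=lambda a: draws[a])
        reward = 1 if rng.random() < success_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.8], horizon=1000)
```

The dynamic allocation visible here (measurement effort concentrating on the higher-performing arm) is exactly what exogenous nonstationary variation can confound, motivating the deconfounded variant.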
arXiv Detail & Related papers (2022-02-18T06:09:04Z)
- Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing [104.630875328668]
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample. We present a novel yet simple Mixup variant that captures the best of both worlds.
arXiv Detail & Related papers (2021-12-16T11:27:48Z)
- Robust Sampling in Deep Learning [62.997667081978825]
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization.
We address this problem by a new regularization method based on distributional robust optimization.
During training, samples are selected according to their accuracy, so that the worst-performing samples are the ones that contribute the most to the optimization.
arXiv Detail & Related papers (2020-06-04T09:46:52Z)
- Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design [63.48989885374238]
When the infection prevalence of a disease is low, Dorfman showed 80 years ago that testing groups of people can prove more efficient than testing people individually.
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting.
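Dorfman's classical two-stage scheme pools g people into one test and retests everyone individually only when the pool is positive, so the expected number of tests per person is 1/g + 1 - (1 - p)^g at prevalence p. A small sketch of that arithmetic (illustrative helper name, not from the paper):

```python
def dorfman_tests_per_person(p, g):
    """Expected tests per person under Dorfman's two-stage pooled testing:
    one pooled test shared by g people, plus g individual retests whenever
    the pool is positive, which happens with probability 1 - (1 - p)**g."""
    return 1.0 / g + 1.0 - (1.0 - p) ** g

# at 1% prevalence, pools of 10 need roughly 0.2 tests per person
rate = dorfman_tests_per_person(0.01, 10)
```

Note that pooling only pays off at low prevalence: as p grows, 1 - (1 - p)^g approaches 1 and the scheme costs more than one test per person, which is why the low-prevalence regime is the interesting one.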
arXiv Detail & Related papers (2020-04-26T23:41:33Z)
- Optimal Experimental Design for Staggered Rollouts [11.187415608299075]
We study the design and analysis of experiments conducted on a set of units over multiple time periods where the starting time of the treatment may vary by unit.
We propose a new algorithm, the Precision-Guided Adaptive Experiment (PGAE) algorithm, that addresses the challenges at both the design stage and at the stage of estimating treatment effects.
arXiv Detail & Related papers (2019-11-09T19:46:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.