Efficient Adaptive Experimental Design for Average Treatment Effect
Estimation
- URL: http://arxiv.org/abs/2002.05308v4
- Date: Tue, 26 Oct 2021 10:01:31 GMT
- Title: Efficient Adaptive Experimental Design for Average Treatment Effect
Estimation
- Authors: Masahiro Kato, Takuya Ishihara, Junya Honda, Yusuke Narita
- Abstract summary: We propose an algorithm for efficient experiments with estimators constructed from dependent samples.
To justify our proposed approach, we provide finite and infinite sample analyses.
- Score: 18.027128141189355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of many scientific experiments including A/B testing is to estimate
the average treatment effect (ATE), which is defined as the difference between
the expected outcomes of two or more treatments. In this paper, we consider a
situation where an experimenter can assign a treatment to research subjects
sequentially. In adaptive experimental design, the experimenter is allowed to
change the probability of assigning a treatment based on past observations in
order to estimate the ATE efficiently. However, with this approach, it is
difficult to
apply a standard statistical method to construct an estimator because the
observations are not independent and identically distributed. We thus propose
an algorithm for efficient experiments with estimators constructed from
dependent samples. We also introduce a sequential testing framework using the
proposed estimator. To justify our proposed approach, we provide finite and
infinite sample analyses. Finally, we experimentally show that the proposed
algorithm exhibits preferable performance.
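A minimal sketch of the setting described in the abstract: the ATE is the contrast E[Y(1)] - E[Y(0)], the treatment-assignment probability is updated from past observations, and the estimator must remain valid even though the resulting samples are dependent. The code below is only an illustration under assumed names and a hypothetical outcome model, not the paper's algorithm: it steers a Bernoulli assignment probability toward a Neyman-style allocation and estimates the ATE by inverse-probability weighting with the propensity that was actually used at each round.

```python
import numpy as np

rng = np.random.default_rng(seed=0)


def run_adaptive_experiment(n_rounds=5000, burn_in=100):
    """Simulate a sequential experiment with adaptive assignment probabilities.

    Assignment is uniform during a short burn-in and is then steered toward a
    Neyman-style allocation using the running standard deviations of the
    observed outcomes in each arm. The ATE is estimated by inverse-probability
    weighting with the propensity used at each round, which depends only on
    the past observations.
    """
    outcomes = {0: [], 1: []}   # observed outcomes per arm
    ipw_terms = []              # per-round IPW contributions to the ATE estimate

    for t in range(n_rounds):
        if t < burn_in or not outcomes[0] or not outcomes[1]:
            p1 = 0.5            # uniform assignment until both arms have data
        else:
            s1 = np.std(outcomes[1]) + 1e-6
            s0 = np.std(outcomes[0]) + 1e-6
            # Neyman-style allocation, clipped away from 0 and 1.
            p1 = float(np.clip(s1 / (s1 + s0), 0.1, 0.9))

        d = int(rng.binomial(1, p1))   # treatment assignment for this round
        # Hypothetical data-generating process: the true ATE is 1.0 and the
        # treated arm is noisier, so adaptation samples it more often.
        y = rng.normal(loc=1.0 * d, scale=1.0 + d)

        outcomes[d].append(y)
        ipw_terms.append(d * y / p1 - (1 - d) * y / (1 - p1))

    return float(np.mean(ipw_terms))


if __name__ == "__main__":
    print("IPW estimate of the ATE (true value 1.0):", run_adaptive_experiment())
```

Because each round's assignment probability depends only on earlier observations, each weighted term is centered at the ATE given the past; this kind of dependence structure is what makes estimation from adaptively collected samples tractable.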
Related papers
- Prediction-Guided Active Experiments [18.494123886098215]
We introduce a new framework for active experimentation, the Prediction-Guided Active Experiment (PGAE).
PGAE leverages predictions from an existing machine learning model to guide sampling and experimentation.
We show that PGAE remains efficient and attains the same semi-parametric bound under certain regularity assumptions.
arXiv Detail & Related papers (2024-11-18T20:16:24Z) - Optimal Adaptive Experimental Design for Estimating Treatment Effect [14.088972921434761]
This paper addresses the fundamental question of determining the optimal accuracy in estimating the treatment effect.
By incorporating the doubly robust method into sequential experimental design, we frame the optimal estimation problem as an online bandit learning problem (a generic doubly robust estimator is sketched after this list).
Using tools and ideas from both bandit algorithm design and adaptive statistical estimation, we propose a general low switching adaptive experiment framework.
arXiv Detail & Related papers (2024-10-07T23:22:51Z) - Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z) - Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices [7.21848268647674]
This study designs an adaptive experiment for efficiently estimating average treatment effects (ATEs).
In each round of our adaptive experiment, an experimenter samples an experimental unit, assigns a treatment, and observes the corresponding outcome immediately.
At the end of the experiment, the experimenter estimates an ATE using the gathered samples.
arXiv Detail & Related papers (2024-03-06T10:24:44Z) - Efficient adjustment for complex covariates: Gaining efficiency with
DOPE [56.537164957672715]
We propose a framework that accommodates adjustment for any subset of information expressed by the covariates.
Based on our theoretical results, we propose the Debiased Outcome-adapted Propensity Estimator (DOPE) for efficient estimation of the average treatment effect (ATE).
Our results show that the DOPE provides an efficient and robust methodology for ATE estimation in various observational settings.
arXiv Detail & Related papers (2024-02-20T13:02:51Z) - Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z) - Choosing a Proxy Metric from Past Experiments [54.338884612982405]
In many randomized experiments, the treatment effect on the long-term metric is difficult or infeasible to measure.
A common alternative is to measure several short-term proxy metrics in the hope they closely track the long-term metric.
We introduce a new statistical framework to both define and construct an optimal proxy metric for use in a homogeneous population of randomized experiments.
arXiv Detail & Related papers (2023-09-14T17:43:02Z) - A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
arXiv Detail & Related papers (2023-07-04T02:53:11Z) - Scalable method for Bayesian experimental design without integrating
over posterior distribution [0.0]
We address computational efficiency in solving A-optimal Bayesian experimental design problems.
A-optimality is a widely used and easy-to-interpret criterion for Bayesian experimental design.
This study presents a novel likelihood-free approach to the A-optimal experimental design.
arXiv Detail & Related papers (2023-06-30T12:40:43Z) - Predicting Performance for Natural Language Processing Tasks [128.34208911925424]
We build regression models to predict the evaluation score of an NLP experiment given the experimental settings as input.
Experimenting on 9 different NLP tasks, we find that our predictors can produce meaningful predictions over unseen languages and different modeling architectures.
arXiv Detail & Related papers (2020-05-02T16:02:18Z)
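For reference, the "doubly robust method" mentioned in the Optimal Adaptive Experimental Design entry above typically refers to an augmented inverse-probability-weighting (AIPW) estimator; a generic two-arm form (not that paper's specific construction) is:

```latex
\hat{\tau}_{\mathrm{AIPW}}
  = \frac{1}{n}\sum_{i=1}^{n}\left[
      \hat{\mu}_1(X_i) - \hat{\mu}_0(X_i)
      + \frac{D_i\,\bigl(Y_i - \hat{\mu}_1(X_i)\bigr)}{\hat{e}(X_i)}
      - \frac{(1 - D_i)\,\bigl(Y_i - \hat{\mu}_0(X_i)\bigr)}{1 - \hat{e}(X_i)}
    \right]
```

where D_i is the treatment indicator, e-hat the estimated propensity score, and mu-hat_d the estimated outcome regressions; the estimator remains consistent if either nuisance estimate is consistent, which is what makes it attractive for adaptive designs.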