Adaptive Instrument Design for Indirect Experiments
- URL: http://arxiv.org/abs/2312.02438v1
- Date: Tue, 5 Dec 2023 02:38:04 GMT
- Title: Adaptive Instrument Design for Indirect Experiments
- Authors: Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill
- Abstract summary: Unlike RCTs, indirect experiments estimate treatment effects by leveraging (conditional) instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
- Score: 48.815194906471405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Indirect experiments provide a valuable framework for estimating treatment
effects in situations where conducting randomized controlled trials (RCTs) is
impractical or unethical. Unlike RCTs, indirect experiments estimate treatment
effects by leveraging (conditional) instrumental variables, enabling estimation
through encouragement and recommendation rather than strict treatment
assignment. However, the sample efficiency of such estimators depends not only
on the inherent variability in outcomes but also on the varying compliance
levels of users with the instrumental variables and the choice of estimator
being used, especially when dealing with numerous instrumental variables. While
adaptive experiment design has a rich literature for direct experiments, in
this paper we take the initial steps towards enhancing sample efficiency for
indirect experiments by adaptively designing a data collection policy over
instrumental variables. Our main contribution is a practical computational
procedure that utilizes influence functions to search for an optimal data
collection policy, minimizing the mean-squared error of the desired
(non-linear) estimator. Through experiments conducted in various domains
inspired by real-world applications, we showcase how our method can
significantly improve the sample efficiency of indirect experiments.
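The paper's general procedure targets arbitrary non-linear estimators; as a rough illustration of the core idea only, here is a minimal sketch for the special case of a single binary instrument and a Wald estimator, where reallocating based on influence-function residuals reduces to a Neyman-style rule. The simulated environment and all names are hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_batch(n, p_encourage):
    # Hypothetical encouragement design: Z is the instrument (encouragement),
    # T is treatment uptake under one-sided non-compliance, Y is the outcome.
    z = rng.binomial(1, p_encourage, n)
    compliers = rng.binomial(1, 0.6, n)          # 60% of users comply
    t = z * compliers
    y = 2.0 * t + rng.normal(0.0, 1.0 + z, n)    # noisier under encouragement
    return z, t, y

def wald(z, t, y):
    # Wald/IV estimator: outcome difference over uptake difference.
    num = y[z == 1].mean() - y[z == 0].mean()
    den = t[z == 1].mean() - t[z == 0].mean()
    return num / den

p = 0.5  # start from a uniform data collection policy
z_all = np.empty(0, dtype=int)
t_all = np.empty(0, dtype=int)
y_all = np.empty(0, dtype=float)
for _ in range(10):
    z, t, y = simulate_batch(500, p)
    z_all = np.concatenate([z_all, z])
    t_all = np.concatenate([t_all, t])
    y_all = np.concatenate([y_all, y])
    beta = wald(z_all, t_all, y_all)
    # Influence-function residuals of the Wald estimator; shift the
    # encouragement probability toward the arm with noisier residuals,
    # which is what minimizes the estimator's asymptotic variance here.
    resid = y_all - beta * t_all
    s1, s0 = resid[z_all == 1].std(), resid[z_all == 0].std()
    p = float(np.clip(s1 / (s1 + s0), 0.1, 0.9))

print(f"IV estimate: {beta:.3f}  final allocation p(Z=1): {p:.2f}")
```

In this toy setting the encouraged arm has higher outcome noise, so the adaptive policy drifts above 0.5, spending more samples where they reduce MSE the most.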
Related papers
- Optimal Adaptive Experimental Design for Estimating Treatment Effect [14.088972921434761]
This paper addresses the fundamental question of the best accuracy attainable when estimating the treatment effect.
By incorporating doubly robust estimation into sequential experimental design, we frame the optimal estimation problem as an online bandit learning problem.
Using tools and ideas from both bandit algorithm design and adaptive statistical estimation, we propose a general low-switching adaptive experiment framework.
arXiv Detail & Related papers (2024-10-07T23:22:51Z)
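For reference, the doubly robust (AIPW) estimator that such designs are built around can be written in a few lines. This is a standard textbook sketch assuming fitted nuisance models are supplied; it is not the paper's bandit algorithm.

```python
import numpy as np

def aipw_ate(y, t, e_hat, mu1_hat, mu0_hat):
    """Augmented inverse-propensity-weighted (doubly robust) estimate of
    the average treatment effect. Consistent if either the propensity
    model e_hat or the outcome models (mu1_hat, mu0_hat) is correct.

    y: outcomes; t: binary treatment indicators;
    e_hat: fitted P(T=1 | X); mu1_hat/mu0_hat: fitted E[Y | X, T=1 or 0].
    """
    return np.mean(
        mu1_hat - mu0_hat
        + t * (y - mu1_hat) / e_hat
        - (1 - t) * (y - mu0_hat) / (1 - e_hat)
    )
```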
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Effect Size Estimation for Duration Recommendation in Online Experiments: Leveraging Hierarchical Models and Objective Utility Approaches [13.504353263032359]
The selection of the assumed effect size (AES) critically determines the duration of an experiment, and hence its accuracy and efficiency.
Traditionally, experimenters determine AES based on domain knowledge, but this method becomes impractical for online experimentation services managing numerous experiments.
We propose two solutions for data-driven AES selection for online experimentation services.
arXiv Detail & Related papers (2023-12-20T09:34:28Z)
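As background on why the AES matters so much, the standard two-arm z-test sizing formula maps an assumed effect size directly to a run time. The sketch below shows that mapping only; it is not the paper's hierarchical or utility-based selection method.

```python
from math import ceil

from scipy.stats import norm

def required_days(aes, sigma, daily_users, alpha=0.05, power=0.8):
    """Days a two-arm experiment must run to detect the assumed effect
    size (AES) on a metric with standard deviation sigma, splitting
    daily_users evenly between arms (two-sided z-test)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    n_per_arm = 2 * sigma**2 * (z_alpha + z_beta) ** 2 / aes**2
    return ceil(2 * n_per_arm / daily_users)

# Halving the AES roughly quadruples the required duration.
print(required_days(aes=0.05, sigma=1.0, daily_users=10_000))   # -> 2
print(required_days(aes=0.025, sigma=1.0, daily_users=10_000))  # -> 6
```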
- Choosing a Proxy Metric from Past Experiments [54.338884612982405]
In many randomized experiments, the treatment effect on the long-term metric of interest is often difficult or infeasible to measure.
A common alternative is to measure several short-term proxy metrics in the hope they closely track the long-term metric.
We introduce a new statistical framework to both define and construct an optimal proxy metric for use in a homogeneous population of randomized experiments.
arXiv Detail & Related papers (2023-09-14T17:43:02Z)
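One simple instance of the idea, not the paper's full framework: regress the long-term treatment effects observed in past experiments on the short-term effects, and use the fitted combination as the proxy. The data below is made up for illustration.

```python
import numpy as np

# Rows: past experiments; columns: estimated short-term metric effects.
short_effects = np.array([[0.20, 0.10],
                          [0.05, 0.30],
                          [0.40, 0.20],
                          [0.10, 0.05]])
# Estimated long-term effect in the same experiments.
long_effects = np.array([0.15, 0.12, 0.30, 0.06])

# Least-squares weights: the linear combination of short-term effects
# that best predicts the long-term effect across past experiments.
weights, *_ = np.linalg.lstsq(short_effects, long_effects, rcond=None)
print("proxy weights:", weights)
```

A full treatment would also account for the estimation noise in each experiment's effect estimates, which is part of what the paper's framework formalizes.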
- A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
arXiv Detail & Related papers (2023-07-04T02:53:11Z)
- Task-specific experimental design for treatment effect estimation [59.879567967089145]
Large randomised controlled trials (RCTs) are the standard for causal inference.
Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought.
We develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications.
arXiv Detail & Related papers (2023-06-08T18:10:37Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
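To make the quantity being amortized concrete, here is a plain nested Monte Carlo EIG estimate for a toy linear-Gaussian model; the paper's contribution is to replace this per-design computation with a single variational network. The model and names are illustrative only.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

def nmc_eig(design, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of expected information gain for the
    toy model theta ~ N(0, 1), y | theta, design ~ N(design * theta, 1).
    Analytic answer for comparison: 0.5 * log(1 + design**2)."""
    theta = rng.normal(0.0, 1.0, n_outer)
    y = design * theta + rng.normal(0.0, 1.0, n_outer)
    log_lik = -0.5 * (y - design * theta) ** 2      # Gaussian constants cancel
    theta_inner = rng.normal(0.0, 1.0, (n_inner, 1))
    inner = -0.5 * (y[None, :] - design * theta_inner) ** 2
    log_marginal = logsumexp(inner, axis=0) - np.log(n_inner)
    return float(np.mean(log_lik - log_marginal))

print(nmc_eig(2.0))  # approx 0.5 * log(5) ~= 0.80
```

The nested estimator must be recomputed from scratch for every candidate design, which is exactly the cost that design amortization avoids.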
- GEAR: On Optimal Decision Making with Auxiliary Data [20.607673853640744]
Current optimal decision rule (ODR) methods usually require that the sample used for assessing treatment effects, namely the experimental sample, contains the primary outcome of interest.
This paper addresses this challenge by making use of an auxiliary sample to facilitate estimation of the ODR in the experimental sample.
We propose an auGmented inverse propensity weighted Experimental and Auxiliary sample-based decision Rule (GEAR) by maximizing the augmented inverse propensity weighted value estimator over a class of decision rules.
arXiv Detail & Related papers (2021-04-21T14:59:25Z)
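GEAR's objective can be pictured as the standard AIPW value of a candidate rule, maximized over a rule class. The sketch below shows only that value estimate, with the nuisance fits (which GEAR obtains with the help of the auxiliary sample) assumed given; all argument names are illustrative.

```python
import numpy as np

def aipw_policy_value(y, t, d, prop_obs, mu_obs, mu_d):
    """AIPW estimate of the value of a decision rule.

    y: outcomes; t: observed binary actions; d: actions the rule recommends;
    prop_obs: probability of the observed action given covariates;
    mu_obs: fitted outcome under the observed action;
    mu_d: fitted outcome under the rule's recommended action.
    """
    match = (t == d).astype(float)
    return np.mean(match * (y - mu_obs) / prop_obs + mu_d)
```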
- Efficient Adaptive Experimental Design for Average Treatment Effect Estimation [18.027128141189355]
We propose an algorithm for efficient experiments with estimators constructed from dependent samples.
To justify our proposed approach, we provide both finite-sample and asymptotic analyses.
arXiv Detail & Related papers (2020-02-13T02:04:17Z)