Adaptive Experimental Design and Counterfactual Inference
- URL: http://arxiv.org/abs/2210.14369v1
- Date: Tue, 25 Oct 2022 22:29:16 GMT
- Title: Adaptive Experimental Design and Counterfactual Inference
- Authors: Tanner Fiez, Sergio Gamez, Arick Chen, Houssam Nassif, Lalit Jain
- Abstract summary: This paper shares lessons learned regarding the challenges and pitfalls of naively using adaptive experimentation systems in industrial settings.
We developed an adaptive experimental design framework for counterfactual inference based on these experiences.
- Score: 20.666734673282495
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Adaptive experimental design methods are increasingly being used in industry
as a tool to boost testing throughput or reduce experimentation cost relative
to traditional A/B/N testing methods. This paper shares lessons learned
regarding the challenges and pitfalls of naively using adaptive experimentation
systems in industrial settings where non-stationarity is prevalent, while also
providing perspectives on the proper objectives and system specifications in
these settings. We developed an adaptive experimental design framework for
counterfactual inference based on these experiences, and tested it in a
commercial environment.
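To make the idea concrete, below is a minimal sketch of the kind of design the abstract describes: adaptive traffic allocation that reserves a fixed randomized holdout so counterfactual (A/B/N-style) comparisons stay valid even as traffic shifts toward winning arms. The epsilon-greedy rule, function names, and split fraction are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate(n_users, successes, pulls, holdout_frac=0.2, eps=0.1):
    """Split traffic into a fixed randomized holdout (for unbiased
    counterfactual estimates) and adaptive epsilon-greedy traffic.
    Illustrative rule only, not the paper's exact algorithm."""
    k = len(pulls)
    n_holdout = int(holdout_frac * n_users)
    # Holdout users are assigned uniformly at random, as in A/B/N testing.
    holdout_arms = rng.integers(0, k, size=n_holdout)
    # Remaining users go mostly to the empirically best arm.
    rates = successes / np.maximum(pulls, 1)
    n_adaptive = n_users - n_holdout
    adaptive_arms = np.where(
        rng.random(n_adaptive) < eps,
        rng.integers(0, k, size=n_adaptive),
        int(np.argmax(rates)),
    )
    return holdout_arms, adaptive_arms

# Counterfactual inference uses only the holdout, which stays uniformly
# randomized for the whole experiment and so is robust to non-stationarity
# induced by the adaptive traffic.
```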
Related papers
- AExGym: Benchmarks and Environments for Adaptive Experimentation [7.948144726705323]
We present a benchmark for adaptive experimentation based on real-world datasets.
We highlight prominent practical challenges to operationalizing adaptivity: non-stationarity, batched/delayed feedback, multiple outcomes and objectives, and external validity.
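Two of those challenges, non-stationarity and batched/delayed feedback, can be made concrete with a toy environment. The sketch below is a hypothetical batched bandit loop with drifting arm means and one-batch-late rewards; it is not AExGym's actual API.

```python
import numpy as np

rng = np.random.default_rng(1)

class BatchedDriftingBandit:
    """Toy environment: arm means drift each batch (non-stationarity) and
    rewards arrive one batch late (delayed feedback). Hypothetical API."""

    def __init__(self, k=3, drift=0.02):
        self.means = rng.random(k)
        self.drift = drift
        self.pending = None  # rewards not yet revealed

    def step(self, arms):
        revealed = self.pending  # last batch's rewards, if any
        self.pending = (arms, rng.normal(self.means[arms], 1.0))
        self.means += rng.normal(0, self.drift, size=self.means.size)
        return revealed

env = BatchedDriftingBandit()
for batch in range(10):
    arms = rng.integers(0, 3, size=100)  # placeholder allocation policy
    feedback = env.step(arms)
    if feedback is not None:
        arms_obs, rewards = feedback  # update the policy here
```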
arXiv Detail & Related papers (2024-08-08T15:32:12Z)
- Dual Test-time Training for Out-of-distribution Recommender System [91.15209066874694]
We propose a novel Dual Test-Time-Training framework for OOD Recommendation, termed DT3OR.
In DT3OR, we incorporate a model adaptation mechanism during the test-time phase to carefully update the recommendation model.
To the best of our knowledge, this paper is the first work to address OOD recommendation via a test-time-training strategy.
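For readers unfamiliar with test-time training, a generic version of the pattern is sketched below: take a few gradient steps on a self-supervised loss computed on the incoming test batch before producing recommendations. The helper names (`model`, `ssl_loss_fn`) are placeholders, and DT3OR's actual adaptation mechanism and losses differ.

```python
import torch

def test_time_adapt(model, test_batch, ssl_loss_fn, steps=1, lr=1e-4):
    """Generic test-time-training loop: briefly adapt the model on the
    test batch via a self-supervised loss, then predict. Illustrative
    only; not DT3OR's specific mechanism."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = ssl_loss_fn(model, test_batch)  # e.g. a contrastive objective
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return model(test_batch)
```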
arXiv Detail & Related papers (2024-07-22T13:27:51Z)
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
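A randomized encouragement acts as an instrument: it shifts treatment uptake without directly affecting the outcome. A standard estimator in this setting is two-stage least squares; the numpy sketch below is a generic illustration of that estimator, not the CPET-LB algorithm itself.

```python
import numpy as np

def two_stage_least_squares(z, t, y):
    """z: randomized encouragement (instrument), t: treatment actually
    taken, y: outcome. Returns the IV estimate of the treatment effect.
    Generic 2SLS for a single instrument; not the paper's method."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: predict treatment uptake from the encouragement.
    t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]
    # Stage 2: regress the outcome on predicted uptake.
    X = np.column_stack([np.ones_like(t_hat), t_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]  # coefficient on treatment
```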
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Best of Three Worlds: Adaptive Experimentation for Digital Marketing in Practice [22.231579321645878]
Adaptive experimental design (AED) methods are increasingly being used in industry as a tool to boost testing throughput or reduce experimentation cost.
This paper shares lessons learned regarding the challenges of naively using AED systems in industrial settings where non-stationarity is prevalent.
We developed an AED framework for counterfactual inference based on these experiences, and tested it in a commercial environment.
arXiv Detail & Related papers (2024-02-16T18:13:35Z)
- Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z)
- Opportunities for Adaptive Experiments to Enable Continuous Improvement in Computer Science Education [7.50867730317249]
In adaptive experiments, data is analyzed and utilized as different conditions are deployed to students.
These algorithms can then dynamically deploy the most effective conditions in subsequent interactions with students.
This work paves the way for exploring the importance of adaptive experiments in bridging research and practice to achieve continuous improvement.
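One common algorithm behind this kind of dynamic deployment is Thompson sampling. The Beta-Bernoulli sketch below (binary outcomes, e.g. problem completion) is a textbook illustration, not the specific system used in these studies.

```python
import numpy as np

rng = np.random.default_rng(2)

def thompson_assign(successes, failures):
    """Sample a success rate for each condition from its Beta posterior
    and deploy the condition with the highest draw."""
    draws = rng.beta(successes + 1, failures + 1)
    return int(np.argmax(draws))

successes = np.zeros(3)
failures = np.zeros(3)
for student in range(1000):
    arm = thompson_assign(successes, failures)
    outcome = rng.random() < [0.3, 0.5, 0.4][arm]  # simulated response
    successes[arm] += outcome
    failures[arm] += 1 - outcome
```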
arXiv Detail & Related papers (2023-10-18T20:54:59Z)
- Task-specific experimental design for treatment effect estimation [59.879567967089145]
Large randomised controlled trials (RCTs) are the standard for causal inference.
Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought.
We develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications.
arXiv Detail & Related papers (2023-06-08T18:10:37Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
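For reference, the quantity being bounded is the expected information gain of a design $d$. The display below gives the standard definition together with the classical Barber-Agakov lower bound, where $q_\phi(\theta \mid y, d)$ is a learned approximate posterior; this is the general bound family such methods optimize, not necessarily this paper's specific bound.

```latex
\mathrm{EIG}(d)
  = \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}
    \left[ \log \frac{p(y \mid \theta, d)}{p(y \mid d)} \right]
  \;\ge\; \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}
    \left[ \log q_\phi(\theta \mid y, d) \right] + H\!\left[ p(\theta) \right]
```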
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
- Sequential Bayesian experimental designs via reinforcement learning [0.0]
We propose a new approach, Sequential Experimental Design via Reinforcement Learning, that constructs Bayesian experimental designs (BED) in a sequential manner.
By proposing a new real-world-oriented experimental environment, our approach aims to maximize the expected information gain.
It is confirmed that our method outperforms the existing methods in various indices such as the EIG and sampling efficiency.
arXiv Detail & Related papers (2022-02-14T04:29:04Z)
- Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods [24.50829695870901]
Implicit Deep Adaptive Design (iDAD) is a new method for performing adaptive experiments in real time with implicit models.
iDAD amortizes the cost of Bayesian optimal experimental design (BOED) by learning a design policy network upfront.
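The core object such a method learns is a policy network mapping the experiment history directly to the next design, so no posterior inference is needed at deployment time. The PyTorch sketch below shows the shape of such a policy; the architecture and the likelihood-free training objective are simplified assumptions, not iDAD's actual networks.

```python
import torch
import torch.nn as nn

class DesignPolicy(nn.Module):
    """Maps a history of (design, outcome) pairs to the next design.
    Illustrative architecture only; iDAD's networks and training
    objectives are more involved."""

    def __init__(self, design_dim=1, outcome_dim=1, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(design_dim + outcome_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, design_dim)

    def forward(self, designs, outcomes):
        history = torch.cat([designs, outcomes], dim=-1)  # (B, T, d+o)
        _, h = self.encoder(history)
        return self.head(h[-1])  # next design from the final hidden state

# At deployment, choosing the next design is a single forward pass;
# this is what amortizing the cost of BOED buys.
policy = DesignPolicy()
next_design = policy(torch.zeros(1, 5, 1), torch.zeros(1, 5, 1))
```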
arXiv Detail & Related papers (2021-11-03T16:24:05Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.