Best of Three Worlds: Adaptive Experimentation for Digital Marketing in
Practice
- URL: http://arxiv.org/abs/2402.10870v3
- Date: Mon, 26 Feb 2024 19:55:34 GMT
- Authors: Tanner Fiez, Houssam Nassif, Yu-Cheng Chen, Sergio Gamez, Lalit Jain
- Abstract summary: Adaptive experimental design (AED) methods are increasingly being used in industry as a tool to boost testing throughput or reduce experimentation cost.
This paper shares lessons learned regarding the challenges of naively using AED systems in industrial settings where non-stationarity is prevalent.
We developed an AED framework for counterfactual inference based on these experiences, and tested it in a commercial environment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptive experimental design (AED) methods are increasingly being used in
industry as a tool to boost testing throughput or reduce experimentation cost
relative to traditional A/B/N testing methods. However, the behavior and
guarantees of such methods are not well-understood beyond idealized stationary
settings. This paper shares lessons learned regarding the challenges of naively
using AED systems in industrial settings where non-stationarity is prevalent,
while also providing perspectives on the proper objectives and system
specifications in such settings. We developed an AED framework for
counterfactual inference based on these experiences, and tested it in a
commercial environment.
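The abstract contrasts adaptive experimental design with fixed-allocation A/B/N testing and flags non-stationarity as the key practical hazard. As a rough illustration of that idea (not the paper's actual framework), the sketch below runs Bernoulli Thompson sampling with a discount factor so that old evidence decays, a common generic remedy for drifting conversion rates; all names and parameter values here are illustrative assumptions.

```python
import random

def thompson_step(successes, failures):
    """Pick an arm by sampling each arm's plausible success rate
    from its Beta posterior and taking the argmax."""
    samples = [random.betavariate(s + 1.0, f + 1.0)
               for s, f in zip(successes, failures)]
    return samples.index(max(samples))

def run_bandit(true_rates, rounds=5000, discount=0.999, seed=0):
    """Adaptive allocation over `rounds` visits. Discounting past
    counts lets the posterior slowly forget, so the policy can
    re-adapt if the true rates drift (non-stationarity)."""
    random.seed(seed)
    k = len(true_rates)
    successes = [0.0] * k
    failures = [0.0] * k
    pulls = [0] * k
    for _ in range(rounds):
        arm = thompson_step(successes, failures)
        reward = 1 if random.random() < true_rates[arm] else 0
        # Decay all accumulated evidence before adding the new observation.
        for i in range(k):
            successes[i] *= discount
            failures[i] *= discount
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = run_bandit([0.05, 0.06, 0.10])
print(pulls)  # traffic should concentrate on the best arm over time
```

Unlike an A/B/N test, which splits traffic evenly for the whole experiment, this kind of policy shifts traffic toward the winning arm as evidence accumulates, which is where the throughput/cost savings the abstract mentions come from, and also where naive use can go wrong when rates are non-stationary.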
Related papers
- UDA-Bench: Revisiting Common Assumptions in Unsupervised Domain Adaptation Using a Standardized Framework [59.428668614618914]
We take a deeper look into the diverse factors that influence the efficacy of modern unsupervised domain adaptation (UDA) methods.
To facilitate our analysis, we first develop UDA-Bench, a novel PyTorch framework that standardizes training and evaluation for domain adaptation.
arXiv Detail & Related papers (2024-09-23T17:57:07Z)
- AExGym: Benchmarks and Environments for Adaptive Experimentation [7.948144726705323]
We present a benchmark for adaptive experimentation based on real-world datasets.
We highlight prominent practical challenges to operationalizing adaptivity: non-stationarity, batched/delayed feedback, multiple outcomes and objectives, and external validity.
arXiv Detail & Related papers (2024-08-08T15:32:12Z)
- Dual Test-time Training for Out-of-distribution Recommender System [91.15209066874694]
We propose a novel Dual Test-Time-Training framework for OOD Recommendation, termed DT3OR.
In DT3OR, we incorporate a model adaptation mechanism during the test-time phase to carefully update the recommendation model.
To the best of our knowledge, this paper is the first work to address OOD recommendation via a test-time-training strategy.
arXiv Detail & Related papers (2024-07-22T13:27:51Z)
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
- Diffusion-based Visual Counterfactual Explanations -- Towards Systematic Quantitative Evaluation [64.0476282000118]
Latest methods for visual counterfactual explanations (VCE) harness the power of deep generative models to synthesize new examples of high-dimensional images of impressive quality.
It is currently difficult to compare the performance of these VCE methods, as evaluation procedures vary widely and often boil down to visual inspection of individual examples and small-scale user studies.
We propose a framework for systematic, quantitative evaluation of the VCE methods and a minimal set of metrics to be used.
arXiv Detail & Related papers (2023-08-11T12:22:37Z)
- Adaptive Experimental Design and Counterfactual Inference [20.666734673282495]
This paper shares lessons learned regarding the challenges and pitfalls of naively using adaptive experimentation systems in industrial settings.
We developed an adaptive experimental design framework for counterfactual inference based on these experiences.
arXiv Detail & Related papers (2022-10-25T22:29:16Z)
- Sequential Bayesian experimental designs via reinforcement learning [0.0]
We provide a new approach, Sequential Experimental Design via Reinforcement Learning, to construct Bayesian experimental designs (BED) in a sequential manner.
By proposing a new real-world-oriented experimental environment, our approach aims to maximize the expected information gain.
It is confirmed that our method outperforms the existing methods in various indices such as the EIG and sampling efficiency.
arXiv Detail & Related papers (2022-02-14T04:29:04Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy for comparing a new product with an old one, used in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.