Conducting A/B Experiments with a Scalable Architecture
- URL: http://arxiv.org/abs/2309.13450v1
- Date: Sat, 23 Sep 2023 18:38:28 GMT
- Title: Conducting A/B Experiments with a Scalable Architecture
- Authors: Andrew Hornback, Sungeun An, Scott Bunin, Stephen Buckley, John Kos,
Ashok Goel
- Abstract summary: A/B experiments are commonly used in research to compare the effects of changing one or more variables in two different experimental groups.
We propose a four-principle approach for developing a software architecture to support A/B experiments that is domain agnostic.
- Score: 0.6990493129893112
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A/B experiments are commonly used in research to compare the effects of
changing one or more variables in two different experimental groups - a control
group and a treatment group. While the benefits of using A/B experiments are
widely known and accepted, there is less agreement on a principled approach to
creating software infrastructure systems to assist in rapidly conducting such
experiments. We propose a four-principle approach for developing a software
architecture to support A/B experiments that is domain agnostic and can help
reduce some of the resources currently needed to successfully implement
these experiments: the software architecture must (i) retain the typical
properties of A/B experiments, (ii) capture problem-solving activities and
outcomes, (iii) allow researchers to understand the behavior and outcomes of
participants in the experiment, and (iv) enable automated analysis. We
successfully developed a software system that encapsulates these principles
and implemented it in a real-world A/B experiment.
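The four principles lend themselves to a compact service interface. Below is
a minimal sketch in Python, assuming hash-based arm assignment and an
in-memory event log; the class and method names are illustrative assumptions
and are not taken from the paper's implementation.

# Minimal sketch (illustrative only): a domain-agnostic A/B experiment
# service reflecting principles (i)-(iv) from the abstract above. Names and
# structure are assumptions, not the authors' implementation.
import hashlib
import json
from collections import defaultdict
from datetime import datetime, timezone

class ABExperiment:
    def __init__(self, name, arms=("control", "treatment")):
        self.name = name
        self.arms = arms
        self.events = []  # (ii) captured problem-solving activities and outcomes

    def assign(self, participant_id):
        # (i) retain typical A/B properties: a stable, unbiased two-arm split
        digest = hashlib.sha256(f"{self.name}:{participant_id}".encode()).hexdigest()
        return self.arms[int(digest, 16) % len(self.arms)]

    def log_event(self, participant_id, activity, outcome=None):
        # (ii) record what a participant did and what resulted
        self.events.append({
            "participant": participant_id,
            "arm": self.assign(participant_id),
            "activity": activity,
            "outcome": outcome,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def participant_view(self, participant_id):
        # (iii) let researchers inspect one participant's behavior and outcomes
        return [e for e in self.events if e["participant"] == participant_id]

    def export_for_analysis(self):
        # (iv) enable automated analysis: group recorded outcomes by arm
        by_arm = defaultdict(list)
        for e in self.events:
            if e["outcome"] is not None:
                by_arm[e["arm"]].append(e["outcome"])
        return json.dumps(by_arm)

In such a setup, a researcher would call assign() when a participant enrolls,
log_event() during problem-solving activities, and export_for_analysis() to
feed grouped outcomes into an automated statistical test.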
Related papers
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z) - MLXP: A Framework for Conducting Replicable Experiments in Python [63.37350735954699]
We propose MLXP, an open-source, simple, and lightweight experiment management tool based on Python.
It streamlines the experimental process with minimal practitioner overhead while ensuring a high level of reproducibility.
arXiv Detail & Related papers (2024-02-21T14:22:20Z) - Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z) - Opportunities for Adaptive Experiments to Enable Continuous Improvement in Computer Science Education [7.50867730317249]
In adaptive experiments, data is analyzed and utilized as different conditions are deployed to students.
These algorithms can then dynamically deploy the most effective conditions in subsequent interactions with students.
This work paves the way for exploring the importance of adaptive experiments in bridging research and practice to achieve continuous improvement.
arXiv Detail & Related papers (2023-10-18T20:54:59Z) - PyExperimenter: Easily distribute experiments and track results [63.871474825689134]
PyExperimenter is a tool to facilitate the setup, documentation, execution, and subsequent evaluation of results from an empirical study of algorithms.
It is intended to be used by researchers in the field of artificial intelligence, but its use is not limited to that field.
arXiv Detail & Related papers (2023-01-16T10:43:02Z) - Adaptive Experimental Design and Counterfactual Inference [20.666734673282495]
This paper shares lessons learned regarding the challenges and pitfalls of naively using adaptive experimentation systems in industrial settings.
We developed an adaptive experimental design framework for counterfactual inference based on these experiences.
arXiv Detail & Related papers (2022-10-25T22:29:16Z) - Experiments as Code: A Concept for Reproducible, Auditable, Debuggable,
Reusable, & Scalable Experiments [7.557948558412152]
A common concern in experimental research is the auditability and reproducibility of experiments.
We propose the "Experiments as Code" paradigm, in which the whole experiment is not only documented but the automation code is provided as well.
arXiv Detail & Related papers (2022-02-24T12:15:00Z) - Sequential Bayesian experimental designs via reinforcement learning [0.0]
We provide a new approach, Sequential Experimental Design via Reinforcement Learning, to construct Bayesian experimental designs (BED) in a sequential manner.
By proposing a new real-world-oriented experimental environment, our approach aims to maximize the expected information gain (EIG); a standard definition of the EIG is given after this list.
It is confirmed that our method outperforms existing methods on various indices such as the EIG and sampling efficiency.
arXiv Detail & Related papers (2022-02-14T04:29:04Z) - Reinforcement Learning based Sequential Batch-sampling for Bayesian
Optimal Experimental Design [1.6249267147413522]
Sequential design of experiments (SDOE) is a popular suite of methods that has yielded promising results in recent years.
In this work, we aim to extend the SDOE strategy to query the experiment or computer code at a batch of inputs.
A unique capability of the proposed methodology is its ability to be applied to multiple tasks, for example the optimization of a function, once it is trained.
arXiv Detail & Related papers (2021-12-21T02:25:23Z) - On Inductive Biases for Heterogeneous Treatment Effect Estimation [91.3755431537592]
We investigate how to exploit structural similarities of an individual's potential outcomes (POs) under different treatments.
We compare three end-to-end learning strategies to overcome this problem.
arXiv Detail & Related papers (2021-06-07T16:30:46Z) - Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement
Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
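For reference, the expected information gain (EIG) used in the Bayesian experimental design entries above has the following standard definition; this is a generic formulation and is not taken from any one of the listed papers. For a design $d$, observed outcome $y$, and model parameters $\theta$,
$$
\mathrm{EIG}(d) \;=\; \mathbb{E}_{p(y \mid d)}\big[\, H[p(\theta)] - H[p(\theta \mid y, d)] \,\big]
\;=\; \mathbb{E}_{p(\theta)\, p(y \mid \theta, d)}\!\left[ \log \frac{p(y \mid \theta, d)}{p(y \mid d)} \right],
$$
where $H$ denotes Shannon entropy. Maximizing the EIG over candidate designs selects the experiment expected to be most informative about $\theta$.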