Automating Pipelines of A/B Tests with Population Split Using
Self-Adaptation and Machine Learning
- URL: http://arxiv.org/abs/2306.01407v2
- Date: Mon, 14 Aug 2023 06:51:19 GMT
- Title: Automating Pipelines of A/B Tests with Population Split Using
Self-Adaptation and Machine Learning
- Authors: Federico Quin, Danny Weyns
- Abstract summary: A/B testing is a common approach used in industry to facilitate innovation through the introduction of new features or the modification of existing software.
Traditionally, A/B tests are conducted sequentially, with each experiment targeting the entire population of the corresponding application, which can be time-consuming and costly when an experiment is not relevant to all users.
To tackle these problems, we introduce a new self-adaptive approach called AutoPABS that automates the execution of pipelines of A/B tests.
- Score: 10.635137352476246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A/B testing is a common approach used in industry to facilitate innovation
through the introduction of new features or the modification of existing
software. Traditionally, A/B tests are conducted sequentially, with each
experiment targeting the entire population of the corresponding application.
This approach can be time-consuming and costly, particularly when the
experiments are not relevant to the entire population. To tackle these
problems, we introduce a new self-adaptive approach called AutoPABS, short for
Automated Pipelines of A/B tests using Self-adaptation, which (1) automates
the execution of pipelines of A/B tests, and (2) supports splitting the
population within the pipeline, dividing users across multiple parallel A/B
tests according to user-based criteria, leveraging machine learning. We
started the evaluation
with a small survey to probe the appraisal of the notation and infrastructure
of AutoPABS. Then we performed a series of tests to measure the gains obtained
by applying a population split in an automated A/B testing pipeline, using an
extension of the SEAByTE artifact. The survey results show that the
participants acknowledge the usefulness of automating A/B testing pipelines
and population splits. The tests show that automatically executing pipelines
of A/B tests with a population split accelerates the identification of
statistically significant results in the experiments executed in parallel,
compared to a traditional approach that performs the experiments sequentially.
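A minimal sketch may help make the two ingredients named in the abstract concrete. This is not the AutoPABS implementation (the paper's artifact extends SEAByTE): the significance gate, the synthetic user features, and the split criterion below are all assumptions for illustration.

```python
# Minimal sketch (not the AutoPABS implementation): a two-stage A/B-test
# pipeline in which a trained classifier splits users into subgroups and
# the follow-up tests run on those subgroups in parallel.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def two_proportion_test(succ_a, n_a, succ_b, n_b):
    """p-value of a two-sided two-proportion z-test."""
    p_pool = (succ_a + succ_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (succ_b / n_b - succ_a / n_a) / se
    return 2 * norm.sf(abs(z))

# Stage 1: a classic A/B test over the whole population.
p1 = two_proportion_test(succ_a=480, n_a=5000, succ_b=560, n_b=5000)
assert p1 < 0.05, "the pipeline only advances on a significant result"

# Population split: a classifier trained on user features (synthetic here)
# assigns each user to the follow-up experiment relevant to them.
features = rng.normal(size=(10_000, 4))        # hypothetical user features
relevant = (features[:, 0] > 0).astype(int)    # hypothetical split criterion
splitter = LogisticRegression().fit(features, relevant)
branch = splitter.predict(features)            # 0 -> follow-up test 1, 1 -> test 2

# Stage 2: the two follow-up A/B tests run in parallel, each only on its
# own subgroup's traffic.
for b in (0, 1):
    print(f"branch {b}: {int((branch == b).sum())} users available for its A/B test")
```

The point of the split is that each follow-up experiment only consumes the traffic relevant to it, which is why parallel subgroup tests can reach significance faster than a sequential whole-population schedule.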
Related papers
- Automatic benchmarking of large multimodal models via iterative experiment programming [71.78089106671581]
We present APEx, the first framework for automatic benchmarking of LMMs.
Given a research question expressed in natural language, APEx leverages a large language model (LLM) and a library of pre-specified tools to generate a set of experiments for the model at hand.
The report drives the testing procedure: based on the current status of the investigation, APEx chooses which experiments to perform and whether the results are sufficient to draw conclusions.
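As a rough illustration of that report-driven loop (hypothetical: the stub below stands in for the LLM, and the tool names are invented):

```python
# Hypothetical control-flow sketch of an APEx-style loop; the stubbed
# chooser plays the LLM's role and the tool library is illustrative.
def run_experiment(tool, model):                 # stub: would invoke a real tool
    return {"tool": tool, "accuracy": 0.7}

def choose_next_experiment(question, report):    # stub: the LLM's role in APEx
    done = [r["tool"] for r in report]
    todo = [t for t in ("vqa_probe", "ocr_probe", "counting_probe") if t not in done]
    return todo[0] if todo else None             # None = evidence suffices

def apex_loop(question, model):
    report = []                                  # the report drives the procedure
    while (tool := choose_next_experiment(question, report)) is not None:
        report.append(run_experiment(tool, model))
    return report

print(apex_loop("Can the model count objects?", model=None))
```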
arXiv Detail & Related papers (2024-06-18T06:43:46Z)
- Deep anytime-valid hypothesis testing [29.273915933729057]
We propose a general framework for constructing powerful, sequential hypothesis tests for nonparametric testing problems.
We develop a principled approach of leveraging the representation capability of machine learning models within the testing-by-betting framework.
Empirical results on synthetic and real-world datasets demonstrate that tests instantiated using our general framework are competitive against specialized baselines.
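A minimal instance of testing-by-betting, assuming a simple coin-bias null rather than the paper's nonparametric setting (where a machine learning model plays the role of the plug-in bet below):

```python
# Generic testing-by-betting sketch (not the paper's learned test).
# H0: Bernoulli mean is 0.5. Wealth is a nonnegative martingale under H0,
# so by Ville's inequality stopping at wealth >= 1/alpha is level alpha.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
stream = rng.binomial(1, 0.6, size=5000)   # H0 is false for this stream

wealth, ones, seen = 1.0, 1, 2             # Laplace counts: first bet is neutral
for t, x in enumerate(stream, start=1):
    lam = np.clip(2 * (2 * ones / seen - 1), -1.8, 1.8)  # predictable plug-in bet
    wealth *= 1 + lam * (x - 0.5)          # fair-game payoff under H0
    ones, seen = ones + x, seen + 1
    if wealth >= 1 / alpha:                # anytime-valid rejection threshold
        print(f"reject H0 after {t} samples (wealth {wealth:.1f})")
        break
```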
arXiv Detail & Related papers (2023-10-30T09:46:19Z)
- A/B Testing: A Systematic Literature Review [10.222047656342493]
Single classic A/B tests are the dominant type of test.
The dominant uses of test results are feature selection, feature rollout, and continued feature development.
The main reported open problems are the enhancement of the proposed approaches and their usability.
arXiv Detail & Related papers (2023-08-09T12:55:51Z)
- Validation of massively-parallel adaptive testing using dynamic control matching [0.0]
Modern businesses often run many A/B/n tests in parallel and package many content variations into the same messages.
This paper presents a method for disentangling the causal effects of the various tests under conditions of continuous test adaptation.
arXiv Detail & Related papers (2023-05-02T11:28:12Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Robust Test-Time Adaptation in Dynamic Scenarios [9.475271284789969]
Test-time adaptation (TTA) aims to adapt a pretrained model to test distributions using only unlabeled test data streams.
We propose a Robust Test-Time Adaptation (RoTTA) method for the complex data streams of practical test-time adaptation (PTTA).
Our method is easy to implement, making it a good choice for rapid deployment.
arXiv Detail & Related papers (2023-03-24T10:19:14Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner.
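A hypothetical simplification of the memory component, in the spirit of PBRS but not the paper's exact algorithm:

```python
# Hypothetical class-balanced memory in the spirit of PBRS (simplified):
# when the memory is full, evict a random sample from the largest
# predicted class, so a non-i.i.d. stream looks class-balanced in memory.
import random
from collections import defaultdict

class BalancedMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buckets = defaultdict(list)           # predicted class -> samples

    def add(self, sample, pred_class):
        if sum(len(b) for b in self.buckets.values()) >= self.capacity:
            largest = max(self.buckets, key=lambda c: len(self.buckets[c]))
            self.buckets[largest].pop(random.randrange(len(self.buckets[largest])))
        self.buckets[pred_class].append(sample)

mem = BalancedMemory(capacity=8)
for i in range(100):                               # heavily skewed stream
    mem.add(sample=i, pred_class=0 if i % 10 else 1)
print({c: len(b) for c, b in mem.buckets.items()})  # roughly balanced counts
```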
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- AutoML Two-Sample Test [13.468660785510945]
We use a simple test that takes the mean discrepancy of a witness function as the test statistic and prove that minimizing a squared loss leads to a witness with optimal testing power.
We provide an implementation of the AutoML two-sample test in the Python package autotst.
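The witness construction is easy to sketch generically. The code below does not use the autotst API; it fits a squared-loss regressor as the witness and calibrates the held-out mean discrepancy with a permutation test:

```python
# Generic witness-based two-sample test (the paper's idea in sketch form;
# NOT the autotst API). The witness f is fit with squared loss to output
# +1 on sample P and -1 on sample Q; the statistic is the held-out mean
# discrepancy mean(f(X)) - mean(f(Y)).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(400, 5))
Y = rng.normal(0.3, 1.0, size=(400, 5))            # mean shift: H0 is false

# Fit the witness on one half of each sample (squared loss, labels +/-1).
train = np.vstack([X[:200], Y[:200]])
target = np.r_[np.ones(200), -np.ones(200)]
f = GradientBoostingRegressor().fit(train, target)

# Test statistic: held-out mean discrepancy of the witness.
stat = f.predict(X[200:]).mean() - f.predict(Y[200:]).mean()

# Calibrate by permuting the held-out pool (one-sided test).
vals = f.predict(np.vstack([X[200:], Y[200:]]))
perms = [rng.permutation(vals) for _ in range(500)]
p_value = np.mean([p[:200].mean() - p[200:].mean() >= stat for p in perms])
print(f"discrepancy {stat:.3f}, permutation p-value {p_value:.3f}")
```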
arXiv Detail & Related papers (2022-06-17T15:41:07Z)
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
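The Fisher regularizer admits a short generic sketch (EWC-style; the paper's exact weighting and the active selection criterion are not reproduced here):

```python
# Generic sketch of a Fisher-weighted anti-forgetting penalty (EWC-style;
# not the paper's exact formulation). Parameters that carried large squared
# gradients on source data are anchored, so adaptation cannot drift them.
import torch

def fisher_penalty(model, anchor, fisher, lam=1.0):
    """lam * sum_i F_i * (theta_i - theta_i^0)^2 over all parameters."""
    reg = sum(
        (fisher[n] * (p - anchor[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )
    return lam * reg

model = torch.nn.Linear(4, 2)
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

# Diagonal Fisher estimate: squared gradients of a source-data loss.
x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
torch.nn.functional.cross_entropy(model(x), y).backward()
fisher = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
model.zero_grad()

# Placeholder adaptation objective: prediction entropy on a test batch.
probs = model(torch.randn(8, 4)).softmax(dim=1)
entropy = -(probs * probs.log()).sum(dim=1).mean()
total = entropy + fisher_penalty(model, anchor, fisher, lam=10.0)
total.backward()   # important parameters are pulled back toward the anchor
```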
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
- Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design [63.48989885374238]
When the infection prevalence of a disease is low, Dorfman showed 80 years ago that testing groups of people can prove more efficient than testing people individually.
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting.
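Dorfman's noiseless argument fits in a few lines and shows why groups help at low prevalence (the paper's contribution is the noisy, Bayesian-adaptive setting):

```python
# Dorfman pooling, noiseless case: a pool of k samples is tested once, and
# only positive pools are retested individually, so the expected number of
# tests per person is 1/k + 1 - (1 - p)**k at prevalence p.
p = 0.01
best_k = min(range(2, 50), key=lambda k: 1 / k + 1 - (1 - p) ** k)
cost = 1 / best_k + 1 - (1 - p) ** best_k
print(f"p={p}: pool size {best_k} needs {cost:.3f} tests/person (vs 1.0)")
```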
arXiv Detail & Related papers (2020-04-26T23:41:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.