Fair Effect Attribution in Parallel Online Experiments
- URL: http://arxiv.org/abs/2210.08338v1
- Date: Sat, 15 Oct 2022 17:15:51 GMT
- Title: Fair Effect Attribution in Parallel Online Experiments
- Authors: Alexander Buchholz, Vito Bellini, Giuseppe Di Benedetto, Yannik Stein,
Matteo Ruffini, Fabian Moerchen
- Abstract summary: A/B tests serve the purpose of reliably identifying the effect of changes introduced in online services.
It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly.
Despite perfect randomization across groups, simultaneous experiments can interact with each other and negatively impact average population outcomes.
- Score: 57.13281584606437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A/B tests serve the purpose of reliably identifying the effect of changes
introduced in online services. It is common for online platforms to run a large
number of simultaneous experiments by splitting incoming user traffic randomly
into treatment and control groups. Despite perfect randomization between
different groups, simultaneous experiments can interact with each other and
create a negative impact on average population outcomes such as engagement
metrics. These are measured globally and monitored to protect overall user
experience. Therefore, it is crucial to measure these interaction effects and
attribute their overall impact in a fair way to the respective experimenters.
We suggest a method to measure and disentangle the effects of simultaneous
experiments via a cost-sharing scheme based on Shapley values. We also provide
a counterfactual perspective that predicts shared impact from conditional
average treatment effects, making use of causal inference techniques. We
illustrate our approach on real-world and synthetic data.
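The cost-sharing idea can be sketched with exact Shapley values: treat each experiment as a player and the measured joint metric impact of a subset of experiments as the coalition value. The function name and the two-experiment interaction numbers below are illustrative assumptions, not the paper's data or implementation.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions.

    players: list of experiment ids.
    v: callable mapping a frozenset of experiments to the measured
       joint effect (e.g. change in an engagement metric).
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Weight of coalition S in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy value function: experiments A and B each add +1.0 alone,
# but interact negatively (-0.6) when run together.
effects = {
    frozenset(): 0.0,
    frozenset({"A"}): 1.0,
    frozenset({"B"}): 1.0,
    frozenset({"A", "B"}): 1.4,  # 1.0 + 1.0 - 0.6 interaction
}
phi = shapley_values(["A", "B"], effects.__getitem__)
# Symmetric players split the joint effect fairly: 0.7 each.
```

The attributions sum to the total observed effect (efficiency), which is what makes the split a fair accounting of the interaction cost. Exact enumeration is exponential in the number of experiments; at scale one would use sampling approximations.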
Related papers
- A Simple Model to Estimate Sharing Effects in Social Networks [3.988614978933934]
We propose a simple Markov Decision Process (MDP)-based model describing user sharing behaviour in social networks.
We derive an unbiased estimator for treatment effects under this model, and demonstrate through reproducible synthetic experiments that it outperforms existing methods by a significant margin.
arXiv Detail & Related papers (2024-09-16T13:32:36Z)
- Causal Message Passing for Experiments with Unknown and General Network Interference [5.294604210205507]
We introduce a new framework to accommodate complex and unknown network interference.
Our framework, termed causal message-passing, is grounded in high-dimensional approximate message passing methodology.
We demonstrate the effectiveness of this approach across five numerical scenarios.
arXiv Detail & Related papers (2023-11-14T17:31:50Z)
- Clustering-based Imputation for Dropout Buyers in Large-scale Online Experimentation [4.753069295451989]
In online experimentation, appropriate metrics (e.g., purchase) provide strong evidence to support hypotheses and enhance the decision-making process.
In this work, we introduce the concept of dropout buyers and categorize users with incomplete metric values into two groups: visitors and dropout buyers.
For the analysis of incomplete metrics, we propose a clustering-based imputation method using $k$-nearest neighbors.
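A minimal NumPy sketch of the $k$-nearest-neighbour imputation idea: fill a user's missing metric with the average of the $k$ most similar complete rows. The function name is ours, and the clustering step (visitors vs. dropout buyers) from the paper is omitted.

```python
import numpy as np

def knn_impute(X, k=3):
    """Impute NaNs with the mean of the k nearest complete rows.

    Distance is Euclidean over the columns observed in the
    incomplete row. A sketch only: assumes at least k rows
    have no missing values.
    """
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for row in np.argwhere(np.isnan(X).any(axis=1)).ravel():
        obs = ~np.isnan(X[row])
        # Distance to each complete row on the observed columns.
        d = np.linalg.norm(complete[:, obs] - X[row, obs], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        X[row, ~obs] = nearest[:, ~obs].mean(axis=0)
    return X
```

Restricting the distance to observed columns lets every incomplete row be matched without a separate missingness model.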
arXiv Detail & Related papers (2022-09-09T01:05:53Z)
- Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions [48.98659895355356]
This paper investigates the cumulative effect of multiple fairness-enhancing interventions at different stages of the machine learning (ML) pipeline.
In aggregate, applying multiple interventions yields better fairness but lower utility than individual interventions.
On the downside, fairness-enhancing interventions can negatively impact different population groups, especially the privileged group.
arXiv Detail & Related papers (2022-02-08T09:20:58Z)
- Causal Inference Struggles with Agency on Online Platforms [32.81856583026165]
We conduct four large-scale within-study comparisons on Twitter aimed at assessing the effectiveness of observational studies derived from user self-selection.
Our results suggest that observational studies derived from user self-selection are a poor alternative to randomized experimentation on online platforms.
arXiv Detail & Related papers (2021-07-19T16:14:00Z)
- On Inductive Biases for Heterogeneous Treatment Effect Estimation [91.3755431537592]
We investigate how to exploit structural similarities of an individual's potential outcomes (POs) under different treatments.
We compare three end-to-end learning strategies to overcome this problem.
arXiv Detail & Related papers (2021-06-07T16:30:46Z)
- Policy design in experiments with unknown interference [0.0]
We study estimation and inference on policies with spillover effects.
Units are organized into a finite number of large clusters.
We provide strong theoretical guarantees and an implementation in a large-scale field experiment.
arXiv Detail & Related papers (2020-11-16T18:58:54Z)
- Enabling Counterfactual Survival Analysis with Balanced Representations [64.17342727357618]
Survival data are frequently encountered across diverse medical applications, e.g., drug development, risk profiling, and clinical trials.
We propose a theoretically grounded unified framework for counterfactual inference applicable to survival outcomes.
arXiv Detail & Related papers (2020-06-14T01:15:00Z)
- Almost-Matching-Exactly for Treatment Effect Estimation under Network Interference [73.23326654892963]
We propose a matching method that recovers direct treatment effects from randomized experiments where units are connected in an observed network.
Our method matches units almost exactly on counts of unique subgraphs within their neighborhood graphs.
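A rough sketch of the matching idea. The signature below uses the degree sequence of each unit's neighbourhood subgraph as a simplified stand-in for the paper's counts of unique subgraphs, and the toy estimator and function names are our own assumptions.

```python
from collections import defaultdict

def neighborhood_signature(adj, u):
    """Degree sequence of the subgraph induced on u's neighbours --
    a simplified proxy for counts of unique subgraphs."""
    nbrs = set(adj[u])
    return tuple(sorted(sum(1 for w in adj[v] if w in nbrs) for v in nbrs))

def matched_direct_effect(adj, treated, outcome):
    """Average treated-minus-control outcome within groups of units
    that share the same neighbourhood signature."""
    groups = defaultdict(lambda: {True: [], False: []})
    for u in adj:
        groups[neighborhood_signature(adj, u)][u in treated].append(outcome[u])
    diffs = [sum(g[True]) / len(g[True]) - sum(g[False]) / len(g[False])
             for g in groups.values() if g[True] and g[False]]
    return sum(diffs) / len(diffs)
```

Matching on local network structure keeps the comparison between units with similar exposure to interference, which is the intuition behind recovering the direct effect.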
arXiv Detail & Related papers (2020-03-02T15:21:20Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.