Differences-in-Neighbors for Network Interference in Experiments
- URL: http://arxiv.org/abs/2503.02271v1
- Date: Tue, 04 Mar 2025 04:40:12 GMT
- Title: Differences-in-Neighbors for Network Interference in Experiments
- Authors: Tianyi Peng, Naimeng Ye, Andrew Zheng
- Abstract summary: We propose a new estimator, dubbed Differences-in-Neighbors (DN), designed explicitly to mitigate network interference. Compared to DM estimators, DN achieves bias that is second order in the magnitude of the interference effect, while its variance is exponentially smaller than that of HT estimators. Empirical evaluations on a large-scale social network and a city-level ride-sharing simulator demonstrate DN's superior performance.
- Score: 5.079602839359523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Experiments in online platforms frequently suffer from network interference, in which a treatment applied to a given unit affects outcomes for other units connected via the platform. This SUTVA violation biases naive approaches to experiment design and estimation. A common solution is to reduce interference by clustering connected units, and randomizing treatments at the cluster level, typically followed by estimation using one of two extremes: either a simple difference-in-means (DM) estimator, which ignores remaining interference; or an unbiased Horvitz-Thompson (HT) estimator, which eliminates interference at great cost in variance. Even combined with clustered designs, this presents a limited set of achievable bias variance tradeoffs. We propose a new estimator, dubbed Differences-in-Neighbors (DN), designed explicitly to mitigate network interference. Compared to DM estimators, DN achieves bias second order in the magnitude of the interference effect, while its variance is exponentially smaller than that of HT estimators. When combined with clustered designs, DN offers improved bias-variance tradeoffs not achievable by existing approaches. Empirical evaluations on a large-scale social network and a city-level ride-sharing simulator demonstrate the superior performance of DN in experiments at practical scale.
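The bias-variance contrast described in the abstract can be illustrated with a toy simulation. Under a simple linear interference model on a ring network (an illustrative assumption; the paper's interference model and DN estimator are more general), the difference-in-means estimator recovers only the direct effect and misses spillovers, while a first-order neighbor-adjusted contrast in the spirit of DN also captures the spillover term. The model, the constants, and the `dn_like` estimator below are hypothetical simplifications, not the authors' exact formulas.

```python
import random

random.seed(0)

# Toy linear interference model on a ring of n units: each unit's outcome is
# tau * (own treatment) + gamma * (fraction of treated neighbors).
# Model and constants are illustrative assumptions, not the paper's.
n = 1000
tau, gamma = 1.0, 0.3          # direct effect and spillover strength
p = 0.5                        # Bernoulli treatment probability

z = [1 if random.random() < p else 0 for _ in range(n)]

def neighbors(i):
    return [(i - 1) % n, (i + 1) % n]

def outcome(i, assign):
    spill = sum(assign[j] for j in neighbors(i)) / 2.0
    return tau * assign[i] + gamma * spill

y = [outcome(i, z) for i in range(n)]

# Global average treatment effect (all-treated minus all-control) = tau + gamma.
gate = sum(outcome(i, [1] * n) - outcome(i, [0] * n) for i in range(n)) / n

# Difference-in-means: ignores interference, so under a Bernoulli design it
# targets only the direct effect tau and misses the spillover gamma.
treated = [yi for yi, zi in zip(y, z) if zi == 1]
control = [yi for yi, zi in zip(y, z) if zi == 0]
dm = sum(treated) / len(treated) - sum(control) / len(control)

# A first-order neighbor-adjusted contrast in the spirit of DN (hypothetical
# simplification): weight each outcome by the centered treatment indicators of
# the unit and its neighbors. In this linear model its expectation is
# tau + gamma, the full global effect.
def dn_like():
    total = 0.0
    for i in range(n):
        s = (z[i] - p) + sum(z[j] - p for j in neighbors(i))
        total += y[i] * s / (p * (1 - p))
    return total / n

print(f"GATE = {gate:.3f}, DM = {dm:.3f}, DN-like = {dn_like():.3f}")
```

In this sketch DM concentrates near the direct effect tau = 1.0, while the neighbor-adjusted contrast concentrates near the full effect tau + gamma = 1.3. A full Horvitz-Thompson estimator would also be unbiased here but, as the abstract notes, at a much greater cost in variance.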
Related papers
- Online Experimental Design With Estimation-Regret Trade-off Under Network Interference [7.080131271060764]
We introduce a unified interference-aware framework for online experimental design. Compared to existing studies, we extend the definition of arm space by utilizing the statistical concept of exposure mapping. We also propose an algorithmic implementation and discuss its generalization across different learning settings and network topology.
arXiv Detail & Related papers (2024-12-04T21:45:35Z)
- Kolmogorov-Smirnov GAN [52.36633001046723]
We propose a novel deep generative model, the Kolmogorov-Smirnov Generative Adversarial Network (KSGAN).
Unlike existing approaches, KSGAN formulates the learning process as a minimization of the Kolmogorov-Smirnov (KS) distance.
arXiv Detail & Related papers (2024-06-28T14:30:14Z)
- Estimating Treatment Effects under Recommender Interference: A Structured Neural Networks Approach [13.208141830901845]
We show that the standard difference-in-means estimator can lead to biased estimates due to recommender interference.
We propose a "recommender choice model" that describes which item gets exposed from a pool containing both treated and control items.
We show that the proposed estimator yields results comparable to the benchmark, whereas the standard difference-in-means estimator can exhibit significant bias and even produce reversed signs.
arXiv Detail & Related papers (2024-06-20T14:53:26Z)
- Correcting for Interference in Experiments: A Case Study at Douyin [9.586075896428177]
Interference is a ubiquitous problem in experiments conducted on two-sided content marketplaces, such as Douyin (China's analog of TikTok).
We introduce a novel Monte-Carlo estimator, based on "Differences-in-Qs" (DQ) techniques, which achieves bias that is second-order in the treatment effect, while remaining sample-efficient to estimate.
We implement our estimator on Douyin's experimentation platform, and in the process develop DQ into a truly "plug-and-play" estimator for interference in real-world settings.
arXiv Detail & Related papers (2023-05-04T04:30:30Z)
- Neighborhood Adaptive Estimators for Causal Inference under Network Interference [109.17155002599978]
We consider the violation of the classical no-interference assumption with units connected by a network. For tractability, we consider a known network that describes how interference may spread.
arXiv Detail & Related papers (2022-12-07T14:53:47Z)
- A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
arXiv Detail & Related papers (2021-12-07T01:23:20Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Two-Stage TMLE to Reduce Bias and Improve Efficiency in Cluster Randomized Trials [0.0]
Cluster randomized trials (CRTs) randomly assign an intervention to groups of individuals, and measure outcomes on individuals in those groups.
Findings are often missing for some individuals within clusters.
CRTs often randomize limited numbers of clusters, resulting in chance imbalances on baseline outcome predictors between arms.
arXiv Detail & Related papers (2021-06-29T21:47:30Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers settings such as fully decentralized computation.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Minimizing Interference and Selection Bias in Network Experiment Design [14.696233190562939]
We propose a principled framework for network experiment design which jointly minimizes interference and selection bias.
Our experiments on a number of real-world datasets show that our proposed framework leads to significantly lower error in causal effect estimation.
arXiv Detail & Related papers (2020-04-15T17:34:13Z)
- Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
The communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new distributed error feedback (DEF) algorithm, which shows better convergence than error feedback for non-efficient distributed problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better bounds than DEF.
arXiv Detail & Related papers (2020-04-11T03:50:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.