Disincentivizing Polarization in Social Networks
- URL: http://arxiv.org/abs/2305.14537v1
- Date: Tue, 23 May 2023 21:47:31 GMT
- Title: Disincentivizing Polarization in Social Networks
- Authors: Christian Borgs, Jennifer Chayes, Christian Ikeokwu, Ellen Vitercik
- Abstract summary: We present a model for content curation and personalization that avoids filter bubbles.
We provide algorithmic guarantees for optimizing recommendations.
Using real-world preference data, we verify that under our model, users share the burden of diversification with only minor utility loss.
- Score: 10.758115514959593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: On social networks, algorithmic personalization drives users into filter
bubbles where they rarely see content that deviates from their interests. We
present a model for content curation and personalization that avoids filter
bubbles, along with algorithmic guarantees and nearly matching lower bounds. In
our model, the platform interacts with $n$ users over $T$ timesteps, choosing
content for each user from $k$ categories. The platform receives stochastic
rewards as in a multi-armed bandit. To avoid filter bubbles, we draw on the
intuition that if some users are shown some category of content, then all users
should see at least a small amount of that content. We first analyze a naive
formalization of this intuition and show it has unintended consequences: it
leads to "tyranny of the majority" with the burden of diversification borne
disproportionately by those with minority interests. This leads us to our model
which distributes this burden more equitably. We require that the probability
any user is shown a particular type of content is at least $\gamma$ times the
average probability all users are shown that type of content. Full
personalization corresponds to $\gamma = 0$ and complete homogenization
corresponds to $\gamma = 1$; hence, $\gamma$ encodes a hard cap on the level of
personalization. We also analyze additional formulations where the platform can
exceed its cap but pays a penalty proportional to its constraint violation. We
provide algorithmic guarantees for optimizing recommendations subject to these
constraints. These include nearly matching upper and lower bounds for the
entire range of $\gamma \in [0,1]$ showing that the reward of a multi-agent
variant of UCB is nearly optimal. Using real-world preference data, we
empirically verify that under our model, users share the burden of
diversification with only minor utility loss under our constraints.
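In symbols, the hard cap requires $p_{i,c} \ge \gamma \cdot \frac{1}{n}\sum_{j=1}^{n} p_{j,c}$ for every user $i$ and category $c$, where $p_{i,c}$ is the probability that user $i$ is shown category $c$. The sketch below is only a rough illustration of this constraint inside a bandit loop, not the paper's algorithm: it mixes each user's UCB-greedy choice with the population-average choice, a simple construction that satisfies the cap by design. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def gamma_constrained_ucb(mu, T=5000, gamma=0.3, seed=0):
    """Toy multi-user UCB heuristic under the hard gamma-cap (illustrative only).

    mu: (n, k) array of true mean rewards, unknown to the algorithm.
    Each round, every user i receives a distribution p[i] over the k
    categories satisfying  p[i, c] >= gamma * mean_j p[j, c]  for all c.
    """
    rng = np.random.default_rng(seed)
    n, k = mu.shape
    counts = np.ones((n, k))                   # one initial pull per (user, arm)
    sums = rng.binomial(1, mu).astype(float)   # reward from that initial pull
    total_reward = 0.0
    for t in range(1, T + 1):
        ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
        # Point mass on each user's UCB-best category ...
        q = np.zeros((n, k))
        q[np.arange(n), ucb.argmax(axis=1)] = 1.0
        # ... mixed with the population average:
        #   p[i] = (1 - gamma) * q[i] + gamma * mean_j q[j]
        # Since mean_j p[j] = mean_j q[j], every entry of p[i] is at least
        # gamma times the corresponding population average, as required.
        p = (1.0 - gamma) * q + gamma * q.mean(axis=0, keepdims=True)
        arms = np.array([rng.choice(k, p=p[i]) for i in range(n)])
        rewards = rng.binomial(1, mu[np.arange(n), arms])
        counts[np.arange(n), arms] += 1
        sums[np.arange(n), arms] += rewards
        total_reward += rewards.sum()
    return total_reward / (n * T)   # average per-user, per-round reward

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mu = rng.uniform(0.1, 0.9, size=(20, 5))   # 20 users, 5 content categories
    for g in (0.0, 0.3, 1.0):
        print(f"gamma={g}: avg reward {gamma_constrained_ucb(mu, gamma=g):.3f}")
```

In this sketch, $\gamma = 0$ recovers fully personalized per-user UCB and $\gamma = 1$ shows every user the same distribution, matching the abstract's reading of $\gamma$ as a hard cap on the level of personalization.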
Related papers
- Privacy Amplification via Shuffling: Unified, Simplified, and Tightened [20.10078781197001]
We propose a comprehensive framework for privacy amplification in both single-message and multi-message shuffle protocols.
Our theoretical results demonstrate that our framework provides tighter bounds, especially for local randomizers with extremal probability design.
Our bounds also result in a remarkably efficient $\tilde{O}(n)$ algorithm that numerically amplifies privacy in less than $10$ seconds for $n=10^8$ users.
arXiv Detail & Related papers (2023-04-11T06:27:25Z) - (Private) Kernelized Bandits with Distributed Biased Feedback [13.312928989951505]
We study kernelized bandits with distributed biased feedback.
A new \emph{distributed phase-then-batch-based elimination} (\texttt{DPBE}) algorithm is proposed.
We show that \texttt{DPBE} achieves a sublinear regret of $\tilde{O}(T^{1-\alpha/2}+\sqrt{\gamma_T T})$, where $\alpha\in (0,1)$ is the user-sampling parameter one can tune.
arXiv Detail & Related papers (2023-01-28T02:30:15Z) - Modeling Attrition in Recommender Systems with Departing Bandits [84.85560764274399]
We propose a novel multi-armed bandit setup that captures policy-dependent horizons.
We first address the case where all users share the same type, demonstrating that a recent UCB-based algorithm is optimal.
We then turn to the more challenging case, where users are divided between two types.
arXiv Detail & Related papers (2022-03-25T02:30:54Z) - Coordinated Attacks against Contextual Bandits: Fundamental Limits and
Defense Mechanisms [75.17357040707347]
Motivated by online recommendation systems, we propose the problem of finding the optimal policy in contextual bandits.
The goal is to robustly learn the policy that maximizes rewards for good users with as few user interactions as possible.
We show we can achieve an $\tilde{O}(\min(S,A)\cdot \alpha/\epsilon^2)$ upper bound by employing efficient robust mean estimators.
arXiv Detail & Related papers (2022-01-30T01:45:13Z) - Linear Speedup in Personalized Collaborative Learning [69.45124829480106]
Personalization in federated learning can improve the accuracy of a model for a user by trading off the model's bias against its variance.
We formalize the personalized collaborative learning problem as optimization of a user's objective.
We explore conditions under which we can optimally trade off a user's bias for a reduction in variance.
arXiv Detail & Related papers (2021-11-10T22:12:52Z) - Tight and Robust Private Mean Estimation with Few Users [16.22135057266913]
We study high-dimensional mean estimation under user-level differential privacy.
We design an $(\epsilon,\delta)$-differentially private mechanism using as few users as possible.
arXiv Detail & Related papers (2021-10-22T16:02:21Z) - User-Level Private Learning via Correlated Sampling [49.453751858361265]
We consider the setting where each user holds $m$ samples and the privacy protection is enforced at the level of each user's data.
We show that, in this setting, we may learn with far fewer users.
arXiv Detail & Related papers (2021-10-21T15:33:53Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Learning discrete distributions: user vs item-level privacy [47.05234904407048]
Many practical applications, such as federated learning, require preserving privacy for all items of a single user.
We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy.
We propose a mechanism such that the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) + k/(\sqrt{m}\epsilon\alpha))$, and hence the privacy penalty is $\tilde{\Theta}(\sqrt{m})$ times smaller compared to the standard mechanisms.
arXiv Detail & Related papers (2020-07-27T16:15:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.