Modeling Content Creator Incentives on Algorithm-Curated Platforms
- URL: http://arxiv.org/abs/2206.13102v2
- Date: Thu, 6 Jul 2023 07:24:25 GMT
- Title: Modeling Content Creator Incentives on Algorithm-Curated Platforms
- Authors: Jiri Hron, Karl Krauth, Michael I. Jordan, Niki Kilbertus, Sarah Dean
- Abstract summary: We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Content creators compete for user attention. Their reach crucially depends on
algorithmic choices made by developers on online platforms. To maximize
exposure, many creators adapt strategically, as evidenced by examples like the
sprawling search engine optimization industry. This begets competition for the
finite user attention pool. We formalize these dynamics in what we call an
exposure game, a model of incentives induced by algorithms, including modern
factorization and (deep) two-tower architectures. We prove that seemingly
innocuous algorithmic choices, e.g., non-negative vs. unconstrained
factorization, significantly affect the existence and character of (Nash)
equilibria in exposure games. We proffer use of creator behavior models, like
exposure games, for an (ex-ante) pre-deployment audit. Such an audit can
identify misalignment between desirable and incentivized content, and thus
complement post-hoc measures like content filtering and moderation. To this
end, we propose tools for numerically finding equilibria in exposure games, and
illustrate results of an audit on the MovieLens and LastFM datasets. Among
other findings, strategically produced content exhibits strong
dependence between algorithmic exploration and content diversity, and between
model expressivity and bias towards gender-based user and creator groups.
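The paper's exposure-game model and equilibrium-finding tools are not reproduced here. As a rough illustration of the idea only, the sketch below assumes a softmax exposure model over factorized user/creator embeddings and runs naive best-response dynamics with finite-difference gradients. The function names, the softmax choice model, and the unit-norm strategy space are illustrative assumptions, not the paper's method.

```python
import numpy as np

def exposures(U, C, temp=1.0):
    """Total exposure per creator under an assumed softmax user-choice model.

    U: (n_users, d) user embeddings; C: (n_creators, d) creator strategies."""
    scores = U @ C.T / temp                      # (n_users, n_creators) affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)            # each user's attention sums to 1
    return p.sum(axis=0)                         # exposure pooled over users

def best_response_dynamics(U, C, steps=100, lr=0.1, eps=1e-5):
    """Naive iterated best responses: each creator nudges its vector to raise
    its own exposure, constrained to the unit sphere. A fixed point is a
    candidate (pure) Nash equilibrium of this toy exposure game."""
    C = C.copy()
    for _ in range(steps):
        for i in range(C.shape[0]):
            g = np.zeros_like(C[i])
            for k in range(C.shape[1]):          # central finite differences
                Cp, Cm = C.copy(), C.copy()
                Cp[i, k] += eps
                Cm[i, k] -= eps
                g[k] = (exposures(U, Cp)[i] - exposures(U, Cm)[i]) / (2 * eps)
            C[i] += lr * g
            C[i] /= np.linalg.norm(C[i])         # project back to unit norm
    return C
```

In this toy setup, competing creators drift toward regions of embedding space dense with user attention; the paper studies when and how such dynamics converge under different (e.g., non-negative vs. unconstrained) factorizations.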
Related papers
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Can Probabilistic Feedback Drive User Impacts in Online Platforms? [26.052963782865294]
A common explanation for negative user impacts of content recommender systems is misalignment between the platform's objective and user welfare.
In this work, we show that misalignment in the platform's objective is not the only potential cause of unintended impacts on users.
The source of these user impacts is that different pieces of content may generate observable user reactions (feedback information) at different rates.
arXiv Detail & Related papers (2024-01-10T18:12:31Z)
- Matching of Users and Creators in Two-Sided Markets with Departures [0.6649753747542209]
We propose a model of content recommendation that focuses on the dynamics of user-content matching.
We show that a user-centric greedy algorithm that does not consider creator departures can result in arbitrarily poor total engagement.
We present two practical algorithms, one with performance guarantees under mild assumptions on user preferences, and another that tends to outperform algorithms that ignore two-sided departures in practice.
arXiv Detail & Related papers (2023-12-30T20:13:28Z)
- Incentivizing High-Quality Content in Online Recommender Systems [80.19930280144123]
We study the game between producers and analyze the content created at equilibrium.
We show that standard online learning algorithms, such as Hedge and EXP3, unfortunately incentivize producers to create low-quality content.
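The summary above names Hedge among the standard online learning algorithms studied. As a minimal, generic sketch of the Hedge (multiplicative weights) update itself, not of the paper's producer game, the following assumes a known loss matrix over rounds and actions:

```python
import numpy as np

def hedge_weights(losses, eta=0.5):
    """Run Hedge over a (T rounds x K actions) loss matrix.

    Returns the sequence of action distributions played each round."""
    T, K = losses.shape
    w = np.ones(K)
    dists = []
    for t in range(T):
        dists.append(w / w.sum())          # play the current normalized weights
        w = w * np.exp(-eta * losses[t])   # exponentially down-weight lossy actions
    return np.array(dists)
```

Because the algorithm concentrates weight on whatever currently yields low loss, producers facing such a learner can be rewarded for cheap, immediately clicked content, which is the incentive problem the paper analyzes.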
arXiv Detail & Related papers (2023-06-13T00:55:10Z)
- How Bad is Top-$K$ Recommendation under Competing Content Creators? [43.2268992294178]
We study the user welfare guarantee through the lens of Price of Anarchy.
We show that the fraction of user welfare loss due to creator competition is always upper bounded by a small constant depending on $K$ and randomness in user decisions.
arXiv Detail & Related papers (2023-02-03T19:37:35Z)
- Mathematical Framework for Online Social Media Auditing [5.384630221560811]
Social media platforms (SMPs) leverage algorithmic filtering (AF) as a means of selecting the content that constitutes a user's feed with the aim of maximizing their rewards.
Selectively choosing the content shown in a user's feed can exert influence, minor or major, on the user's decision-making.
We mathematically formalize this framework and use it to construct a data-driven statistical auditing procedure, with sample complexity guarantees, that keeps AF from deflecting users' beliefs over time.
arXiv Detail & Related papers (2022-09-12T19:04:14Z)
- Incentivizing Combinatorial Bandit Exploration [87.08827496301839]
Consider a bandit algorithm that recommends actions to self-interested users in a recommendation system.
Users are free to choose other actions and need to be incentivized to follow the algorithm's recommendations.
While the users prefer to exploit, the algorithm can incentivize them to explore by leveraging the information collected from the previous users.
arXiv Detail & Related papers (2022-06-01T13:46:25Z)
- Perceptual Score: What Data Modalities Does Your Model Perceive? [73.75255606437808]
We introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features.
We find that recent, more accurate multi-modal models for visual question-answering tend to perceive the visual data less than their predecessors.
Using the perceptual score also helps to analyze model biases by decomposing the score into data subset contributions.
arXiv Detail & Related papers (2021-10-27T12:19:56Z)
- Unsupervised Belief Representation Learning in Polarized Networks with Information-Theoretic Variational Graph Auto-Encoders [26.640917190618612]
We develop an unsupervised algorithm for belief representation learning in polarized networks.
It learns to project both users and content items (e.g., posts that represent user views) into an appropriate disentangled latent space.
The latent representation of users and content can then be used to quantify their ideological leaning and detect/predict their stances on issues.
arXiv Detail & Related papers (2021-10-01T04:35:01Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.