How Bad is Top-$K$ Recommendation under Competing Content Creators?
- URL: http://arxiv.org/abs/2302.01971v2
- Date: Tue, 2 May 2023 19:53:22 GMT
- Title: How Bad is Top-$K$ Recommendation under Competing Content Creators?
- Authors: Fan Yao, Chuanhao Li, Denis Nekipelov, Hongning Wang, Haifeng Xu
- Abstract summary: We study the user welfare guarantee through the lens of Price of Anarchy.
We show that the fraction of user welfare loss due to creator competition is always upper bounded by a small constant depending on $K$ and randomness in user decisions.
- Score: 43.2268992294178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Content creators compete for exposure on recommendation platforms, and such
strategic behavior leads to a dynamic shift over the content distribution.
However, how the creators' competition impacts user welfare and how the
relevance-driven recommendation influences the dynamics in the long run are
still largely unknown.
This work provides theoretical insights into these research questions. We
model the creators' competition under the assumptions that: 1) the platform
employs an innocuous top-$K$ recommendation policy; 2) user decisions follow
the Random Utility model; 3) content creators compete for user engagement and,
without knowing their utility function in hindsight, apply arbitrary no-regret
learning algorithms to update their strategies. We study the user welfare
guarantee through the lens of Price of Anarchy and show that the fraction of
user welfare loss due to creator competition is always upper bounded by a small
constant depending on $K$ and randomness in user decisions; we also prove the
tightness of this bound. Our result discloses an intrinsic merit of the myopic
approach to recommendation, i.e., relevance-driven matching performs
reasonably well in the long run, as long as users' decisions involve randomness
and the platform provides reasonably many alternatives to its users.
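To make the setup concrete, the sketch below simulates the three modeling assumptions with a full-information Hedge learner standing in for the "arbitrary no-regret learning algorithms", and reports an empirical analogue of the welfare-loss fraction $1 - W_{\text{long run}}/W_{\text{opt}}$ that the Price-of-Anarchy analysis bounds. All sizes and parameters, the finite per-creator strategy pools, the welfare proxy (expected relevance of the consumed item), and the random-search benchmark are illustrative assumptions, not the paper's construction.

```python
# Minimal simulation sketch of the competition model described in the abstract.
# Everything below (toy sizes, Hedge as the no-regret learner, the welfare proxy)
# is an illustrative assumption rather than the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_creators, dim = 50, 10, 5   # hypothetical toy sizes
n_strategies = 8                       # each creator picks from a finite content pool
K = 3                                  # top-K recommendation slots
beta = 1.0                             # inverse noise in user decisions (Random Utility model)
T, eta = 500, 0.05                     # learning rounds and Hedge step size

users = rng.normal(size=(n_users, dim))
strategy_pool = rng.normal(size=(n_creators, n_strategies, dim))

def match(content):
    """Top-K relevance matching followed by a softmax (RUM) user choice."""
    rel = users @ content.T                                # (n_users, n_creators)
    topk = np.argsort(rel, axis=1)[:, -K:]                 # innocuous top-K policy
    topk_rel = np.take_along_axis(rel, topk, axis=1)
    p = np.exp(beta * topk_rel)
    p /= p.sum(axis=1, keepdims=True)
    return topk, topk_rel, p

def user_welfare(content):
    """Proxy for user welfare: expected relevance of the item each user consumes."""
    _, topk_rel, p = match(content)
    return float((p * topk_rel).sum(axis=1).mean())

def creator_rewards(content):
    """Creator engagement: expected number of users choosing each creator."""
    topk, _, p = match(content)
    reward = np.zeros(n_creators)
    np.add.at(reward, topk.ravel(), p.ravel())
    return reward

weights = np.ones((n_creators, n_strategies))
welfare_trace = []
for t in range(T):
    probs = weights / weights.sum(axis=1, keepdims=True)
    choice = np.array([rng.choice(n_strategies, p=probs[i]) for i in range(n_creators)])
    content = strategy_pool[np.arange(n_creators), choice]
    welfare_trace.append(user_welfare(content))
    # Full-information Hedge update: score every strategy against the
    # opponents' realized play this round (one standard no-regret learner).
    for i in range(n_creators):
        cf = []
        for s in range(n_strategies):
            alt = content.copy()
            alt[i] = strategy_pool[i, s]
            cf.append(creator_rewards(alt)[i])
        # Rewards rescaled to [0, 1] and weights renormalized for numerical stability.
        weights[i] *= np.exp(eta * np.array(cf) / n_users)
        weights[i] /= weights[i].sum()

# Crude welfare benchmark via random search; it may underestimate the true optimum,
# so the printed fraction is only a rough empirical analogue of the PoA-style bound.
best = max(user_welfare(strategy_pool[np.arange(n_creators),
                                      rng.integers(n_strategies, size=n_creators)])
           for _ in range(2000))
long_run = np.mean(welfare_trace[-50:])
print(f"estimated welfare-loss fraction under competition: {1 - long_run / best:.3f}")
```

Raising K or lowering beta (noisier user choices) in this toy setup lets one probe the abstract's claim that more slots and more decision randomness shrink the welfare loss.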
Related papers
- Algorithmic Content Selection and the Impact of User Disengagement [19.14804091327051]
We introduce a model for the content selection problem where dissatisfied users may disengage.
We show that when each arm's expected reward and its effect on user satisfaction are linearly related, an optimal content selection policy can be computed efficiently.
arXiv Detail & Related papers (2024-10-17T00:43:06Z) - Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z) - Matching of Users and Creators in Two-Sided Markets with Departures [0.6649753747542209]
We propose a model of content recommendation that focuses on the dynamics of user-content matching.
We show that a user-centric greedy algorithm that does not consider creator departures can result in arbitrarily poor total engagement.
We present two practical algorithms, one with performance guarantees under mild assumptions on user preferences, and another that tends to outperform algorithms that ignore two-sided departures in practice.
arXiv Detail & Related papers (2023-12-30T20:13:28Z) - User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z) - Preferences Evolve And So Should Your Bandits: Bandits with Evolving States for Online Platforms [12.368291979686122]
We propose a model for learning with bandit feedback while accounting for deterministically evolving and unobservable states.
The workhorse applications of our model are learning for recommendation systems and learning for online ads.
arXiv Detail & Related papers (2023-07-21T15:43:32Z) - Online Learning in a Creator Economy [91.55437924091844]
We study the creator economy as a three-party game between the users, platform, and content creators.
We analyze two families of contracts: return-based contracts and feature-based contracts.
We show that under smoothness assumptions, the joint optimization of return-based contracts and recommendation policy yields a regret guarantee.
arXiv Detail & Related papers (2023-05-19T01:58:13Z) - Modeling Content Creator Incentives on Algorithm-Curated Platforms [76.53541575455978]
We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
arXiv Detail & Related papers (2022-06-27T08:16:59Z) - Competing Bandits: The Perils of Exploration Under Competition [99.68537519404727]
We study the interplay between exploration and competition on online platforms.
We find that stark competition induces firms to commit to a "greedy" bandit algorithm that leads to low welfare.
We investigate two channels for weakening the competition: relaxing the rationality of users and giving one firm a first-mover advantage.
arXiv Detail & Related papers (2020-07-20T14:19:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.