Algorithmic Content Selection and the Impact of User Disengagement
- URL: http://arxiv.org/abs/2410.13108v2
- Date: Wed, 19 Feb 2025 22:50:47 GMT
- Title: Algorithmic Content Selection and the Impact of User Disengagement
- Authors: Emilio Calvano, Nika Haghtalab, Ellen Vitercik, Eric Zhao
- Abstract summary: Digital services face a fundamental trade-off in content selection.
They must balance the immediate revenue gained from high-reward content against the long-term benefits of maintaining user engagement.
- Score: 19.14804091327051
- License:
- Abstract: Digital services face a fundamental trade-off in content selection: they must balance the immediate revenue gained from high-reward content against the long-term benefits of maintaining user engagement. Traditional multi-armed bandit models assume that users remain perpetually engaged, failing to capture the possibility that users may disengage when dissatisfied, thereby reducing future revenue potential. In this work, we introduce a model for the content selection problem that explicitly accounts for variable user engagement and disengagement. In our framework, content that maximizes immediate reward is not necessarily optimal in terms of fostering sustained user engagement. Our contributions are twofold. First, we develop computational and statistical methods for offline optimization and online learning of content selection policies. For users whose engagement patterns are defined by $k$ distinct levels, we design a dynamic programming algorithm that computes the exact optimal policy in $O(k^2)$ time. Moreover, we derive no-regret learning guarantees for an online learning setting in which the platform serves a series of users with unknown and potentially adversarial engagement patterns. Second, we introduce the concept of modified demand elasticity which captures how small changes in a user's overall satisfaction affect the platform's ability to secure long-term revenue. This notion generalizes classical demand elasticity by incorporating the dynamics of user re-engagement, thereby revealing key insights into the interplay between engagement and revenue. Notably, our analysis uncovers a counterintuitive phenomenon: although higher friction (i.e., a reduced likelihood of re-engagement) typically lowers overall revenue, it can simultaneously lead to higher user engagement under optimal content selection policies.
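The abstract states the existence of an $O(k^2)$ dynamic program but does not spell out the underlying model, so the following is only a minimal sketch under assumed dynamics: engagement levels $1,\dots,k$ with level 0 as an absorbing "disengaged" state, a hypothetical two-item content menu trading immediate reward against satisfaction probability, and generic finite-horizon backward induction in place of the paper's algorithm. The content menu, transition rule, and `plan` function below are illustrative assumptions, not the authors' method.

```python
# Illustrative only: a toy finite-horizon model over hypothetical engagement
# levels. The content menu and transition rule are assumptions made for this
# sketch, not the model or the O(k^2) dynamic program from the paper.

# Hypothetical content menu: name -> (immediate_reward, satisfaction_probability).
# "greedy" content pays more now but is more likely to dissatisfy the user.
CONTENT = {
    "greedy":   (1.0, 0.55),
    "engaging": (0.4, 0.95),
}

def plan(k: int, horizon: int):
    """Backward induction over engagement levels 0..k (0 = disengaged).

    Assumed dynamics: serving content at level i >= 1 earns its reward; with
    the content's satisfaction probability the user stays at level i, and
    otherwise drops to level i - 1. Level 0 is absorbing and earns nothing.
    Returns the value table V[t][level] and a greedy policy pi[t][level].
    """
    V = [[0.0] * (k + 1) for _ in range(horizon + 1)]
    pi = [[None] * (k + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for level in range(1, k + 1):
            best_value, best_action = float("-inf"), None
            for name, (reward, p_sat) in CONTENT.items():
                value = (reward
                         + p_sat * V[t + 1][level]
                         + (1.0 - p_sat) * V[t + 1][level - 1])
                if value > best_value:
                    best_value, best_action = value, name
            V[t][level] = best_value
            pi[t][level] = best_action
    return V, pi

if __name__ == "__main__":
    V, pi = plan(k=3, horizon=20)
    print("values at t=0:", [round(v, 2) for v in V[0]])
    print("policy at t=0:", pi[0][1:])  # chosen content for levels 1..k
```

Even in this toy version, the myopically best ("greedy") choice need not maximize long-run value once dropping an engagement level erodes future revenue, which is the trade-off the abstract describes.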
Related papers
- Unveiling User Satisfaction and Creator Productivity Trade-Offs in Recommendation Platforms [68.51708490104687]
We show that a purely relevance-driven policy with low exploration strength boosts short-term user satisfaction but undermines the long-term richness of the content pool.
Our findings reveal a fundamental trade-off between immediate user satisfaction and overall content production on platforms.
arXiv Detail & Related papers (2024-10-31T07:19:22Z) - User Welfare Optimization in Recommender Systems with Competing Content Creators [65.25721571688369]
In this study, we perform system-side user welfare optimization under a competitive game setting among content creators.
We propose an algorithmic solution for the platform, which dynamically computes a sequence of weights for each user based on their satisfaction with the recommended content.
These weights are then utilized to design mechanisms that adjust the recommendation policy or the post-recommendation rewards, thereby influencing creators' content production strategies.
arXiv Detail & Related papers (2024-04-28T21:09:52Z) - Ad-load Balancing via Off-policy Learning in a Content Marketplace [9.783697404304025]
Ad-load balancing is a critical challenge in online advertising systems, particularly in the context of social media platforms.
Traditional approaches to ad-load balancing rely on static allocation policies, which fail to adapt to changing user preferences and contextual factors.
We present an approach that leverages off-policy learning and evaluation from logged bandit feedback.
arXiv Detail & Related papers (2023-09-19T09:17:07Z) - Online Learning in a Creator Economy [91.55437924091844]
We study the creator economy as a three-party game between the users, platform, and content creators.
We analyze two families of contracts: return-based contracts and feature-based contracts.
We show that under smoothness assumptions, the joint optimization of return-based contracts and the recommendation policy admits a regret guarantee.
arXiv Detail & Related papers (2023-05-19T01:58:13Z) - Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model [50.06663781566795]
We consider a dynamic model in which consumers' preferences and price sensitivity vary over time.
We measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance.
Our regret analysis results not only demonstrate optimality of the proposed policy but also show that for policy planning it is essential to incorporate available structural information.
arXiv Detail & Related papers (2023-03-28T00:23:23Z) - How Bad is Top-$K$ Recommendation under Competing Content Creators? [43.2268992294178]
We study the user welfare guarantee through the lens of Price of Anarchy.
We show that the fraction of user welfare loss due to creator competition is always upper bounded by a small constant depending on $K$ and randomness in user decisions.
arXiv Detail & Related papers (2023-02-03T19:37:35Z) - Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users receive significantly lower-quality recommendations than head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z) - Reliable Decision from Multiple Subtasks through Threshold Optimization: Content Moderation in the Wild [7.176020195419459]
Social media platforms struggle to protect users from harmful content through content moderation.
These platforms have recently leveraged machine learning models to cope with the vast amount of user-generated content produced daily.
Third-party content moderation services provide prediction scores of multiple subtasks, such as predicting the existence of underage personnel, rude gestures, or weapons.
We introduce a simple yet effective threshold optimization method that searches for the optimal thresholds of the multiple subtasks to make a reliable moderation decision in a cost-effective way (a rough illustrative sketch of per-subtask threshold search follows this list).
arXiv Detail & Related papers (2022-08-16T03:51:43Z) - Online Learning Demands in Max-min Fairness [91.37280766977923]
We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof.
The mechanism is repeated for multiple rounds and a user's requirements can change on each round.
At the end of each round, users provide feedback about the allocation they received, enabling the mechanism to learn user preferences over time.
arXiv Detail & Related papers (2020-12-15T22:15:20Z) - Maximizing Cumulative User Engagement in Sequential Recommendation: An Online Optimization Perspective [26.18096797120916]
It is often necessary to trade off two potentially conflicting objectives: pursuing higher immediate user engagement and encouraging longer user browsing.
We propose a flexible and practical framework to explicitly trade off longer user browsing length against high immediate user engagement.
Deployed at a large e-commerce platform, this approach achieved an improvement of over 7% in cumulative clicks.
arXiv Detail & Related papers (2020-06-02T09:02:51Z)
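As a rough, non-authoritative illustration of the threshold-optimization idea in the "Reliable Decision from Multiple Subtasks" entry above: given per-subtask prediction scores, one can grid-search per-subtask thresholds that minimize an assumed cost of moderation errors. The subtask names, synthetic data, cost weights, and grid search below are assumptions for this sketch, not that paper's procedure.

```python
# Illustrative grid search over per-subtask moderation thresholds.
# All data and cost weights here are synthetic placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(0)
SUBTASKS = ["underage", "rude_gesture", "weapon"]   # hypothetical subtasks

# Synthetic validation set: per-subtask scores in [0, 1] plus a binary label
# saying whether the item should have been moderated.
scores = rng.random((500, len(SUBTASKS)))
labels = rng.integers(0, 2, size=500)

FN_COST, FP_COST = 5.0, 1.0   # assumed relative costs of misses vs. false flags

def moderation_cost(thresholds):
    """Flag an item if any subtask score meets its threshold; return total error cost."""
    flagged = (scores >= np.asarray(thresholds)).any(axis=1)
    false_neg = np.sum((labels == 1) & ~flagged)
    false_pos = np.sum((labels == 0) & flagged)
    return FN_COST * false_neg + FP_COST * false_pos

grid = np.linspace(0.1, 0.9, 9)
best = min(itertools.product(grid, repeat=len(SUBTASKS)), key=moderation_cost)
print("thresholds:", dict(zip(SUBTASKS, np.round(best, 2))),
      "| cost:", moderation_cost(best))
```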
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.