Optimizing Long-term Social Welfare in Recommender Systems: A
Constrained Matching Approach
- URL: http://arxiv.org/abs/2008.00104v2
- Date: Tue, 18 Aug 2020 20:57:28 GMT
- Title: Optimizing Long-term Social Welfare in Recommender Systems: A
Constrained Matching Approach
- Authors: Martin Mladenov, Elliot Creager, Omer Ben-Porat, Kevin Swersky,
Richard Zemel, Craig Boutilier
- Abstract summary: We study settings in which content providers cannot remain viable unless they receive a certain level of user engagement.
Our model ensures the system reaches an equilibrium with maximal social welfare supported by a sufficiently diverse set of viable providers.
We draw connections to various notions of user regret and fairness, arguing that these outcomes are fairer in a utilitarian sense.
- Score: 36.54379845220444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most recommender systems (RS) research assumes that a user's utility can be
maximized independently of the utility of the other agents (e.g., other users,
content providers). In realistic settings, this is often not true---the
dynamics of an RS ecosystem couple the long-term utility of all agents. In this
work, we explore settings in which content providers cannot remain viable
unless they receive a certain level of user engagement. We formulate the
recommendation problem in this setting as one of equilibrium selection in the
induced dynamical system, and show that it can be solved as an optimal
constrained matching problem. Our model ensures the system reaches an
equilibrium with maximal social welfare supported by a sufficiently diverse set
of viable providers. We demonstrate that even in a simple, stylized dynamical
RS model, the standard myopic approach to recommendation---always matching a
user to the best provider---performs poorly. We develop several scalable
techniques to solve the matching problem, and also draw connections to various
notions of user regret and fairness, arguing that these outcomes are fairer in
a utilitarian sense.
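
To make the formulation concrete, here is a minimal sketch of the inner optimization, assuming a fixed set of viable providers, a known user-provider utility matrix U, and an engagement floor tau (all illustrative choices; the paper additionally selects which providers remain viable and studies the induced dynamics):

```python
# Hedged sketch, not the authors' code: welfare-maximizing matching of users
# to providers, subject to every provider receiving at least tau users.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_users, n_providers = 6, 3
tau = 2                                        # engagement floor per provider
U = rng.uniform(size=(n_users, n_providers))   # user-provider utilities

c = -U.ravel()                                 # linprog minimizes, so negate

# Each user is matched to exactly one provider.
A_eq = np.zeros((n_users, n_users * n_providers))
for u in range(n_users):
    A_eq[u, u * n_providers:(u + 1) * n_providers] = 1.0
b_eq = np.ones(n_users)

# Each provider receives at least tau users: -sum_u x[u, p] <= -tau.
A_ub = np.zeros((n_providers, n_users * n_providers))
for p in range(n_providers):
    A_ub[p, p::n_providers] = -1.0
b_ub = np.full(n_providers, -float(tau))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
matching = res.x.reshape(n_users, n_providers).round().astype(int)
print("social welfare:", -res.fun)
print(matching)
```

Because these constraints form a transportation polytope (a totally unimodular system), the LP relaxation has an integral optimum, so the rounding above is cosmetic rather than a heuristic.
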
Related papers
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Model (LLM) has the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLM to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- Content Prompting: Modeling Content Provider Dynamics to Improve User Welfare in Recommender Ecosystems [14.416231654089994]
We tackle this information asymmetry with content prompting policies.
A content prompt is a hint or suggestion to a provider to make available novel content for which the RS predicts unmet user demand.
We aim to determine a joint prompting policy that induces a set of providers to make available content that optimizes user social welfare in equilibrium.
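
As a toy illustration of the prompting idea (not the paper's policy), one can score providers by how well their niche covers the estimated unmet demand:

```python
# Illustrative only: unmet demand per topic is predicted user demand minus
# available supply; prompt the provider whose niche best covers the gap.
import numpy as np

demand = np.array([0.4, 0.3, 0.2, 0.1])    # predicted user interest by topic
supply = np.array([0.1, 0.3, 0.4, 0.2])    # share of available content by topic
gap = np.clip(demand - supply, 0.0, None)  # unmet demand

providers = {"A": np.array([0.8, 0.2, 0.0, 0.0]),   # topic affinities
             "B": np.array([0.0, 0.1, 0.6, 0.3])}
prompt_scores = {k: float(aff @ gap) for k, aff in providers.items()}
print(max(prompt_scores, key=prompt_scores.get))    # provider to prompt: "A"
```
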
arXiv Detail & Related papers (2023-09-02T13:35:11Z)
- Ensuring User-side Fairness in Dynamic Recommender Systems [37.20838165555877]
This paper presents the first principled study on ensuring user-side fairness in dynamic recommender systems.
We propose FAir Dynamic rEcommender (FADE), an end-to-end fine-tuning framework to dynamically ensure user-side fairness over time.
We show that FADE effectively and efficiently reduces performance disparities with little sacrifice in the overall recommendation performance.
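
A generic sketch of the underlying idea, assuming a simple two-group disparity penalty (FADE's actual objective and fine-tuning procedure are not reproduced here):

```python
# Hypothetical loss for fairness-aware fine-tuning: mean recommendation loss
# plus a penalty on the performance gap between two user groups.
import numpy as np

def disparity_penalized_loss(per_user_loss, group, lam=0.5):
    """per_user_loss: per-user recommendation losses; group: 0/1 labels."""
    gap = abs(per_user_loss[group == 0].mean() - per_user_loss[group == 1].mean())
    return per_user_loss.mean() + lam * gap

losses = np.array([0.2, 0.9, 0.3, 0.8])
groups = np.array([0, 1, 0, 1])
print(disparity_penalized_loss(losses, groups))  # 0.55 + 0.5 * 0.6 = 0.85
```
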
arXiv Detail & Related papers (2023-08-29T22:03:17Z)
- Incentive-Aware Recommender Systems in Two-Sided Markets [49.692453629365204]
We propose a novel recommender system that aligns with agents' incentives while achieving myopically optimal performance.
Our framework models this incentive-aware system as a multi-agent bandit problem in two-sided markets.
The proposed algorithms satisfy an ex-post fairness criterion, which protects agents from over-exploitation.
arXiv Detail & Related papers (2022-11-23T22:20:12Z)
- Modelling the Recommender Alignment Problem [0.0]
This work aims to shed light on how an end-to-end study of reward functions for recommender systems might be done.
We learn recommender policies that optimize reward functions by controlling graph dynamics on a toy environment.
Based on the effects that trained recommenders have on their environment, we conclude that engagement maximizers generally, though not always, lead to worse outcomes than aligned recommenders.
arXiv Detail & Related papers (2022-08-25T18:37:49Z)
- Interactive Recommendations for Optimal Allocations in Markets with Constraints [12.580391999838128]
We propose an interactive framework where the system provider can enhance the quality of recommendations to the users.
We employ an integrated approach using techniques from collaborative filtering, bandits, and optimal resource allocation.
Empirical studies on synthetic and real-world data also demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-08T22:16:51Z)
- Modeling Attrition in Recommender Systems with Departing Bandits [84.85560764274399]
We propose a novel multi-armed bandit setup that captures policy-dependent horizons.
We first address the case where all users share the same type, demonstrating that a recent UCB-based algorithm is optimal.
We then move forward to the more challenging case, where users are divided among two types.
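
For reference, a textbook UCB1 sketch; the paper's setting differs in that users may depart, making the horizon policy-dependent, and that extension is not shown here:

```python
# Standard UCB1 (not the paper's variant): pull each arm once, then pick the
# arm with the highest empirical mean plus confidence bonus.
import math, random

def ucb1(pull, n_arms, horizon):
    counts, sums = [0] * n_arms, [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                         # initialize: pull each arm once
        else:
            arm = max(range(n_arms), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Bernoulli arms with means 0.3 and 0.7: most pulls should go to arm 1.
print(ucb1(lambda a: float(random.random() < (0.3, 0.7)[a]), 2, 2000))
```
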
arXiv Detail & Related papers (2022-03-25T02:30:54Z)
- PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
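
One minimal way to operationalize unexpectedness (a hedged sketch, not the PURS architecture): add to the base relevance a bonus for distance from the centroid of the user's recent history.

```python
# Hypothetical scoring rule: relevance plus a weighted unexpectedness bonus.
import numpy as np

def unexpected_score(relevance, item_emb, history_embs, weight=0.3):
    """Base relevance plus distance of the item from the history centroid."""
    centroid = history_embs.mean(axis=0)
    return relevance + weight * np.linalg.norm(item_emb - centroid)

history = np.array([[1.0, 0.0], [0.9, 0.1]])   # user's recent item embeddings
print(unexpected_score(0.8, np.array([1.0, 0.0]), history))  # familiar item
print(unexpected_score(0.7, np.array([0.0, 1.0]), history))  # unexpected item wins
```
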
arXiv Detail & Related papers (2021-06-05T01:33:21Z)
- When and Whom to Collaborate with in a Changing Environment: A Collaborative Dynamic Bandit Solution [36.76450390135742]
Collaborative bandit algorithms utilize collaborative filtering techniques to improve sample efficiency in online interactive recommendation.
All existing collaborative bandit learning solutions assume a stationary environment.
We develop a collaborative dynamic bandit solution to handle changing environment for recommendation.
arXiv Detail & Related papers (2021-04-14T22:15:58Z)
- Online Learning Demands in Max-min Fairness [91.37280766977923]
We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof.
The mechanism is repeated for multiple rounds and a user's requirements can change on each round.
At the end of each round, users provide feedback about the allocation they received, enabling the mechanism to learn user preferences over time.
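
A standard progressive-filling sketch of max-min fairness for one divisible resource, assuming demands are known (the paper's mechanism instead learns them from round-by-round feedback):

```python
# Progressive filling: repeatedly split remaining capacity equally among
# unsatisfied users, capping each user at their demand.
def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    active = [i for i, d in enumerate(demands) if d > 0]
    while active and capacity > 1e-12:
        share = capacity / len(active)
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            capacity -= give
            if alloc[i] >= demands[i] - 1e-12:
                active.remove(i)
    return alloc

print(max_min_fair(10.0, [2.0, 2.6, 4.0, 5.0]))  # -> [2.0, 2.6, 2.7, 2.7]
```
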
arXiv Detail & Related papers (2020-12-15T22:15:20Z)