Explicit User Manipulation in Reinforcement Learning Based Recommender
Systems
- URL: http://arxiv.org/abs/2203.10629v1
- Date: Sun, 20 Mar 2022 19:03:18 GMT
- Title: Explicit User Manipulation in Reinforcement Learning Based Recommender
Systems
- Authors: Matthew Sparr
- Abstract summary: Reinforcement learning based recommender systems can learn to influence users if that means maximizing clicks, engagement, or consumption.
Social media has been shown to be a contributing factor to increased political polarization.
Explicit user manipulation, in which the beliefs and opinions of users are tailored towards a certain end, emerges as a significant concern.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems are highly prevalent in the modern world due to their
value to both users and the platforms and services that employ them. Generally,
they can improve the user experience and help to increase satisfaction, but
they do not come without risks. One such risk is that of their effect on users
and their ability to play an active role in shaping user preferences. This risk
is more significant for reinforcement learning based recommender systems. These
are capable of learning, for instance, how recommended content shown to a user
today may alter that user's preference for other content recommended in the
future. Reinforcement learning based recommender systems can thus implicitly
learn to influence users if that means maximizing clicks, engagement, or
consumption. On social news and media platforms, in particular, this type of
behavior is cause for alarm. Social media undoubtedly plays a role in public
opinion and has been shown to be a contributing factor to increased political
polarization. Recommender systems on such platforms, therefore, have great
potential to influence users in undesirable ways. However, it may also be
possible for this form of manipulation to be used intentionally. With
advancements in political opinion dynamics modeling and larger collections of
user data, explicit user manipulation in which the beliefs and opinions of
users are tailored towards a certain end emerges as a significant concern in
reinforcement learning based recommender systems.
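The implicit-influence dynamic described in the abstract can be illustrated with a toy simulation. Everything below is hypothetical and not from the paper: a user whose preferences drift toward whatever content they are shown, and a simple epsilon-greedy bandit agent that maximizes clicks. The agent's exploitation and the user's drift reinforce each other, concentrating the user's preference on the most-shown item.

```python
import random

# Toy sketch (illustrative only): a click-maximizing agent facing a user
# whose preferences drift toward whatever item is shown to them.

random.seed(0)

N_ITEMS = 3
DRIFT = 0.05  # how strongly exposure pulls preference toward the shown item

def simulate(steps=5000, epsilon=0.1):
    pref = [0.6, 0.3, 0.1]   # user's true click probabilities per item
    q = [0.0] * N_ITEMS      # agent's running click-rate estimates
    counts = [0] * N_ITEMS
    for _ in range(steps):
        # Epsilon-greedy recommendation.
        if random.random() < epsilon:
            item = random.randrange(N_ITEMS)
        else:
            item = max(range(N_ITEMS), key=lambda i: q[i])
        click = 1.0 if random.random() < pref[item] else 0.0
        counts[item] += 1
        q[item] += (click - q[item]) / counts[item]
        # Preference dynamics: exposure nudges preference toward the shown item.
        for i in range(N_ITEMS):
            target = 1.0 if i == item else 0.0
            pref[i] += DRIFT * (target - pref[i])
        total = sum(pref)
        pref = [p / total for p in pref]
    return pref, q

final_pref, q = simulate()
```

Running this, the agent does not merely estimate a static preference; by repeatedly showing its current best item it actively reshapes `pref` until one item dominates, which is the implicit manipulation the abstract warns about.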
Related papers
- Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems [3.990406494980651]
This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems.
By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration affect their recommendations.
arXiv Detail & Related papers (2024-09-10T23:58:27Z)
- System-2 Recommenders: Disentangling Utility and Engagement in Recommendation Systems via Temporal Point-Processes [80.97898201876592]
We propose a generative model in which past content interactions impact the arrival rates of users based on a self-exciting Hawkes process.
We show analytically that, given samples, it is possible to disentangle System-1 and System-2 behavior and to optimize content based on user utility.
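A self-exciting Hawkes process of the kind this entry mentions can be simulated with Ogata's thinning algorithm, a standard technique; the intensity form and parameter values below are illustrative and are not taken from the paper.

```python
import math
import random

# Minimal Hawkes process simulation via Ogata's thinning algorithm.
# Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
# so each past interaction temporarily raises the user's arrival rate.

def simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, horizon=50.0, seed=1):
    random.seed(seed)
    events, t = [], 0.0
    while t < horizon:
        # Intensity at the current time bounds future intensity (it only
        # decays until the next event), so it is a valid thinning bound.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:
            events.append(t)  # accepted arrival: user returns to the platform
    return events
```

With `alpha / beta < 1` the process is stable; raising `alpha` makes past engagement more self-exciting, which is the lever such a model gives for separating engagement from utility.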
arXiv Detail & Related papers (2024-05-29T18:19:37Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- Recommending to Strategic Users [10.079698681921673]
We show that users strategically choose content to influence the types of content they get recommended in the future.
We propose three interventions that may improve recommendation quality when taking into account strategic consumption.
arXiv Detail & Related papers (2023-02-13T17:57:30Z)
- Influential Recommender System [12.765277278599541]
We present Influential Recommender System (IRS), a new recommendation paradigm that aims to proactively lead a user to like a given objective item.
IRS progressively recommends to the user a sequence of carefully selected items, called an influence path.
We show that IRN significantly outperforms the baseline recommenders and demonstrates its capability of influencing users' interests.
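The influence-path idea can be sketched as a greedy search in a preference space: at each step, pick an item the user would plausibly accept now while moving their interest toward the objective item. All names, thresholds, and the drift model below are hypothetical, not the paper's actual algorithm.

```python
# Hypothetical influence-path sketch: items and users are points in a
# shared embedding space; exposure shifts the user toward consumed items.

def influence_path(user, target, catalog, accept_radius=0.45, drift=0.5, max_steps=20):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    path, current = [], list(user)
    for _ in range(max_steps):
        if dist(current, target) <= accept_radius:
            path.append(tuple(target))   # objective item is now acceptable
            break
        # Candidates the user would plausibly accept right now.
        ok = [c for c in catalog if dist(current, c) <= accept_radius]
        if not ok:
            break
        # Among acceptable items, step toward the objective.
        nxt = min(ok, key=lambda c: dist(c, target))
        path.append(nxt)
        # Exposure shifts the user's interest toward the consumed item.
        current = [x + drift * (y - x) for x, y in zip(current, nxt)]
    return path
```

For a catalog of items spaced along a line between the user's starting interest and the objective, this returns a chain of intermediate items ending at the objective, which is the "influence path" behavior the entry describes.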
arXiv Detail & Related papers (2022-11-18T03:04:45Z)
- Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users receive significantly lower-quality recommendations than head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z)
- Estimating and Penalizing Induced Preference Shifts in Recommender Systems [10.052697877248601]
We argue that system designers should: estimate the shifts a recommender would induce; evaluate whether such shifts would be undesirable; and even actively optimize to avoid problematic shifts.
We do this by using historical user interaction data to train a predictive user model that implicitly captures their preference dynamics.
In simulated experiments, we show that our learned preference dynamics model is effective in estimating user preferences and how they would respond to new recommenders.
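As a minimal illustration of estimating preference dynamics from logged data, the sketch below assumes a simple linear drift model, p_next = p + k * (a - p), and recovers the drift rate k by least squares. This is an assumption for illustration only, not the paper's actual user model, and real systems observe preferences only indirectly.

```python
# Illustrative drift-rate estimation from logged (preference, action,
# next-preference) triples, assuming linear drift toward recommended items.

def estimate_drift(transitions):
    """transitions: iterable of (p_t, a_t, p_next) scalars."""
    num = den = 0.0
    for p, a, p_next in transitions:
        x = a - p       # regressor: gap between shown item and preference
        y = p_next - p  # response: observed preference shift
        num += x * y
        den += x * x
    return num / den if den else 0.0
```

Once a dynamics model like this is fitted, a designer can simulate how a candidate recommender would shift preferences before deploying it, which is the evaluation loop the entry describes.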
arXiv Detail & Related papers (2022-04-25T21:04:46Z)
- Causal Disentanglement with Network Information for Debiased Recommendations [34.698181166037564]
Recent research proposes to debias by modeling a recommender system from a causal perspective.
The critical challenge in this setting is accounting for the hidden confounders.
We propose to leverage network information (i.e., user-social and user-item networks) to better approximate hidden confounders.
arXiv Detail & Related papers (2022-04-14T20:55:11Z)
- Generative Inverse Deep Reinforcement Learning for Online Recommendation [62.09946317831129]
We propose a novel inverse reinforcement learning approach, namely InvRec, for online recommendation.
InvRec automatically extracts the reward function from users' behaviors for online recommendation.
arXiv Detail & Related papers (2020-11-04T12:12:25Z)
- Empowering Active Learning to Jointly Optimize System and User Demands [70.66168547821019]
We propose a new active learning approach that jointly optimizes the active learning system (efficient training) and the user (receiving useful instances).
We study our approach in an educational application, which particularly benefits from this technique as the system needs to rapidly learn to predict the appropriateness of an exercise to a particular user.
We evaluate multiple learning strategies and user types with data from real users and find that our joint approach better satisfies both objectives when alternative methods lead to many unsuitable exercises for end users.
arXiv Detail & Related papers (2020-05-09T16:02:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.