Solutions to preference manipulation in recommender systems require knowledge of meta-preferences
- URL: http://arxiv.org/abs/2209.11801v1
- Date: Wed, 14 Sep 2022 15:01:13 GMT
- Title: Solutions to preference manipulation in recommender systems require knowledge of meta-preferences
- Authors: Hal Ashton, Matija Franklin
- Abstract summary: Some preference changes on the part of the user are self-induced and desired whether the recommender caused them or not.
This paper proposes that solutions to preference manipulation in recommender systems must take into account certain meta-preferences.
- Score: 7.310043452300736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Iterative machine learning algorithms used to power recommender systems often
change people's preferences by trying to learn them. Further, a recommender can
better predict what a user will do by making its users more predictable. Some
preference changes on the part of the user are self-induced and desired whether
the recommender caused them or not. This paper proposes that solutions to
preference manipulation in recommender systems must take into account certain
meta-preferences (preferences over another preference) in order to respect the
autonomy of the user and not be manipulative.
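As a rough illustration of the meta-preference idea (a preference over another preference), the sketch below treats a recommender-induced preference shift as manipulative only when the user holds no meta-preference endorsing a change in that direction. The data structures, field names, and decision rule are hypothetical; this is a minimal sketch of the framing, not an implementation from the paper.
```python
from dataclasses import dataclass

@dataclass
class PreferenceShift:
    """A change in a user's preference over some item category (hypothetical schema)."""
    category: str               # e.g. "documentaries"
    old_weight: float           # preference strength before exposure to the recommender
    new_weight: float           # preference strength after exposure
    recommender_induced: bool   # whether the recommender plausibly caused the change

@dataclass
class MetaPreference:
    """A preference over one's own preferences (e.g. 'I want to like documentaries more')."""
    category: str
    desired_direction: int      # +1: wants this preference to grow, -1: wants it to shrink

def is_manipulative(shift: PreferenceShift, meta_prefs: list[MetaPreference]) -> bool:
    """Treat a recommender-induced shift as manipulative unless the user holds a
    meta-preference endorsing a change in that direction (assumed decision rule)."""
    if not shift.recommender_induced:
        return False  # self-induced changes are outside the recommender's responsibility
    direction = 1 if shift.new_weight > shift.old_weight else -1
    endorsed = any(
        mp.category == shift.category and mp.desired_direction == direction
        for mp in meta_prefs
    )
    return not endorsed

# Example: the user wanted to like documentaries more, so this induced shift is not flagged.
shift = PreferenceShift("documentaries", old_weight=0.2, new_weight=0.6, recommender_induced=True)
meta = [MetaPreference("documentaries", desired_direction=+1)]
print(is_manipulative(shift, meta))  # False
```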
Related papers
- Post-Userist Recommender Systems: A Manifesto [1.7157586976839874]
We define userist recommendation as an approach to recommender systems framed solely in terms of the relation between the user and system.
Post-userist recommendation posits a larger field of relations in which stakeholders are embedded and distinguishes the recommendation function from generative media.
arXiv Detail & Related papers (2024-10-09T03:16:37Z)
- User Consented Federated Recommender System Against Personalized Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z)
- RecRec: Algorithmic Recourse for Recommender Systems [41.97186998947909]
It is crucial for all stakeholders to understand the model's rationale behind making certain predictions and recommendations.
This is especially true for the content providers whose livelihoods depend on the recommender system.
We propose a recourse framework for recommender systems, targeted towards the content providers.
arXiv Detail & Related papers (2023-08-28T22:26:50Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- Editable User Profiles for Controllable Text Recommendation [66.00743968792275]
We propose LACE, a novel concept value bottleneck model for controllable text recommendations.
LACE represents each user with a succinct set of human-readable concepts.
It learns personalized representations of the concepts based on user documents.
arXiv Detail & Related papers (2023-04-09T14:52:18Z)
- Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback [76.7007545844273]
We propose a multi-objective decision making framework that accommodates different user preferences over objectives.
Our model consists of a Markov decision process with a vector-valued reward function, with each user having an unknown preference vector.
We suggest an algorithm that finds a nearly optimal policy for the user using a small number of comparison queries (a minimal sketch of this setup appears after this list).
arXiv Detail & Related papers (2023-02-07T23:58:19Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on his/her interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Estimating and Penalizing Induced Preference Shifts in Recommender Systems [10.052697877248601]
We argue that system designers should: estimate the shifts a recommender would induce; evaluate whether such shifts would be undesirable; and even actively optimize to avoid problematic shifts.
We do this by using historical user interaction data to train a predictive user model that implicitly contains users' preference dynamics.
In simulated experiments, we show that our learned preference dynamics model is effective in estimating user preferences and how they would respond to new recommenders (a toy version of this rollout appears after this list).
arXiv Detail & Related papers (2022-04-25T21:04:46Z)
- Explicit User Manipulation in Reinforcement Learning Based Recommender Systems [0.0]
Reinforcement learning based recommender systems can learn to influence users if that means maximizing clicks, engagement, or consumption.
Social media has been shown to be a contributing factor to increased political polarization.
Explicit user manipulation, in which the beliefs and opinions of users are tailored towards a certain end, emerges as a significant concern.
arXiv Detail & Related papers (2022-03-20T19:03:18Z)
- Deviation-Based Learning [5.304857921982131]
We propose deviation-based learning, a new approach to training recommender systems.
We show that learning frequently stalls if the recommender always recommends a choice.
Social welfare and the learning rate are improved drastically if the recommender abstains from recommending a choice.
arXiv Detail & Related papers (2021-09-20T19:51:37Z)
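As flagged in the multi-objective decision-making entry above, the following is a minimal sketch of scalarizing a vector-valued policy return with a user's preference vector and answering a single comparison query. The policies, return vectors, and preference vector are hypothetical placeholders; this illustrates the setup, not the paper's algorithm.
```python
import numpy as np

# Each policy is summarized by its expected vector-valued return over k objectives
# (hypothetical numbers, e.g. relevance, diversity, novelty).
policy_returns = {
    "policy_A": np.array([0.8, 0.1, 0.3]),
    "policy_B": np.array([0.4, 0.6, 0.5]),
}

# The user's preference vector w is unknown to the system; here we simulate one.
true_w = np.array([0.2, 0.5, 0.3])

def scalarized_value(vector_return: np.ndarray, w: np.ndarray) -> float:
    """Scalar utility of a policy for a user with preference vector w."""
    return float(vector_return @ w)

def comparison_query(name_a: str, name_b: str) -> str:
    """Simulate asking the user which of two policies they prefer."""
    va = scalarized_value(policy_returns[name_a], true_w)
    vb = scalarized_value(policy_returns[name_b], true_w)
    return name_a if va >= vb else name_b

# Each answer reveals which side of a hyperplane w lies on; repeated queries
# shrink the feasible region for w until a near-optimal policy can be identified.
print(comparison_query("policy_A", "policy_B"))
```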
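The induced-preference-shift entry above mentions rolling a learned preference-dynamics model forward to estimate the shifts a candidate recommender would cause. The toy sketch below substitutes a hand-written drift rule (an assumption for illustration, not the paper's learned model) and measures the size of the induced shift, which a designer could then evaluate or penalize.
```python
import numpy as np

def simulate_preference_shift(
    initial_prefs: np.ndarray,
    recommend,               # callable: preferences -> index of recommended item category
    drift_rate: float = 0.05,
    steps: int = 50,
) -> float:
    """Roll a toy preference-dynamics model forward under a recommender and
    return the size of the induced shift (L2 distance from the initial preferences)."""
    prefs = initial_prefs.copy()
    for _ in range(steps):
        item = recommend(prefs)
        # Toy dynamics: exposure nudges the preference for the recommended category upward.
        prefs[item] += drift_rate * (1.0 - prefs[item])
        prefs /= prefs.sum()  # keep preferences normalized
    return float(np.linalg.norm(prefs - initial_prefs))

initial = np.array([0.5, 0.3, 0.2])
rng = np.random.default_rng(0)

# Compare the shift induced by a myopic recommender that always pushes the currently
# most-preferred category with one that samples in proportion to current preferences.
greedy_shift = simulate_preference_shift(initial, lambda p: int(np.argmax(p)))
sampled_shift = simulate_preference_shift(initial, lambda p: int(rng.choice(len(p), p=p)))
print(greedy_shift, sampled_shift)
```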
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.