Preference Dynamics Under Personalized Recommendations
- URL: http://arxiv.org/abs/2205.13026v1
- Date: Wed, 25 May 2022 19:29:53 GMT
- Title: Preference Dynamics Under Personalized Recommendations
- Authors: Sarah Dean and Jamie Morgenstern
- Abstract summary: We explore whether some phenomenon akin to polarization occurs when users receive personalized content recommendations.
A more interesting objective is to understand under what conditions a recommendation algorithm can ensure stationarity of users' preferences.
- Score: 12.89628003097857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many projects (both practical and academic) have designed algorithms to match
users to content they will enjoy under the assumption that users' preferences
and opinions do not change with the content they see. Evidence suggests that
individuals' preferences are directly shaped by what content they see --
radicalization, rabbit holes, polarization, and boredom are all example
phenomena of preferences affected by content. Polarization in particular can
occur even in ecosystems with "mass media," where no personalization takes
place, as recently explored in a natural model of preference dynamics
by Hązła et al. (2019) and Gaitonde et al. (2021). If all
users' preferences are drawn towards content they already like, or are repelled
from content they already dislike, uniform consumption of media leads to a
population of heterogeneous preferences converging towards only two poles.
In this work, we explore whether some phenomenon akin to polarization occurs
when users receive \emph{personalized} content recommendations. We use a
similar model of preference dynamics, where an individual's preferences move
towards content they consume and enjoy, and away from content they consume and
dislike. We show that standard user reward maximization is an almost trivial
goal in such an environment (a large class of simple algorithms will achieve
only constant regret). A more interesting objective, then, is to understand
under what conditions a recommendation algorithm can ensure stationarity of
users' preferences. We show how to design content recommendations which can
achieve approximate stationarity, under mild conditions on the set of available
content, when a user's preferences are known, and how one can learn enough
about a user's preferences to implement such a strategy even when user
preferences are initially unknown.
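To make the dynamics concrete, below is a minimal Python sketch of a geometric preference-dynamics model of the kind the abstract describes: a unit-norm preference vector drifts toward items the user consumes and likes (positive inner product) and away from items they consume and dislike (negative inner product). The step size `eta`, the dimension, and the uniformly sampled content pool are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_preference(u, x, eta=0.1):
    """One biased-assimilation-style preference update (illustrative sketch).

    If the user's affinity for the consumed item x is positive, the preference
    vector u moves toward x; if negative, it moves away. The result is
    renormalized to stay on the unit sphere. The paper's exact rule may differ.
    """
    affinity = float(np.dot(u, x))
    direction = np.sign(affinity)          # +1 toward liked content, -1 away from disliked
    u_next = u + eta * direction * x
    norm = np.linalg.norm(u_next)
    return u_next / norm if norm > 0 else u

rng = np.random.default_rng(0)
d = 2                                       # preference/content dimension (assumed)
u = rng.normal(size=d)
u /= np.linalg.norm(u)                      # initial user preference on the unit sphere
content = rng.normal(size=(50, d))
content /= np.linalg.norm(content, axis=1, keepdims=True)  # unit-norm content pool

for t in range(200):
    x = content[rng.integers(len(content))]  # uniform ("mass media") exposure
    u = update_preference(u, x)

print("final preference direction:", u)
```

Under uniform exposure, updates of this kind tend to drive heterogeneous initial preferences toward opposite poles; the paper instead asks what a personalized recommender can do in such an environment, for example choosing content so that the preference vector stays approximately stationary.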
Related papers
- ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z)
- DegustaBot: Zero-Shot Visual Preference Estimation for Personalized Multi-Object Rearrangement [53.86523017756224]
We present DegustaBot, an algorithm for visual preference learning that solves household multi-object rearrangement tasks according to personal preference.
We collect a large dataset of naturalistic personal preferences in a simulated table-setting task.
We find that 50% of our model's predictions are likely to be found acceptable by at least 20% of people.
arXiv Detail & Related papers (2024-07-11T21:28:02Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- A First Look at Selection Bias in Preference Elicitation for Recommendation [64.44255178199846]
We study the effect of selection bias in preference elicitation on the resulting recommendations.
A big hurdle is the lack of any publicly available dataset that contains preference elicitation interactions.
We propose a simulation of a topic-based preference elicitation process.
arXiv Detail & Related papers (2024-05-01T14:56:56Z)
- An Offer you Cannot Refuse? Trends in the Coerciveness of Amazon Book Recommendations [0.0]
We use Barrier-to-Exit, a metric for how difficult it is for users to change preferences, to analyse a large dataset of Amazon Book Ratings from 1998 to 2018.
Our findings indicate a highly significant growth of Barrier-to-Exit over time, suggesting that it has become more difficult for the analysed subset of users to change their preferences.
arXiv Detail & Related papers (2023-10-21T16:32:38Z)
- Collaborative filtering to capture AI user's preferences as norms [0.4640835690336652]
Current methods require too much user involvement and fail to capture true preferences.
We argue that a new perspective is required when constructing norms.
Inspired by recommender systems, we believe that collaborative filtering can offer a suitable approach.
arXiv Detail & Related papers (2023-08-01T15:14:23Z)
- Recommending to Strategic Users [10.079698681921673]
We show that users strategically choose content to influence the types of content they get recommended in the future.
We propose three interventions that may improve recommendation quality when taking into account strategic consumption.
arXiv Detail & Related papers (2023-02-13T17:57:30Z)
- Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback [76.7007545844273]
We propose a multi-objective decision making framework that accommodates different user preferences over objectives.
Our model consists of a Markov decision process with a vector-valued reward function, with each user having an unknown preference vector.
We suggest an algorithm that finds a nearly optimal policy for the user using a small number of comparison queries.
arXiv Detail & Related papers (2023-02-07T23:58:19Z)
- Modeling Dynamic User Preference via Dictionary Learning for Sequential Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
arXiv Detail & Related papers (2022-04-02T03:23:46Z)