Understanding or Manipulation: Rethinking Online Performance Gains of
Modern Recommender Systems
- URL: http://arxiv.org/abs/2210.05662v2
- Date: Mon, 18 Dec 2023 14:13:03 GMT
- Authors: Zhengbang Zhu, Rongjun Qin, Junjie Huang, Xinyi Dai, Yang Yu, Yong Yu
and Weinan Zhang
- Abstract summary: We present a framework for benchmarking the degree of manipulation of recommendation algorithms.
We find that a high online click-through rate does not necessarily mean a better understanding of users' initial preferences.
We advocate that future recommendation algorithms be studied as an optimization problem with constraints on user preference manipulation.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recommender systems are expected to be assistants that help human users find
relevant information automatically without explicit queries. As recommender
systems evolve, increasingly sophisticated learning techniques are applied and
have achieved better performance in terms of user engagement metrics such as
clicks and browsing time. The increase in the measured performance, however,
can have two possible attributions: a better understanding of user preferences,
or a more proactive exploitation of human bounded rationality that induces user
over-consumption. A natural follow-up question is whether current
recommendation algorithms are manipulating user preferences and, if so,
whether the manipulation level can be measured. In this paper, we present a
general framework for benchmarking the degree of manipulation of recommendation algorithms, in
both slate recommendation and sequential recommendation scenarios. The
framework consists of four stages: initial preference calculation, training
data collection, algorithm training and interaction, and metric calculation
involving two proposed metrics. We benchmark several representative
recommendation algorithms on both synthetic and real-world datasets under the
proposed framework. We observe that a high online click-through rate does not
necessarily indicate a better understanding of users' initial preferences;
rather, it can result from prompting users to choose documents they initially did not favor.
Moreover, we find that the training data have a notable impact on the degree
of manipulation, and algorithms with stronger modeling abilities are more
sensitive to this impact. The experiments also verify the usefulness of the
proposed metrics for measuring the degree of manipulation. We advocate that
future recommendation algorithms be studied as an optimization problem with
constraints on user preference manipulation.
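The four-stage pipeline described above can be illustrated with a toy simulation. Everything here is an illustrative assumption, not the paper's actual protocol: the user model (preferences drift toward clicked items, a stand-in for bounded rationality), the greedy recommender, and the drift metric (mean L1 distance between final and initial preference vectors) are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_users=1000, n_items=50, n_rounds=20, manipulation=0.0):
    """Toy interaction loop sketching the four benchmark stages.

    `manipulation` controls how strongly exposure shifts user preferences
    (a hypothetical stand-in for bounded rationality, not the paper's model).
    Returns (online CTR, mean preference drift from the initial preferences).
    """
    # Stage 1: initial preference calculation (random preference distributions)
    init_pref = rng.dirichlet(np.ones(n_items), size=n_users)
    pref = init_pref.copy()
    clicks = 0
    # Stages 2-3 collapsed: a fixed greedy policy interacts with users
    for _ in range(n_rounds):
        # recommend each user's currently most-preferred item
        rec = pref.argmax(axis=1)
        p_click = pref[np.arange(n_users), rec]
        clicked = rng.random(n_users) < p_click
        clicks += clicked.sum()
        # exposure nudges preferences toward clicked items (preference drift)
        pref[np.arange(n_users), rec] += manipulation * clicked
        pref /= pref.sum(axis=1, keepdims=True)
    # Stage 4: metric calculation
    ctr = clicks / (n_users * n_rounds)  # engagement metric
    drift = np.abs(pref - init_pref).sum(axis=1).mean()  # manipulation proxy
    return ctr, drift
```

With `manipulation=0.0` the drift is zero by construction; raising it lets CTR grow while final preferences move away from the initial ones, mirroring the paper's observation that engagement gains need not reflect a better understanding of initial preferences.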
Related papers
- Dissertation: On the Theoretical Foundation of Model Comparison and Evaluation for Recommender System [4.76281731053599]
Recommender systems utilize users' historical data to infer customer interests and provide personalized recommendations.
Collaborative filtering is one family of recommendation algorithms that uses ratings from multiple users to predict missing ratings.
Recommender systems can be more complex and incorporate auxiliary data such as content-based attributes, user interactions, and contextual information.
arXiv Detail & Related papers (2024-11-04T06:31:52Z) - Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z) - Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences [7.552217586057245]
We propose a simulation framework that mimics user-recommender system interactions in a long-term scenario.
We introduce two novel metrics for quantifying the algorithm's impact on user preferences, specifically in terms of drift over time.
arXiv Detail & Related papers (2024-09-24T21:54:22Z) - Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation [51.25461871988366]
We propose a sequential recommendation algorithm based on a pre-trained language model and knowledge distillation.
The proposed algorithm enhances recommendation accuracy and provides timely recommendation services.
arXiv Detail & Related papers (2024-09-23T08:39:07Z) - The Fault in Our Recommendations: On the Perils of Optimizing the Measurable [2.6217304977339473]
We show that optimizing for engagement can lead to significant utility losses.
We propose a utility-aware policy that initially recommends a mix of popular and niche content.
arXiv Detail & Related papers (2024-05-07T02:12:17Z) - Meta-Wrapper: Differentiable Wrapping Operator for User Interest
Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability of the user to click on an item, has become increasingly significant in recommender systems.
Recent deep learning models with the ability to automatically extract the user interest from his/her behaviors have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z) - On the Generalizability and Predictability of Recommender Systems [33.46314108814183]
We give the first large-scale study of recommender system approaches.
We create Reczilla, a meta-learning approach to recommender systems.
arXiv Detail & Related papers (2022-06-23T17:51:42Z) - Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
Cold-start recommendation is a pressing problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR holds the ability to learn the common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z) - Do Offline Metrics Predict Online Performance in Recommender Systems? [79.48653445643865]
We investigate the extent to which offline metrics predict online performance by evaluating recommenders across six simulated environments.
We observe that offline metrics are correlated with online performance over a range of environments.
We study the impact of adding exploration strategies, and observe that their effectiveness, when compared to greedy recommendation, is highly dependent on the recommendation algorithm.
arXiv Detail & Related papers (2020-11-07T01:41:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.