Recommendation and Temptation
- URL: http://arxiv.org/abs/2412.10595v1
- Date: Fri, 13 Dec 2024 22:44:22 GMT
- Title: Recommendation and Temptation
- Authors: Md Sanzeed Anwar, Paramveer S. Dhillon, Grant Schoenebeck
- Abstract summary: We propose a novel user model that accounts for dual-self behavior and develop an optimal recommendation strategy.
We evaluate our approach through both synthetic simulations and simulations based on real-world data from the MovieLens dataset.
- Score: 3.734925590025741
- Abstract: Traditional recommender systems based on utility maximization and revealed preferences often fail to capture users' dual-self nature, where consumption choices are driven by both long-term benefits (enrichment) and desire for instant gratification (temptation). Consequently, these systems may generate recommendations that fail to provide long-lasting satisfaction to users. To address this issue, we propose a novel user model that accounts for this dual-self behavior and develop an optimal recommendation strategy to maximize enrichment from consumption. We highlight the limitations of historical consumption data in implementing this strategy and present an estimation framework that makes minimal assumptions and leverages explicit user feedback and implicit choice data to overcome these constraints. We evaluate our approach through both synthetic simulations and simulations based on real-world data from the MovieLens dataset. Results demonstrate that our proposed recommender can deliver superior enrichment compared to several competitive baseline algorithms that assume a single utility type and rely solely on revealed preferences. Our work emphasizes the critical importance of optimizing for enrichment in recommender systems, particularly in temptation-laden consumption contexts. Our findings have significant implications for content platforms, user experience design, and the development of responsible AI systems, paving the way for more nuanced and user-centric recommendation approaches.
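To make the dual-self setup concrete: choices are driven by enrichment plus temptation, while lasting satisfaction comes from enrichment alone, so a recommender that ranks purely by revealed preferences over-serves tempting items. The toy Python simulation below is a minimal sketch of that gap, not the paper's actual model or estimator; the logit choice rule, the `beta` temptation weight, and the noisy enrichment estimate (standing in for the paper's estimator built from explicit feedback and choice data) are all assumptions of this sketch.

```python
# Toy illustration (not the paper's formal model): items carry separate
# "enrichment" and "temptation" values. Choices depend on both, but
# long-run satisfaction depends on enrichment alone, so a recommender
# trained purely on revealed preferences over-serves tempting items.
# All functional forms and parameters below are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(0)
n_items = 500
enrichment = rng.normal(0.0, 1.0, n_items)   # long-term benefit of each item
temptation = rng.normal(0.0, 1.0, n_items)   # instant-gratification pull
beta = 2.0                                   # assumed weight of temptation at choice time

def user_choice(slate):
    """User picks from a slate via a logit rule on enrichment + beta * temptation."""
    v = enrichment[slate] + beta * temptation[slate]
    p = np.exp(v - v.max())
    p /= p.sum()
    return rng.choice(slate, p=p)

def simulate(ranking_scores, rounds=2000, slate_size=10):
    """Average enrichment actually consumed when recommending the top-scored items."""
    slate = np.argsort(-ranking_scores)[:slate_size]
    chosen = [user_choice(slate) for _ in range(rounds)]
    return enrichment[chosen].mean()

# Baseline: rank by what users tend to choose (revealed-preference proxy).
revealed_pref_scores = enrichment + beta * temptation

# Enrichment-oriented: rank by a noisy estimate of enrichment, standing in for
# an estimator that combines explicit feedback with implicit choice data.
estimated_enrichment = enrichment + rng.normal(0.0, 0.3, n_items)

print("avg enrichment, revealed-preference ranking:", round(simulate(revealed_pref_scores), 3))
print("avg enrichment, enrichment-based ranking:   ", round(simulate(estimated_enrichment), 3))
```

Under these assumptions, the enrichment-based ranking delivers higher average enrichment than the revealed-preference ranking, mirroring the comparison against single-utility baselines described in the abstract.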
Related papers
- Interactive Visualization Recommendation with Hier-SUCB [52.11209329270573]
We propose an interactive personalized visualization recommendation (PVisRec) system that learns from user feedback gathered in previous interactions.
For more interactive and accurate recommendations, we propose Hier-SUCB, a contextual semi-bandit in the PVisRec setting.
arXiv Detail & Related papers (2025-02-05T17:14:45Z)
- Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal.
We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z)
- Learning Recommender Systems with Soft Target: A Decoupled Perspective [49.83787742587449]
We propose a novel decoupled soft-label optimization framework that considers the objectives as two aspects by leveraging soft labels.
We also present a soft-label generation algorithm, modeled on label propagation, that explores users' latent interests in unobserved feedback via neighboring users.
arXiv Detail & Related papers (2024-10-09T04:20:15Z)
- The MovieLens Beliefs Dataset: Collecting Pre-Choice Data for Online Recommender Systems [0.0]
This paper introduces a method for collecting user beliefs about unexperienced items - a critical predictor of choice behavior.
We implement this method on the MovieLens platform, resulting in a rich dataset that combines user ratings, beliefs, and observed recommendations.
arXiv Detail & Related papers (2024-05-17T19:06:06Z)
- Debiased Recommendation with Neural Stratification [19.841871819722016]
We propose clustering users to increase exposure densities and thereby compute more accurate inverse propensity scores (IPS).
We conduct extensive experiments based on real-world datasets to demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-08-15T15:45:35Z)
- CausPref: Causal Preference Learning for Out-of-Distribution Recommendation [36.22965012642248]
Current recommender systems remain vulnerable to distribution shifts of users and items in realistic scenarios.
We propose to incorporate the recommendation-specific DAG learner into a novel causal preference-based recommendation framework named CausPref.
Our approach significantly surpasses benchmark models under various types of out-of-distribution settings.
arXiv Detail & Related papers (2022-02-08T16:42:03Z)
- PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z)
- Latent Unexpected Recommendations [89.2011481379093]
We propose to model unexpectedness in the latent space of user and item embeddings, which allows us to capture hidden and complex relations between new recommendations and historic purchases.
In addition, we develop a novel Latent Closure (LC) method to construct a hybrid utility function and provide unexpected recommendations based on the proposed model.
arXiv Detail & Related papers (2020-07-27T02:39:30Z)
- Convolutional Gaussian Embeddings for Personalized Recommendation with Uncertainty [17.258674767363345]
Most existing embedding-based recommendation models use embeddings corresponding to a single fixed point in low-dimensional space.
We propose a unified deep recommendation framework employing Gaussian embeddings, which are shown to adapt to uncertain preferences.
Our framework adopts Monte-Carlo sampling and convolutional neural networks to compute the correlation between the target user and the candidate item (a minimal sketch of the Monte-Carlo scoring step appears at the end of this list).
arXiv Detail & Related papers (2020-06-19T02:10:38Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations that violate users' historical preferences.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
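The Convolutional Gaussian Embeddings entry above scores a user-item pair by sampling from Gaussian embeddings. The sketch below illustrates only that Monte-Carlo step under assumed names and shapes; the paper's convolutional correlation module is replaced here by a plain inner product for brevity, so this is an illustration of the idea rather than the paper's method.

```python
# Minimal sketch of user-item scoring with Gaussian embeddings via
# Monte-Carlo sampling. The cited paper pairs the sampled embeddings with a
# convolutional correlation module; a plain inner product stands in for it
# here, and all names, shapes, and values are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 16, 64

def sample_gaussian(mean, log_var, k):
    """Draw k samples from a diagonal Gaussian embedding."""
    std = np.exp(0.5 * log_var)
    return mean + std * rng.standard_normal((k, mean.shape[0]))

def mc_score(user_mean, user_log_var, item_mean, item_log_var, k=n_samples):
    """Monte-Carlo estimate of user-item affinity under embedding uncertainty."""
    u = sample_gaussian(user_mean, user_log_var, k)   # (k, dim) user samples
    v = sample_gaussian(item_mean, item_log_var, k)   # (k, dim) item samples
    return float(np.mean(np.sum(u * v, axis=1)))      # average inner product

# Example: one user with moderate embedding uncertainty vs. two candidate items.
user_mu, user_lv = rng.normal(size=dim), np.full(dim, -1.0)
item_a = (rng.normal(size=dim), np.full(dim, -3.0))
item_b = (rng.normal(size=dim), np.full(dim, -3.0))
print("score A:", mc_score(user_mu, user_lv, *item_a))
print("score B:", mc_score(user_mu, user_lv, *item_b))
```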
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.