Hierarchical Reinforcement Learning for Modeling User Novelty-Seeking Intent in Recommender Systems
- URL: http://arxiv.org/abs/2306.01476v1
- Date: Fri, 2 Jun 2023 12:02:23 GMT
- Title: Hierarchical Reinforcement Learning for Modeling User Novelty-Seeking Intent in Recommender Systems
- Authors: Pan Li, Yuyan Wang, Ed H. Chi and Minmin Chen
- Abstract summary: We propose a novel hierarchical reinforcement learning-based method to model the hierarchical user novelty-seeking intent.
We further incorporate diversity- and novelty-related measurements in the reward function of the hierarchical RL (HRL) agent to encourage user exploration.
- Score: 26.519571240032967
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommending novel content, which expands user horizons by introducing them
to new interests, has been shown to improve users' long-term experience on
recommendation platforms \cite{chen2021values}. Users, however, are not
constantly looking to explore novel content. It is therefore crucial to
understand their novelty-seeking intent and adjust the recommendation policy
accordingly. Most existing literature models a user's propensity to choose
novel content or to prefer a more diverse set of recommendations at individual
interactions. Hierarchical structure, on the other hand, exists in a user's
novelty-seeking intent, which is manifested as a static and intrinsic user
preference for seeking novelty along with a dynamic session-based propensity.
To this end, we propose a novel hierarchical reinforcement learning-based
method to model the hierarchical user novelty-seeking intent, and to adapt the
recommendation policy accordingly based on the extracted user novelty-seeking
propensity. We further incorporate diversity- and novelty-related measurements in
the reward function of the hierarchical RL (HRL) agent to encourage user
exploration \cite{chen2021values}. We demonstrate the benefits of explicitly
modeling hierarchical user novelty-seeking intent in recommendations through
extensive experiments on simulated and real-world datasets. In particular, we
demonstrate that the effectiveness of our proposed hierarchical RL-based method
lies in its ability to capture such hierarchically-structured intent. As a
result, the proposed HRL model achieves superior performance on several public
datasets compared with state-of-the-art baselines.
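The abstract's two key ingredients, a high-level module combining a static, intrinsic novelty preference with a dynamic, session-based propensity, and a reward augmented with novelty- and diversity-related terms, can be sketched in code. The following PyTorch snippet is a minimal illustration only; all class names, architecture choices, and the exact reward form are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NoveltySeekingHRLAgent(nn.Module):
    """Two-level sketch: a high-level module infers a session-level
    novelty-seeking propensity from in-session states, and a low-level
    policy scores items conditioned on that propensity.
    (Illustrative only; not the paper's architecture.)"""

    def __init__(self, state_dim: int, num_items: int, hidden: int = 64):
        super().__init__()
        # Static, intrinsic component of the user's novelty preference.
        self.static_propensity = nn.Parameter(torch.zeros(1))
        # Dynamic, session-based component read off a recurrent session encoder.
        self.session_encoder = nn.GRU(state_dim, hidden, batch_first=True)
        self.propensity_head = nn.Linear(hidden, 1)
        # Low-level policy: item logits given the current state and propensity.
        self.policy_head = nn.Sequential(
            nn.Linear(state_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_items),
        )

    def forward(self, session_states: torch.Tensor):
        # session_states: (batch, seq_len, state_dim) of in-session interactions
        _, h_n = self.session_encoder(session_states)
        dynamic = self.propensity_head(h_n[-1])                  # (batch, 1)
        propensity = torch.sigmoid(self.static_propensity + dynamic)
        current_state = session_states[:, -1, :]                 # (batch, state_dim)
        item_logits = self.policy_head(
            torch.cat([current_state, propensity], dim=-1)
        )
        return item_logits, propensity


def novelty_augmented_reward(base_reward, novelty, diversity, propensity, alpha=1.0):
    """Reward shaping in the spirit of the abstract: novelty- and
    diversity-related terms are added to the base engagement reward,
    scaled by the inferred propensity (the exact form is an assumption)."""
    return base_reward + alpha * propensity * (novelty + diversity)
```

In a full system, the high-level propensity would be updated at session granularity while the low-level policy acts at each interaction, which is the temporal hierarchy the abstract emphasizes.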
Related papers
- Dual Contrastive Transformer for Hierarchical Preference Modeling in Sequential Recommendation [23.055217651991537]
Sequential recommender systems (SRSs) aim to predict the subsequent items which may interest users.
Most existing SRSs model only a single, low-level user preference based on item ID information.
We propose a novel hierarchical preference modeling framework to jointly capture the complex low- and high-level preference dynamics.
arXiv Detail & Related papers (2024-10-30T08:09:33Z)
- Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL to provide different levels of temporal abstraction on listwise recommendation.
Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy.
Experiments show significant performance improvements from our method compared with several well-known baselines.
arXiv Detail & Related papers (2024-09-11T17:01:06Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders, however, lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Intent Contrastive Learning for Sequential Recommendation [86.54439927038968]
We introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
We propose to leverage the learned intents in SR models via contrastive SSL, which maximizes the agreement between a view of a sequence and its corresponding intent.
Experiments conducted on four real-world datasets demonstrate the superiority of the proposed learning paradigm.
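As a rough illustration of the recipe summarized above (cluster sequence representations to learn the latent intent distribution, then maximize agreement between a sequence view and its intent via a contrastive objective), consider the sketch below. The choice of k-means, the InfoNCE-style loss, and all names are assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def fit_intent_prototypes(seq_embeddings: torch.Tensor, num_intents: int = 32):
    """Cluster sequence representations to estimate intent prototypes
    (k-means here stands in for 'learning the distribution via clustering')."""
    km = KMeans(n_clusters=num_intents, n_init=10)
    km.fit(seq_embeddings.detach().cpu().numpy())
    centroids = torch.as_tensor(km.cluster_centers_, dtype=seq_embeddings.dtype)
    assignments = torch.as_tensor(km.labels_, dtype=torch.long)
    return centroids, assignments

def intent_contrastive_loss(seq_views, centroids, assignments, temperature=0.1):
    """InfoNCE-style objective: pull each (augmented) sequence view toward its
    assigned intent prototype and away from the other prototypes."""
    views = F.normalize(seq_views, dim=-1)          # (batch, dim)
    protos = F.normalize(centroids, dim=-1)         # (num_intents, dim)
    logits = views @ protos.T / temperature         # (batch, num_intents)
    return F.cross_entropy(logits, assignments)
```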
arXiv Detail & Related papers (2022-02-05T09:24:13Z)
- Reinforcement Learning based Path Exploration for Sequential Explainable Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL).
TMER-RL utilizes reinforcement item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolutions on a dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z)
- D2RLIR: an improved and diversified ranking function in interactive recommendation systems based on deep reinforcement learning [0.3058685580689604]
This paper proposes a deep reinforcement learning-based recommendation system utilizing an actor-critic architecture.
The proposed model is able to generate a diverse yet relevant recommendation list based on the user's preferences.
arXiv Detail & Related papers (2021-10-28T13:11:29Z)
- Modeling User Behaviour in Research Paper Recommendation System [8.980876474818153]
A user intention model is proposed based on deep sequential topic analysis.
The model predicts a user's intention in terms of the topic of interest.
The proposed approach introduces a new road map for modeling user activity, suitable for the design of a research paper recommendation system.
arXiv Detail & Related papers (2021-07-16T11:31:03Z)
- Offline Meta-level Model-based Reinforcement Learning Approach for Cold-Start Recommendation [27.17948754183511]
Reinforcement learning has shown great promise in optimizing long-term user interest in recommender systems.
Existing RL-based recommendation methods need a large number of interactions for each user to learn a robust recommendation policy.
We propose a meta-level model-based reinforcement learning approach for fast user adaptation.
arXiv Detail & Related papers (2020-12-04T08:58:35Z)
- Towards Open-World Recommendation: An Inductive Model-based Collaborative Filtering Approach [115.76667128325361]
Recommendation models can effectively estimate underlying user interests and predict users' future behaviors.
We propose an inductive collaborative filtering framework that contains two representation models.
Our model achieves promising recommendation results for few-shot users with limited training ratings and for new, unseen users.
arXiv Detail & Related papers (2020-07-09T14:31:25Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations that violate the user's historical preferences; a minimal sketch of this idea appears after this list.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
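The discriminator idea from the last entry above can be illustrated with a short sketch: a learned critic flags recommendations that conflict with the user's historical preferences, and its score is subtracted from the RL reward. The architecture, names, and penalty form below are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class PreferenceDiscriminator(nn.Module):
    """Scores the probability that a recommended item violates the user's
    historical preferences (illustrative architecture only)."""

    def __init__(self, state_dim: int, item_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + item_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, user_state: torch.Tensor, item_emb: torch.Tensor):
        # Concatenate the user state with the candidate item embedding
        # and return a violation probability in (0, 1).
        return self.net(torch.cat([user_state, item_emb], dim=-1))

def constrained_reward(base_reward, violation_prob, penalty_weight=1.0):
    # Subtract a penalty whenever the discriminator flags a likely violation,
    # steering the RL agent toward preference-consistent recommendations.
    return base_reward - penalty_weight * violation_prob
```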