Knowledge-guided Deep Reinforcement Learning for Interactive
Recommendation
- URL: http://arxiv.org/abs/2004.08068v1
- Date: Fri, 17 Apr 2020 05:26:47 GMT
- Title: Knowledge-guided Deep Reinforcement Learning for Interactive
Recommendation
- Authors: Xiaocong Chen, Chaoran Huang, Lina Yao, Xianzhi Wang, Wei Liu, Wenjie
Zhang
- Abstract summary: Interactive recommendation aims to learn from dynamic interactions between items and users to achieve responsiveness and accuracy.
We propose Knowledge-Guided deep Reinforcement learning to harness the advantages of both reinforcement learning and knowledge graphs for interactive recommendation.
- Score: 49.32287384774351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interactive recommendation aims to learn from dynamic interactions between
items and users to achieve responsiveness and accuracy. Reinforcement learning
is inherently advantageous for coping with dynamic environments and thus has
attracted increasing attention in interactive recommendation research. Inspired
by knowledge-aware recommendation, we propose Knowledge-Guided deep
Reinforcement learning (KGRL) to harness the advantages of both reinforcement
learning and knowledge graphs for interactive recommendation. This model is
implemented upon the actor-critic network framework. It maintains a local
knowledge network to guide decision-making and employs the attention mechanism
to capture long-term semantics between items. We have conducted comprehensive
experiments in a simulated online environment with six public real-world
datasets and demonstrated the superiority of our model over several
state-of-the-art methods.
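The abstract describes an actor whose state representation is built by attending over previously interacted items. A minimal sketch of that idea, assuming dot-product attention over item embeddings and a softmax policy over candidates (names such as `attend` and `actor_policy` are illustrative, not KGRL's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attend(history, query):
    """Weight interacted-item embeddings by similarity to a query vector."""
    scores = history @ query / np.sqrt(history.shape[1])
    weights = softmax(scores)
    return weights @ history  # attention-pooled state representation

def actor_policy(state, candidates):
    """Score candidate item embeddings against the state; softmax -> policy."""
    return softmax(candidates @ state)

history = rng.normal(size=(5, embed_dim))     # embeddings of interacted items
query = rng.normal(size=embed_dim)            # hypothetical user-intent vector
candidates = rng.normal(size=(10, embed_dim)) # items the actor can recommend

state = attend(history, query)
probs = actor_policy(state, candidates)
```

In a full actor-critic setup the policy above would be trained against a critic's value estimate; this fragment only shows how attention turns a variable-length interaction history into a fixed-size state for the actor.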
Related papers
- Foundations of Reinforcement Learning and Interactive Decision Making [81.76863968810423]
We present a unifying framework for addressing the exploration-exploitation dilemma using frequentist and Bayesian approaches.
Special attention is paid to function approximation and flexible model classes such as neural networks.
arXiv Detail & Related papers (2023-12-27T21:58:45Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- A Broad-persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments [0.3683202928838613]
Deep Interactive Reinforcement Learning (DeepIRL) incorporates interactive feedback from an external trainer or expert who gives advice to help learners choose actions and speed up the learning process.
In this paper, we present Broad-persistent Advising (BPA), a broad-persistent advising approach that retains and reuses the processed information.
It not only helps trainers give more general advice relevant to similar states rather than only the current state, but also allows the agent to speed up the learning process.
arXiv Detail & Related papers (2021-10-15T10:56:00Z)
- Recent Advances in Heterogeneous Relation Learning for Recommendation [5.390295867837705]
We review the development of recommendation frameworks with the focus on heterogeneous relational learning.
The objective of this task is to map heterogeneous relational data into latent representation space.
We discuss the learning approaches in each category, such as matrix factorization, attention mechanisms, and graph neural networks.
arXiv Detail & Related papers (2021-10-07T13:32:04Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model automatically learns rewards from users' actions using a discriminative actor-critic network and a Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
- Generative Inverse Deep Reinforcement Learning for Online Recommendation [62.09946317831129]
We propose a novel inverse reinforcement learning approach, namely InvRec, for online recommendation.
InvRec automatically extracts the reward function from user behaviors for online recommendation.
arXiv Detail & Related papers (2020-11-04T12:12:25Z)
- A Conceptual Framework for Externally-influenced Agents: An Assisted Reinforcement Learning Review [10.73121872355072]
We propose a conceptual framework and taxonomy for assisted reinforcement learning.
The proposed taxonomy details the relationship between the external information source and the learner agent.
We identify current streams of reinforcement learning that use external information to improve the agent's performance.
arXiv Detail & Related papers (2020-07-03T08:07:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.