Reinforced Negative Sampling over Knowledge Graph for Recommendation
- URL: http://arxiv.org/abs/2003.05753v1
- Date: Thu, 12 Mar 2020 12:44:30 GMT
- Title: Reinforced Negative Sampling over Knowledge Graph for Recommendation
- Authors: Xiang Wang, Yaokun Xu, Xiangnan He, Yixin Cao, Meng Wang, Tat-Seng
Chua
- Abstract summary: We develop a new negative sampling model, Knowledge Graph Policy Network (KGPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
KGPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
- Score: 106.07209348727564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Properly handling missing data is a fundamental challenge in recommendation.
Most prior works perform negative sampling from unobserved data to supply the
training of recommender models with negative signals. Nevertheless, existing
negative sampling strategies, whether static or adaptive, are insufficient
to yield high-quality negative samples --- both informative to model training
and reflective of users' real needs. In this work, we hypothesize that an item
knowledge graph (KG), which provides rich relations among items and KG
entities, could be useful to infer informative and factual negative samples.
Towards this end, we develop a new negative sampling model, Knowledge Graph
Policy Network (KGPolicy), which works as a reinforcement learning agent to
explore high-quality negatives. Specifically, by conducting our designed
exploration operations, it navigates from the target positive interaction,
adaptively receives knowledge-aware negative signals, and ultimately yields a
potential negative item to train the recommender. We test KGPolicy on a
matrix factorization (MF) recommender, where it achieves significant
improvements over both state-of-the-art sampling methods, such as DNS and
IRGAN, and KG-enhanced recommender models, such as KGAT. Further analyses
from different angles provide insights into knowledge-aware sampling. We
release the code and datasets at https://github.com/xiangwang1223/kgpolicy.
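As a rough illustration only (this is not the authors' code; their full implementation lives in the repository above), the following Python sketch shows the core idea of knowledge-aware negative sampling for training a pairwise recommender: walk the item KG outward from a positive item so that sampled negatives share KG context with it while remaining unobserved for the user. All names here (`kg`, `kg_walk_negative`, `user_history`, `all_items`) are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical toy item knowledge graph: entity -> neighboring entities,
# built from (head, relation, tail) triples; relations are dropped for brevity.
kg = defaultdict(set)

def add_triple(head, relation, tail):
    kg[head].add(tail)
    kg[tail].add(head)

def kg_walk_negative(positive_item, user_history, all_items, hops=2):
    """Return a candidate negative item reachable within `hops` KG steps of
    the positive item. Such items share KG context with the positive (hence
    are informative to the recommender) yet carry no observed feedback, so
    they are more plausible true negatives than uniform random draws."""
    node = positive_item
    for _ in range(hops):
        neighbors = list(kg[node])
        if not neighbors:
            break
        node = random.choice(neighbors)
    # The walk may end on a non-item KG entity or on an item the user has
    # already consumed; fall back to uniform sampling over unobserved items.
    if node == positive_item or node in user_history or node not in all_items:
        node = random.choice([i for i in all_items if i not in user_history])
    return node
```

In KGPolicy itself the step at each hop is not uniform: a policy network scores the neighbors conditioned on the user and the positive item, and is trained with a policy-gradient (REINFORCE-style) objective using the recommender's prediction on the sampled negative as the reward signal.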
Related papers
- TDCGL: Two-Level Debiased Contrastive Graph Learning for Recommendation [1.5836776102398225]
The long-tailed distribution of KG entities, together with real-world noise, makes item-entity relations deviate from items' true characteristics.
We design Two-Level Debiased Contrastive Learning (TDCL) and deploy it on the knowledge graph.
Extensive experiments on open-source datasets demonstrate that our method has excellent anti-noise capability.
arXiv Detail & Related papers (2023-10-01T03:56:38Z) - Neighborhood-based Hard Negative Mining for Sequential Recommendation [14.66576013324401]
Negative sampling plays a crucial role in training successful sequential recommendation models.
We propose a Graph-based Negative sampling approach based on Neighborhood Overlap (GNNO) to exploit structural information hidden in user behaviors for negative mining.
arXiv Detail & Related papers (2023-06-12T12:28:54Z) - Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z) - Language Model-driven Negative Sampling [8.299192665823542]
Knowledge Graph Embeddings (KGEs) encode the entities and relations of a knowledge graph (KG) into a vector space for representation learning and reasoning in downstream tasks (e.g., link prediction, question answering).
Since KGEs follow the closed-world assumption and treat all facts present in the KG as positive (correct), they also require negative samples as a counterpart during training to test the truthfulness of existing triples.
We propose an approach for generating negative samples that exploits the rich textual knowledge existing in KGs.
arXiv Detail & Related papers (2022-03-09T13:27:47Z) - DSKReG: Differentiable Sampling on Knowledge Graph for Recommendation
with Relational GNN [59.160401038969795]
We propose Differentiable Sampling on Knowledge Graph for Recommendation with relational GNN (DSKReG).
We devise a differentiable sampling strategy, which enables the selection of relevant items to be jointly optimized with the model training procedure.
The experimental results demonstrate that our model outperforms state-of-the-art KG-based recommender systems.
arXiv Detail & Related papers (2021-08-26T16:19:59Z) - Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function (a minimal InfoNCE sketch with $K$ explicit negatives appears after this list).
arXiv Detail & Related papers (2021-05-27T08:38:29Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation (NDA) samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z) - Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution via self-contrast approximation and accelerates negative sampling with Metropolis-Hastings (a minimal sketch of the Metropolis-Hastings step appears after this list).
We evaluate our method on 5 datasets covering extensive downstream graph learning tasks, including link prediction, node classification, and personalized recommendation.
arXiv Detail & Related papers (2020-05-20T06:25:21Z)