Knowledge Graph Contrastive Learning for Recommendation
- URL: http://arxiv.org/abs/2205.00976v1
- Date: Mon, 2 May 2022 15:24:53 GMT
- Title: Knowledge Graph Contrastive Learning for Recommendation
- Authors: Yuhao Yang, Chao Huang, Lianghao Xia, Chenliang Li
- Abstract summary: We design a general Knowledge Graph Contrastive Learning framework to alleviate the information noise for knowledge graph-enhanced recommender systems.
Specifically, we propose a knowledge graph augmentation schema to suppress KG noise in information aggregation.
We exploit additional supervision signals from the KG augmentation process to guide a cross-view contrastive learning paradigm.
- Score: 32.918864602360884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs (KGs) have been utilized as useful side information to
improve recommendation quality. In those recommender systems, knowledge graph
information often contains fruitful facts and inherent semantic relatedness
among items. However, the success of such methods relies on high-quality
knowledge graphs, and they may fail to learn quality representations under two
challenges: i) The long-tail distribution of entities results in sparse
supervision signals for KG-enhanced item representation; ii) Real-world
knowledge graphs are often noisy and contain topic-irrelevant connections
between items and entities. Such KG sparsity and noise make the modeled
item-entity relations deviate from their true characteristics, which
significantly amplifies the noise effect and hinders the accurate
representation of users' preferences.
To fill this research gap, we design a general Knowledge Graph Contrastive
Learning framework (KGCL) that alleviates the information noise for knowledge
graph-enhanced recommender systems. Specifically, we propose a knowledge graph
augmentation schema to suppress KG noise in information aggregation, and derive
more robust knowledge-aware representations for items. In addition, we exploit
additional supervision signals from the KG augmentation process to guide a
cross-view contrastive learning paradigm, giving a greater role to unbiased
user-item interactions in gradient descent and further suppressing the noise.
Extensive experiments on three public datasets demonstrate the consistent
superiority of our KGCL over state-of-the-art techniques. KGCL also achieves
strong performance in recommendation scenarios with sparse user-item
interactions, long-tail and noisy KG entities. Our implementation codes are
available at https://github.com/yuh-yang/KGCL-SIGIR22
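
The abstract describes two coupled components: a stochastic augmentation of the KG that suppresses unreliable item-entity edges, and a cross-view contrastive objective over the item representations produced from the augmented views. The snippet below is a minimal, illustrative sketch of that general pattern in PyTorch; it is not the authors' implementation (see the linked repository for that), and the placeholder encoder, edge keep probability, and temperature are assumptions made only for illustration.

```python
import torch
import torch.nn.functional as F

def drop_kg_edges(triplets: torch.Tensor, keep_prob: float = 0.9) -> torch.Tensor:
    """Randomly drop (item, relation, entity) triplets to build one augmented KG view."""
    mask = torch.rand(triplets.size(0)) < keep_prob
    return triplets[mask]

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """Cross-view InfoNCE: the same item under the two augmented KG views is the positive
    pair; all other items in the batch serve as negatives."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature     # [batch, batch] similarity matrix
    labels = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Usage sketch: `kg_encoder` stands in for any KG-aware item encoder (e.g. a relational GNN).
kg_triplets = torch.randint(0, 1000, (5000, 3))                 # toy (item, relation, entity) triplets
item_ids = torch.arange(256)                                    # one training batch of items
kg_encoder = lambda items, kg: torch.randn(items.size(0), 64)   # placeholder encoder, illustration only

view_a = drop_kg_edges(kg_triplets)
view_b = drop_kg_edges(kg_triplets)
loss_cl = info_nce(kg_encoder(item_ids, view_a), kg_encoder(item_ids, view_b))
```

In a full recommender, a contrastive term of this kind would typically be added to the main recommendation loss so that both objectives shape the item embeddings.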
Related papers
- Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation [84.45144851024257]
CoGCL aims to enhance graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes.
We introduce a multi-level vector quantizer in an end-to-end manner to quantize user and item representations into discrete codes.
For neighborhood structure, we propose virtual neighbor augmentation by treating discrete codes as virtual neighbors.
Regarding semantic relevance, we identify similar users/items based on shared discrete codes and interaction targets to generate the semantically relevant view.
arXiv Detail & Related papers (2024-09-09T14:04:17Z)
- Heterogeneous Hypergraph Embedding for Recommendation Systems [45.49449132970778]
We present a novel Knowledge-enhanced Heterogeneous Hypergraph Recommender System (KHGRec).
KHGRec captures group-wise characteristics of both the interaction network and the KG, modeling complex connections in the KG.
It fuses signals from the input graphs with cross-view self-supervised learning and attention mechanisms.
arXiv Detail & Related papers (2024-07-04T06:09:11Z)
- Knowledge Enhanced Multi-intent Transformer Network for Recommendation [11.53964363972865]
We propose a novel approach named Knowledge Enhanced Multi-intent Transformer Network for Recommendation (KGTN).
Global Intents with Graph Transformer focuses on capturing learnable user intents, by incorporating global signals from user-item-relation-entity interactions with a graph transformer.
Knowledge Contrastive Denoising under Intents is dedicated to learning precise and robust representations.
arXiv Detail & Related papers (2024-05-31T01:07:37Z)
- Knowledge Graph Pruning for Recommendation [44.21660467094777]
We propose a novel approach called KGTrimmer for knowledge graph pruning tailored for recommendation.
For the collective view, we embrace the idea of collective intelligence by extracting community consensus based on abundant collaborative signals.
Next, we build an end-to-end importance-aware graph neural network, which injects filtered knowledge to enhance the distillation of valuable user-item collaborative signals.
arXiv Detail & Related papers (2024-05-19T12:07:24Z)
- On the Sweet Spot of Contrastive Views for Knowledge-enhanced Recommendation [49.18304766331156]
We propose a new contrastive learning framework for KG-enhanced recommendation.
We construct two separate contrastive views for KG and IG, and maximize their mutual information.
Extensive experimental results on three real-world datasets demonstrate the effectiveness and efficiency of our method.
arXiv Detail & Related papers (2023-09-23T14:05:55Z)
- KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion [37.92814873958519]
Knowledge Graph Embeddings (KGE) aim to map entities and relations to low-dimensional spaces and have become the de facto standard for knowledge graph completion.
Most existing KGE methods suffer from the sparsity challenge, where it is harder to predict entities that appear less frequently in knowledge graphs.
We propose a novel framework to alleviate the widespread sparsity in KGs with graph context and contrastive learning.
arXiv Detail & Related papers (2022-08-16T09:17:40Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It combines high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Conditional Attention Networks for Distilling Knowledge Graphs in Recommendation [74.14009444678031]
We propose Knowledge-aware Conditional Attention Networks (KCAN) to incorporate the knowledge graph into a recommender system.
We first use knowledge-aware attention propagation to obtain node representations, which capture the global semantic similarity on the user-item network and the knowledge graph.
Then, by applying a conditional attention aggregation on the subgraph, we refine the knowledge graph to obtain target-specific node representations.
arXiv Detail & Related papers (2021-11-03T09:40:43Z)
- DSKReG: Differentiable Sampling on Knowledge Graph for Recommendation with Relational GNN [59.160401038969795]
We propose differentiable sampling on Knowledge Graph for Recommendation with GNN (DSKReG).
We devise a differentiable sampling strategy, which enables the selection of relevant items to be jointly optimized with the model training procedure.
The experimental results demonstrate that our model outperforms state-of-the-art KG-based recommender systems.
arXiv Detail & Related papers (2021-08-26T16:19:59Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)