KRACL: Contrastive Learning with Graph Context Modeling for Sparse
Knowledge Graph Completion
- URL: http://arxiv.org/abs/2208.07622v1
- Date: Tue, 16 Aug 2022 09:17:40 GMT
- Title: KRACL: Contrastive Learning with Graph Context Modeling for Sparse
Knowledge Graph Completion
- Authors: Zhaoxuan Tan, Zilong Chen, Shangbin Feng, Qingyue Zhang, Qinghua
Zheng, Jundong Li, Minnan Luo
- Abstract summary: Knowledge Graph Embeddings (KGE) aim to map entities and relations to low dimensional spaces and have become the de-facto standard for knowledge graph completion.
Most existing KGE methods suffer from the sparsity challenge, where it is harder to predict entities that appear less frequently in knowledge graphs.
We propose a novel framework to alleviate the widespread sparsity in KGs with graph context and contrastive learning.
- Score: 37.92814873958519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph Embeddings (KGE) aim to map entities and relations to low
dimensional spaces and have become the \textit{de-facto} standard for knowledge
graph completion. Most existing KGE methods suffer from the sparsity challenge,
where it is harder to predict entities that appear less frequently in knowledge
graphs. In this work, we propose a novel framework KRACL to alleviate the
widespread sparsity in KGs with graph context and contrastive learning.
Firstly, we propose the Knowledge Relational Attention Network (KRAT) to
leverage the graph context by simultaneously projecting neighboring triples to
different latent spaces and jointly aggregating messages with the attention
mechanism. KRAT is capable of capturing the subtle semantic information and
importance of different context triples as well as leveraging multi-hop
information in knowledge graphs. Secondly, we propose the knowledge contrastive
loss by combining the contrastive loss with cross entropy loss, which
introduces more negative samples and thus enriches the feedback to sparse
entities. Our experiments demonstrate that KRACL achieves superior results
across various standard knowledge graph benchmarks, especially on WN18RR and
NELL-995 which have large numbers of low in-degree entities. Extensive
experiments also bear out KRACL's effectiveness in handling sparse knowledge
graphs and robustness against noisy triples.
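As a concrete illustration of the attentive context aggregation the abstract describes, below is a minimal sketch. It assumes a single latent space and illustrative tensor shapes (the paper projects context triples into multiple latent spaces); it is not the authors' KRAT implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalAttentionLayer(nn.Module):
    """Attention-weighted aggregation over an entity's context triples.
    Illustrative sketch only, not the authors' KRAT implementation."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)  # project (neighbor, relation) pairs to a latent space
        self.attn = nn.Linear(dim, 1)        # scalar attention score per context triple
        self.self_proj = nn.Linear(dim, dim)

    def forward(self, h, nbr_ent, nbr_rel):
        # h:       (dim,)   embedding of the center entity
        # nbr_ent: (n, dim) embeddings of neighboring entities
        # nbr_rel: (n, dim) embeddings of the connecting relations
        msgs = torch.tanh(self.proj(torch.cat([nbr_ent, nbr_rel], dim=-1)))  # (n, dim)
        alpha = F.softmax(self.attn(msgs).squeeze(-1), dim=0)                # (n,)
        context = (alpha.unsqueeze(-1) * msgs).sum(dim=0)                    # (dim,)
        return F.relu(self.self_proj(h) + context)

# Stacking two such layers mixes 2-hop context, which is one way to realize
# the "multi-hop information" mentioned in the abstract.
layer = RelationalAttentionLayer(dim=64)
h_new = layer(torch.randn(64), torch.randn(5, 64), torch.randn(5, 64))
```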
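Likewise, a hedged sketch of one plausible instantiation of a combined contrastive + cross-entropy objective in the spirit of the knowledge contrastive loss; the temperature `tau` and mixing weight `lam` are assumptions, not the paper's hyperparameters.

```python
import torch
import torch.nn.functional as F

def knowledge_contrastive_loss(query, ent_emb, target, tau=0.1, lam=0.5):
    """Sketch of a combined contrastive + cross-entropy objective.
    `tau` (temperature) and `lam` (mixing weight) are illustrative
    assumptions, not the paper's hyperparameters.

    query:   (B, d) encoded (head, relation) queries
    ent_emb: (N, d) all entity embeddings; every non-target entity acts
                    as a negative, enriching feedback to sparse entities
    target:  (B,)   indices of the gold tail entities
    """
    ce = F.cross_entropy(query @ ent_emb.t(), target)  # standard cross entropy
    # InfoNCE-style term over cosine similarities with temperature scaling
    sim = F.normalize(query, dim=-1) @ F.normalize(ent_emb, dim=-1).t()
    con = F.cross_entropy(sim / tau, target)
    return lam * con + (1.0 - lam) * ce
```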
Related papers
- CausE: Towards Causal Knowledge Graph Embedding [13.016173217017597]
Knowledge graph embedding (KGE) focuses on representing the entities and relations of a knowledge graph (KG) in continuous vector spaces.
We build a new paradigm of KGE grounded in causality and embedding disentanglement.
We propose a Causality-enhanced knowledge graph Embedding (CausE) framework.
arXiv Detail & Related papers (2023-07-21T14:25:39Z)
- Efficient Relation-aware Neighborhood Aggregation in Graph Neural Networks via Tensor Decomposition [4.041834517339835]
We propose a novel knowledge graph embedding model that incorporates tensor decomposition within the aggregation function of the Relational Graph Convolutional Network (R-GCN).
Our model enhances the representation of neighboring entities by employing projection matrices of a low-rank tensor defined by relation types.
We adopt a training strategy inspired by contrastive learning to relieve the training limitations of the 1-k-k encoder method when handling vast graphs.
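As a rough illustration of the low-rank idea (not this paper's exact model), relation projection matrices can be generated from a CP-style factorization, so parameters scale with the rank rather than with num_relations × d²:

```python
import torch
import torch.nn as nn

class LowRankRelProjection(nn.Module):
    """Sketch: relation projections W_r built from a rank-k CP-style
    factorization of a (num_rel, dim, dim) tensor; illustrative only."""

    def __init__(self, num_rel: int, dim: int, rank: int):
        super().__init__()
        self.coef = nn.Parameter(torch.randn(num_rel, rank))  # per-relation mixing weights
        self.u = nn.Parameter(torch.randn(rank, dim))
        self.v = nn.Parameter(torch.randn(rank, dim))

    def forward(self, rel_id: int) -> torch.Tensor:
        # W_r = sum_k coef[r, k] * outer(u_k, v_k)  -> (dim, dim)
        return torch.einsum('k,kd,ke->de', self.coef[rel_id], self.u, self.v)

# A neighbor message then becomes W_r @ e_neighbor inside R-GCN-style aggregation:
proj = LowRankRelProjection(num_rel=10, dim=32, rank=4)
msg = proj(rel_id=3) @ torch.randn(32)
```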
arXiv Detail & Related papers (2022-12-11T19:07:34Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
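A simplified sketch of a localized node-level contrastive objective of this kind, treating first-order neighbors as positives; this conveys the general idea only and is not the paper's exact (or scalable) formulation:

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(z, adj, tau=0.5):
    """Sketch of a localized node contrastive objective: each node's
    first-order neighbors serve as positives, all other nodes as
    negatives. Illustrative only; the paper's actual objective and
    its scalable approximation may differ.

    z:   (N, d) node embeddings from an encoder
    adj: (N, N) binary (float) adjacency matrix without self-loops
    """
    z = F.normalize(z, dim=-1)
    sim = torch.exp(z @ z.t() / tau)               # (N, N) exponentiated similarities
    pos = (sim * adj).sum(dim=1).clamp_min(1e-12)  # similarity mass on neighbors
    denom = sim.sum(dim=1) - sim.diagonal()        # all pairs except the node itself
    return -torch.log(pos / denom).mean()
```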
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Simple and Effective Relation-based Embedding Propagation for Knowledge Representation Learning [15.881121633396832]
We propose the Relation-based Embedding Propagation (REP) method to adapt pretrained graph embeddings with context.
We show that REP brings about a 10% relative improvement to triplet-based embedding methods on OGBL-WikiKG2.
It takes 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.
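One plausible reading of relation-based propagation, sketched under explicit assumptions (a TransE-style t - r message and a simple mixing update; neither is claimed to be the paper's rule):

```python
import torch

def rep_propagate(ent_emb, rel_emb, triples, steps=2, alpha=0.5):
    """Sketch of relation-based embedding propagation: refine pretrained
    entity embeddings by mixing in relation-translated neighbor
    embeddings. The update rule and hyperparameters are illustrative
    assumptions, not the paper's.

    ent_emb: (N, d) pretrained entity embeddings
    rel_emb: (R, d) pretrained relation embeddings
    triples: (T, 3) long tensor of (head, relation, tail) index rows
    """
    h, r, t = triples.t()
    ones = torch.ones(h.size(0))
    for _ in range(steps):
        msg = torch.zeros_like(ent_emb)
        msg.index_add_(0, h, ent_emb[t] - rel_emb[r])  # sum messages at each head
        deg = torch.zeros(ent_emb.size(0)).index_add_(0, h, ones).clamp(min=1.0)
        ent_emb = (1 - alpha) * ent_emb + alpha * msg / deg.unsqueeze(-1)
    return ent_emb
```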
arXiv Detail & Related papers (2022-05-13T06:02:13Z)
- Knowledge Graph Contrastive Learning for Recommendation [32.918864602360884]
We design a general Knowledge Graph Contrastive Learning framework to alleviate the information noise for knowledge graph-enhanced recommender systems.
Specifically, we propose a knowledge graph augmentation schema to suppress KG noise in information aggregation.
We exploit additional supervision signals from the KG augmentation process to guide a cross-view contrastive learning paradigm.
arXiv Detail & Related papers (2022-05-02T15:24:53Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to address limitations of existing GCL methods.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- KGE-CL: Contrastive Learning of Knowledge Graph Embeddings [64.67579344758214]
We propose a simple yet efficient contrastive learning framework for knowledge graph embeddings.
It shortens the semantic distance between related entities and entity-relation couples that appear in different triples.
It yields new state-of-the-art results, achieving 51.2% MRR and 46.8% Hits@1 on the WN18RR dataset, and 59.1% MRR and 51.8% Hits@1 on the YAGO3-10 dataset.
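A minimal sketch of the general recipe, an InfoNCE loss over encoded entity-relation couples; the pairing construction here is an assumption rather than the paper's exact scheme:

```python
import torch
import torch.nn.functional as F

def couple_contrastive_loss(anchor, positive, tau=0.07):
    """Sketch of contrastive learning over (entity, relation) couples:
    anchor[i] and positive[i] encode couples assumed to share semantics
    (e.g., appearing in related triples); all other rows act as
    negatives. Illustrative pairing, not the paper's construction.

    anchor, positive: (B, d) encoded couples
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / tau           # (B, B) similarity matrix
    labels = torch.arange(a.size(0))   # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```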
arXiv Detail & Related papers (2021-12-09T12:45:33Z)
- DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention Network [48.38954651216983]
We propose a novel Disentangled Knowledge Graph Attention Network (DisenKGAT) for knowledge graphs.
DisenKGAT uses both micro-disentanglement and macro-disentanglement to exploit the representations behind knowledge graphs.
The model is robust and flexible enough to adapt to various score functions.
arXiv Detail & Related papers (2021-08-22T04:10:35Z)
- RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model of word embeddings (Arora et al., 2016a) to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
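The summary does not give the derived scoring function; purely for orientation, a score of the following shape would fit a two-matrix relation parameterization (an assumption on our part, and the paper's exact form may differ):

```latex
% Hedged sketch: plausible shape of a random-walk-derived relation score,
% with relation R represented by two matrices (R_1, R_2) acting on the
% head and tail embeddings; the paper's exact form may differ.
f(h, R, t) \;=\; \bigl\lVert \mathbf{R}_1\,\mathbf{h} + \mathbf{R}_2\,\mathbf{t} \bigr\rVert_2^{2}
```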
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.