Simple and Effective Relation-based Embedding Propagation for Knowledge
Representation Learning
- URL: http://arxiv.org/abs/2205.06456v1
- Date: Fri, 13 May 2022 06:02:13 GMT
- Title: Simple and Effective Relation-based Embedding Propagation for Knowledge
Representation Learning
- Authors: Huijuan Wang, Siming Dai, Weiyue Su, Hui Zhong, Zeyang Fang, Zhengjie
Huang, Shikun Feng, Zeyu Chen, Yu Sun, Dianhai Yu
- Abstract summary: We propose the Relation-based Embedding Propagation (REP) method to adapt pretrained graph embeddings with context.
We show that REP brings about a 10% relative improvement to triplet-based embedding methods on OGBL-WikiKG2.
It takes 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.
- Score: 15.881121633396832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relational graph neural networks have garnered particular attention to encode
graph context in knowledge graphs (KGs). Although they achieved competitive
performance on small KGs, how to efficiently and effectively utilize graph
context for large KGs remains an open problem. To this end, we propose the
Relation-based Embedding Propagation (REP) method. It is a post-processing
technique to adapt pre-trained KG embeddings with graph context. As relations
in KGs are directional, we model the incoming head context and the outgoing
tail context separately. Accordingly, we design relational context functions
with no external parameters. Besides, we use averaging to aggregate context
information, making REP more computation-efficient. We theoretically prove that
such designs can avoid information distortion during propagation. Extensive
experiments also demonstrate that REP has significant scalability while
improving or maintaining prediction quality. Notably, on average it brings about a
10% relative improvement to triplet-based embedding methods on OGBL-WikiKG2 and
takes 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.
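The abstract's recipe — model the incoming head context and outgoing tail context separately, use parameter-free relational context functions, and aggregate by averaging — can be sketched as below. This is a minimal illustration, not the paper's implementation: the function and variable names are invented here, and the translation-style context functions (`e_h + r`, `e_t - r`) are an assumption in the spirit of the TransE-family scorers that REP post-processes.

```python
import numpy as np

def rep_propagate(ent, rel, triples, alpha=0.1, steps=2):
    """REP-style post-processing sketch: adapt pretrained KG embeddings
    with direction-aware graph context, aggregated by a plain mean
    (no learned propagation parameters).

    ent     : (num_entities, dim) pretrained entity embeddings
    rel     : (num_relations, dim) pretrained relation embeddings
    triples : iterable of (head, relation, tail) index triples
    alpha   : mixing weight between the original embedding and its context
    """
    ent = ent.copy()
    n, _ = ent.shape
    for _ in range(steps):
        ctx = np.zeros_like(ent)
        cnt = np.zeros(n)
        for h, r, t in triples:
            # Incoming head context for the tail entity (assumed
            # TransE-style): f(e_h, r) = e_h + r
            ctx[t] += ent[h] + rel[r]
            cnt[t] += 1
            # Outgoing tail context for the head entity:
            # g(e_t, r) = e_t - r
            ctx[h] += ent[t] - rel[r]
            cnt[h] += 1
        mask = cnt > 0
        ctx[mask] /= cnt[mask, None]   # mean aggregation over neighbors
        # Blend context into the pretrained embeddings; isolated
        # entities are left untouched.
        ent[mask] = (1 - alpha) * ent[mask] + alpha * ctx[mask]
    return ent
```

Because the context functions carry no extra parameters and aggregation is a simple average, each propagation step is a single pass over the triples, which is what makes this viable as a cheap post-processing step on large KGs.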
Related papers
- Graph Context Transformation Learning for Progressive Correspondence
Pruning [26.400567961735234]
We propose Graph Context Transformation Network (GCT-Net) enhancing context information to conduct consensus guidance for progressive correspondence pruning.
Specifically, we design the Graph Context Enhance Transformer which first generates the graph network and then transforms it into multi-branch graph contexts.
To further apply the recalibrated graph contexts to the global domain, we propose the Graph Context Guidance Transformer.
arXiv Detail & Related papers (2023-12-26T09:43:30Z)
- Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph
Completion [69.55700751102376]
Few-shot knowledge graph completion (FKGC) aims to predict missing facts for unseen relations with few-shot associated facts.
Existing FKGC methods are based on metric learning or meta-learning, which often suffer from the out-of-distribution and overfitting problems.
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC).
arXiv Detail & Related papers (2023-04-17T11:42:28Z)
- Efficient Relation-aware Neighborhood Aggregation in Graph Neural Networks via Tensor Decomposition [4.041834517339835]
We propose a novel knowledge graph embedding approach that incorporates tensor decomposition within the aggregation function of the Relational Graph Convolutional Network (R-GCN).
Our model enhances the representation of neighboring entities by employing projection matrices of a low-rank tensor defined by relation types.
We adopt a training strategy inspired by contrastive learning to mitigate the training limitations of the 1-k-k encoder method when handling vast graphs.
arXiv Detail & Related papers (2022-12-11T19:07:34Z)
- KRACL: Contrastive Learning with Graph Context Modeling for Sparse
Knowledge Graph Completion [37.92814873958519]
Knowledge Graph Embeddings (KGE) aim to map entities and relations to low-dimensional spaces and have become the de-facto standard for knowledge graph completion.
Most existing KGE methods suffer from the sparsity challenge, where it is harder to predict entities that appear less frequently in knowledge graphs.
We propose a novel framework to alleviate the widespread sparsity in KGs with graph context and contrastive learning.
arXiv Detail & Related papers (2022-08-16T09:17:40Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph
Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.