Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network
- URL: http://arxiv.org/abs/2207.07503v1
- Date: Thu, 14 Jul 2022 10:16:56 GMT
- Title: Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network
- Authors: Weijian Chen, Yixin Cao, Fuli Feng, Xiangnan He, Yongdong Zhang
- Abstract summary: This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It combines high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
- Score: 111.67744771462873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs (KGs) are becoming increasingly essential infrastructures in
many applications while suffering from incompleteness issues. The KG completion
task (KGC) automatically predicts missing facts based on an incomplete KG.
However, existing methods perform unsatisfactorily in real-world scenarios. On
the one hand, their performance degrades dramatically as KGs become
increasingly sparse. On the other hand, the inference procedure behind each
prediction is an untrustworthy black box.
This paper proposes a novel explainable model for sparse KGC, namely HoGRN,
which composites high-order reasoning into a graph convolutional network. It
not only improves generalization to mitigate the information-insufficiency
issue but also provides interpretability while maintaining the model's
effectiveness and efficiency. Two main components are seamlessly integrated
for joint optimization. First, the high-order reasoning component learns
high-quality relation representations by capturing endogenous correlations
among relations; this can reflect logical rules that justify a broader range
of missing facts. Second, the entity-updating component leverages a
weight-free Graph Convolutional Network (GCN) to efficiently model KG
structures with interpretability. Unlike conventional methods, we conduct
entity aggregation and design composition-based attention in the relational
space without additional parameters. This lightweight design makes HoGRN
better suited to sparse settings. For evaluation, we have conducted extensive
experiments: on several sparse KGs, HoGRN achieves impressive improvements
(a 9% MRR gain on average). Further ablation and case studies demonstrate the
effectiveness of the main components. Our code will be released upon
acceptance.
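To make the entity-updating idea concrete, below is a minimal, illustrative sketch of a weight-free aggregation layer with composition-based attention in the relational space. It is not the authors' released implementation: the element-wise composition, the dot-product attention, and all names are assumptions introduced here purely for illustration.

```python
# Minimal sketch (assumptions only): weight-free message passing where each
# message is a composition of a neighbor entity and its relation, attended by
# a parameter-free score. No learned weight matrices are used anywhere.
import torch
import torch.nn.functional as F


def weight_free_update(ent_emb, rel_emb, triples):
    """ent_emb: [num_ent, d]; rel_emb: [num_rel, d];
    triples: LongTensor [num_edges, 3] holding (head, relation, tail) indices."""
    h, r, t = triples[:, 0], triples[:, 1], triples[:, 2]

    # Compose neighbor entity and relation in the relational space
    # (element-wise product, a CompGCN-style operator -- an assumption here).
    msg = ent_emb[h] * rel_emb[r]                       # [num_edges, d]

    # Parameter-free attention: score each incoming message against the
    # current representation of its target entity via a dot product.
    score = (msg * ent_emb[t]).sum(-1)
    score = score - score.max()                         # numerical stability
    exp = score.exp()
    denom = torch.zeros(ent_emb.size(0)).index_add_(0, t, exp)
    alpha = exp / (denom[t] + 1e-9)                     # softmax per target

    # Aggregate attended messages into each target entity (residual + re-scale).
    out = torch.zeros_like(ent_emb).index_add_(0, t, alpha.unsqueeze(-1) * msg)
    return F.normalize(out + ent_emb, dim=-1)


# Toy usage with random embeddings and a tiny triple set.
ent = torch.randn(5, 8)
rel = torch.randn(3, 8)
trip = torch.tensor([[0, 1, 2], [3, 0, 2], [4, 2, 1]])
new_ent = weight_free_update(ent, rel, trip)
```

The point of the sketch is only that messages are built from neighbor-entity and relation embeddings and attended without any learned transformation matrices, which is what keeps such a layer lightweight for sparse KGs.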
Related papers
- EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph
Completion [54.12709176438264]
Commonsense knowledge graphs (CSKGs) utilize free-form text to represent named entities, short phrases, and events as their nodes.
Current methods leverage semantic similarities to increase graph density, but the semantic plausibility of the nodes and their relations is under-explored.
We propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class.
arXiv Detail & Related papers (2024-02-15T02:27:23Z) - Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z) - Exploring & Exploiting High-Order Graph Structure for Sparse Knowledge
Graph Completion [20.45256490854869]
We present a novel framework, LR-GCN, that can automatically capture valuable long-range dependencies among entities.
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
arXiv Detail & Related papers (2023-06-29T15:35:34Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph
Completion [69.55700751102376]
Few-shot knowledge graph completion (FKGC) aims to predict missing facts for unseen relations with few-shot associated facts.
Existing FKGC methods are based on metric learning or meta-learning, which often suffer from the out-of-distribution and overfitting problems.
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC).
arXiv Detail & Related papers (2023-04-17T11:42:28Z) - GreenKGC: A Lightweight Knowledge Graph Completion Method [32.528770408502396]
GreenKGC aims to discover missing relationships between entities in knowledge graphs.
It consists of three modules: representation learning, feature pruning, and decision learning.
In low dimensions, GreenKGC can outperform SOTA methods on most datasets.
arXiv Detail & Related papers (2022-08-19T03:33:45Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Simple and Effective Relation-based Embedding Propagation for Knowledge
Representation Learning [15.881121633396832]
We propose the Relation-based Embedding Propagation (REP) method to adapt pretrained graph embeddings with context (a minimal sketch of this propagation idea, under our own assumptions, appears after this list).
We show that REP brings about 10% relative improvement to triplet-based embedding methods on OGBL-WikiKG2.
It takes 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.
arXiv Detail & Related papers (2022-05-13T06:02:13Z) - Link-Intensive Alignment for Incomplete Knowledge Graphs [28.213397255810936]
In this work, we address the problem of aligning incomplete KGs with representation learning.
Our framework exploits two feature channels: transitivity-based and proximity-based.
The two feature channels are jointly learned to exchange important features between the input KGs.
Also, we develop a missing links detector that discovers and recovers the missing links during the training process.
arXiv Detail & Related papers (2021-12-17T00:41:28Z)
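Since the REP entry above only names the idea, here is a minimal, hedged sketch of what propagating relation context into pretrained embeddings could look like, assuming a TransE-style composition (head plus relation) and simple mean aggregation; none of the names or design choices below come from the REP paper itself.

```python
# Illustrative sketch (assumptions only): smooth pretrained entity embeddings
# with relation context gathered from neighboring triples.
import torch


def relation_based_propagation(ent_emb, rel_emb, triples, alpha=0.5, steps=1):
    """ent_emb: [num_ent, d] pretrained entity embeddings; rel_emb: [num_rel, d];
    triples: LongTensor [num_edges, 3] holding (head, relation, tail) indices."""
    h, r, t = triples[:, 0], triples[:, 1], triples[:, 2]
    ones = torch.ones(t.size(0))
    for _ in range(steps):
        # Each tail entity receives a TransE-style estimate from its neighbors:
        # head embedding translated by the relation embedding (an assumption).
        msg = ent_emb[h] + rel_emb[r]                              # [num_edges, d]
        agg = torch.zeros_like(ent_emb).index_add_(0, t, msg)
        deg = torch.zeros(ent_emb.size(0)).index_add_(0, t, ones).clamp(min=1)
        # Interpolate between the pretrained embedding and the propagated context.
        ent_emb = (1 - alpha) * ent_emb + alpha * agg / deg.unsqueeze(-1)
    return ent_emb
```

Because the propagation is a post-processing step over frozen pretrained embeddings, it adds no trainable parameters, which is consistent with the summary's emphasis on speed relative to heavier methods.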