River of No Return: Graph Percolation Embeddings for Efficient Knowledge
Graph Reasoning
- URL: http://arxiv.org/abs/2305.09974v1
- Date: Wed, 17 May 2023 06:13:28 GMT
- Title: River of No Return: Graph Percolation Embeddings for Efficient Knowledge
Graph Reasoning
- Authors: Kai Wang and Siqiang Luo and Dan Lin
- Abstract summary: We study Graph Neural Networks (GNNs)-based embedding techniques for knowledge graph (KG) reasoning.
For the first time, we link the path redundancy issue in the state-of-the-art KG reasoning models based on path encoding and message passing to the transformation error in model training.
We propose an efficient Graph Percolation Process motivated by the percolation model in Fluid Mechanics, and design a lightweight GNN-based KG reasoning framework called Graph Percolation Embeddings (GraPE).
- Score: 16.143817898963785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study Graph Neural Networks (GNNs)-based embedding techniques for
knowledge graph (KG) reasoning. For the first time, we link the path redundancy
issue in the state-of-the-art KG reasoning models based on path encoding and
message passing to the transformation error in model training, which brings us
new theoretical insights into KG reasoning, as well as high efficacy in
practice. On the theoretical side, we analyze the entropy of transformation
error in KG paths and point out query-specific redundant paths causing entropy
increases. These findings guide us to maintain the shortest paths and remove
redundant paths for minimized-entropy message passing. To achieve this goal, on
the practical side, we propose an efficient Graph Percolation Process motivated
by the percolation model in Fluid Mechanics, and design a lightweight GNN-based
KG reasoning framework called Graph Percolation Embeddings (GraPE). GraPE
outperforms previous state-of-the-art methods in both transductive and
inductive reasoning tasks while requiring fewer training parameters and less
inference time.
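The core mechanism in the abstract, keeping only shortest paths so that message passing never revisits earlier layers, can be illustrated with a short sketch. This is a minimal toy under our own assumptions, not the authors' implementation; the function name percolate and the graph representation are hypothetical.

```python
# Minimal sketch of shortest-path-only message passing, in the spirit of
# GraPE's "percolation" idea: BFS from the query entity assigns each node a
# depth, and a directed edge (u, v) carries a message only when it advances
# depth by exactly one, i.e. it lies on some shortest path from the source.
# Illustrative toy, not the authors' implementation.
from collections import defaultdict, deque

def percolate(edges, source, num_layers):
    """edges: iterable of (u, v) pairs; returns per-node message counts."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    # BFS depth = shortest-path distance from the query entity.
    depth = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)

    # "Percolation": messages flow only along depth-increasing edges, so
    # redundant (non-shortest) paths never feed back into earlier layers.
    msg = defaultdict(int)
    msg[source] = 1
    for layer in range(num_layers):
        for u, v in edges:
            if depth.get(u) == layer and depth.get(v) == layer + 1:
                msg[v] += msg[u]
    return dict(msg)

# Each node ends up with the number of shortest paths reaching it.
print(percolate([("q","a"), ("q","b"), ("a","c"), ("b","c"), ("c","a")], "q", 3))
```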
Related papers
- Towards Faster Graph Partitioning via Pre-training and Inductive Inference [7.962080448174845]
We propose PR-GPT (Pre-trained & Refined Graph ParTitioning) based on a novel pre-training & refinement paradigm.
PR-GPT can ensure faster graph partitioning (GP) on large-scale graphs without significant quality degradation, compared with running a refinement method from scratch.
arXiv Detail & Related papers (2024-09-01T09:11:34Z)
- Do We Really Need Graph Convolution During Training? Light Post-Training Graph-ODE for Efficient Recommendation [34.93725892725111]
The efficiency of graph convolution networks (GCNs) in training recommender systems (RecSys) has been a persistent concern.
This paper presents a critical examination of the necessity of graph convolutions during the training phase.
We introduce an innovative alternative: the Light Post-Training Graph Ordinary-Differential-Equation (LightGODE).
arXiv Detail & Related papers (2024-07-26T17:59:32Z)
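A rough illustration of the "post-training graph ODE" idea named in the LightGODE entry above: skip graph convolution during training, then smooth the trained embeddings once by integrating a simple diffusion ODE. The equation dX/dt = (A_hat - I)X and all names here are our own stand-ins, not the paper's actual model.

```python
# Toy sketch: smooth trained embeddings post hoc by integrating a diffusion
# ODE over the graph, instead of using graph convolutions during training.
import numpy as np

def post_training_ode(X, A_hat, T=1.0, steps=20):
    """X: (n, d) trained embeddings; A_hat: (n, n) normalized adjacency."""
    dt = T / steps
    for _ in range(steps):            # forward Euler integration
        X = X + dt * (A_hat @ X - X)  # drift pulls nodes toward neighbors
    return X

n, d = 4, 8
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency
X = np.random.default_rng(0).normal(size=(n, d))
print(post_training_ode(X, A_hat).shape)   # (4, 8)
```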
- Revealing Decurve Flows for Generalized Graph Propagation [108.80758541147418]
This study addresses the limitations of the traditional analysis of message passing, central to graph learning, by defining generalized propagation with directed and weighted graphs.
We include a preliminary exploration of learned propagation patterns in datasets, a first in the field.
arXiv Detail & Related papers (2024-02-13T14:13:17Z)
- Rethinking Dimensional Rationale in Graph Contrastive Learning from Causal Perspective [15.162584339143239]
Graph contrastive learning is a general learning paradigm excelling at capturing invariant information from diverse perturbations in graphs.
Recent works focus on exploring the structural rationale from graphs, thereby increasing the discriminability of the invariant information.
We propose the dimensional rationale-aware graph contrastive learning approach, which introduces a learnable dimensional rationale acquiring network and a redundancy reduction constraint.
arXiv Detail & Related papers (2023-12-16T10:05:18Z)
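The "redundancy reduction constraint" in the entry above is only named, not specified; one common instantiation (in the style of Barlow Twins) penalizes off-diagonal cross-correlation between embedding dimensions of two graph views. A hypothetical sketch, not the paper's exact loss:

```python
# Redundancy reduction: align matching dimensions of two views while
# decorrelating the rest, so each dimension carries non-redundant information.
import numpy as np

def redundancy_reduction_loss(z1, z2, lam=5e-3):
    """z1, z2: (n, d) embeddings of two augmented views of the same graph."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)   # standardize per dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / len(z1)                       # (d, d) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()     # align matching dimensions
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 16))
print(redundancy_reduction_loss(z, z + 0.1 * rng.normal(size=z.shape)))
```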
- Graph Condensation for Inductive Node Representation Learning [59.76374128436873]
We propose mapping-aware graph condensation (MCond).
MCond integrates new nodes into the synthetic graph for inductive representation learning.
On the Reddit dataset, MCond achieves up to 121.5x inference speedup and 55.9x reduction in storage requirements.
arXiv Detail & Related papers (2023-07-29T12:11:14Z)
- Exploring & Exploiting High-Order Graph Structure for Sparse Knowledge Graph Completion [20.45256490854869]
We present a novel framework, LR-GCN, that is able to automatically capture valuable long-range dependencies among entities.
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
arXiv Detail & Related papers (2023-06-29T15:35:34Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
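DEGREE's decomposition idea can be illustrated in the linear case: when propagation contains no nonlinearity, the contribution of any subset of input nodes is recovered exactly by propagating a masked copy of the features. A toy sketch under that simplifying assumption, not the authors' algorithm:

```python
# For purely linear propagation, the prediction decomposes additively over
# groups of input nodes, so per-group contributions can be tracked exactly.
import numpy as np

def propagate(X, A_hat, layers=2):
    for _ in range(layers):
        X = A_hat @ X              # linear message passing
    return X

A = np.array([[0,1,1],[1,0,0],[1,0,0]], dtype=float)
A_hat = A / A.sum(1, keepdims=True)
X = np.eye(3)

full = propagate(X, A_hat)
part = propagate(X * np.array([[1],[0],[0]]), A_hat)  # only node 0's features
rest = propagate(X * np.array([[0],[1],[1]]), A_hat)  # everything else
assert np.allclose(full, part + rest)  # contributions sum to the prediction
print(part)  # node 0's exact contribution to every output embedding
```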
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike lottery ticket hypothesis (LTH)-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
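One generic way to realize "gradual pruning during training with no re-training", as the CGP entry above describes, is a sparsity schedule that ramps up across training steps while the smallest-magnitude weights are masked in place. The cubic schedule and names below are our own stand-ins, not CGP itself:

```python
# Gradual magnitude pruning: target sparsity grows over training, and the
# smallest-magnitude weights are masked in place at each checkpoint.
import numpy as np

def sparsity_at(step, total_steps, final_sparsity=0.9):
    """Cubic ramp from 0 to final_sparsity over the course of training."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def prune_mask(W, sparsity):
    """Mask the smallest-magnitude entries of W to hit the target sparsity."""
    k = int(sparsity * W.size)
    if k == 0:
        return np.ones_like(W, dtype=bool)
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.abs(W) > thresh

W = np.random.default_rng(0).normal(size=(64, 64))
for step in (0, 500, 1000):
    m = prune_mask(W, sparsity_at(step, 1000))
    print(step, round(1 - m.mean(), 3))   # achieved sparsity grows with step
```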
- Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling [83.77955213766896]
Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data.
To address their scalability issue, graph topology sampling has been proposed to reduce the memory and computational cost of training GCNs.
This paper provides the first theoretical justification of graph topology sampling in training (up to) three-layer GCNs.
arXiv Detail & Related papers (2022-07-07T21:25:55Z)
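A minimal picture of the graph topology sampling discussed above: per training step, keep at most a fixed number of random neighbors per node, bounding the memory and compute of each GCN layer. An illustrative sketch, not the paper's estimator:

```python
# Subsample the topology each training iteration: at most `fanout` random
# neighbors per node, so per-layer cost is bounded regardless of degree.
import random

def sample_topology(adj, fanout, rng=random):
    """adj: {node: [neighbors]}; returns a subsampled adjacency dict."""
    sampled = {}
    for u, nbrs in adj.items():
        if len(nbrs) <= fanout:
            sampled[u] = list(nbrs)
        else:
            sampled[u] = rng.sample(nbrs, fanout)
    return sampled

adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(sample_topology(adj, fanout=2))  # node 0 keeps only 2 of 4 neighbors
```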
- RelWalk A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
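The RelWalk entry mentions a scoring function for the strength of relation R between head h and tail t; the paper derives its own function from the latent-variable analysis, which is not reproduced here. As a generic stand-in, below is the common bilinear form score(h, R, t) = h^T M_R t used by KGE models such as RESCAL:

```python
# Generic bilinear KGE score: higher value = stronger (h, R, t) fit.
import numpy as np

def bilinear_score(h, M_R, t):
    """h, t: (d,) entity embeddings; M_R: (d, d) relation matrix."""
    return float(h @ M_R @ t)

rng = np.random.default_rng(0)
d = 16
h, t = rng.normal(size=d), rng.normal(size=d)
M_R = rng.normal(size=(d, d))
print(bilinear_score(h, M_R, t))
```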
This list is automatically generated from the titles and abstracts of the papers in this site.