Learning Rule-Induced Subgraph Representations for Inductive Relation Prediction
- URL: http://arxiv.org/abs/2408.07088v2
- Date: Tue, 20 Aug 2024 06:33:40 GMT
- Title: Learning Rule-Induced Subgraph Representations for Inductive Relation Prediction
- Authors: Tianyu Liu, Qitan Lv, Jie Wang, Shuling Yang, Hanzhu Chen
- Abstract summary: We propose a novel \textit{single-source edge-wise} GNN model to learn the \textbf{R}ule-induc\textbf{E}d \textbf{S}ubgraph represen\textbf{T}ations (\textbf{REST}).
REST is a simple and effective approach with theoretical support to learn the \textit{rule-induced subgraph representation}.
- Score: 7.265921857903076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inductive relation prediction (IRP) -- where entities can be different during training and inference -- has shown great power for completing evolving knowledge graphs. Existing works mainly focus on using graph neural networks (GNNs) to learn the representation of the subgraph induced from the target link, which can be seen as an implicit rule-mining process to measure the plausibility of the target link. However, these methods cannot differentiate the target link and other links during message passing, hence the final subgraph representation will contain irrelevant rule information to the target link, which reduces the reasoning performance and severely hinders the applications for real-world scenarios. To tackle this problem, we propose a novel \textit{single-source edge-wise} GNN model to learn the \textbf{R}ule-induc\textbf{E}d \textbf{S}ubgraph represen\textbf{T}ations (\textbf{REST}), which encodes relevant rules and eliminates irrelevant rules within the subgraph. Specifically, we propose a \textit{single-source} initialization approach to initialize edge features only for the target link, which guarantees the relevance of mined rules and target link. Then we propose several RNN-based functions for \textit{edge-wise} message passing to model the sequential property of mined rules. REST is a simple and effective approach with theoretical support to learn the \textit{rule-induced subgraph representation}. Moreover, REST does not need node labeling, which significantly accelerates the subgraph preprocessing time by up to \textbf{11.66$\times$}. Experiments on inductive relation prediction benchmarks demonstrate the effectiveness of our REST. Our code is available at https://github.com/smart-lty/REST.
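To make the abstract's two ideas concrete, here is a minimal, self-contained sketch of single-source edge initialization plus RNN-based edge-wise message passing. It is not the authors' implementation: the class name, message function, and layer sizes are simplifying assumptions; see the linked repository for the real code.

```python
import torch
import torch.nn as nn

class SingleSourceEdgeGNN(nn.Module):
    """Illustrative sketch of REST's two ideas: single-source edge
    initialization and RNN-based edge-wise message passing."""
    def __init__(self, num_relations, dim, num_layers=3):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.gru = nn.GRUCell(dim, dim)  # models the sequential nature of rules
        self.score = nn.Linear(dim, 1)
        self.num_layers = num_layers

    def forward(self, edge_index, edge_type, target_edge, num_nodes):
        # edge_index: (2, E) head/tail entity ids; edge_type: (E,) relation ids
        dim = self.rel_emb.embedding_dim
        h = torch.zeros(edge_index.size(1), dim)
        # single-source initialization: only the target link starts nonzero,
        # so every propagated state is anchored to (relevant for) the target
        h[target_edge] = self.rel_emb(edge_type[target_edge])
        heads, tails = edge_index
        for _ in range(self.num_layers):
            # edge-wise message passing: edge (u, r, v) aggregates the states
            # of edges ending at u, extending candidate rules one hop at a time
            node_state = torch.zeros(num_nodes, dim).index_add(0, tails, h)
            msg = node_state[heads] * self.rel_emb(edge_type)
            h = self.gru(msg, h)
        return self.score(h[target_edge])  # plausibility of the target link
```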
Related papers
- Inductive Link Prediction on N-ary Relational Facts via Semantic Hypergraph Reasoning [13.74104688000986]
We propose an n-ary subgraph reasoning framework for fully inductive link prediction (ILP) on n-ary relational facts.
Specifically, we introduce a novel graph structure, the n-ary semantic hypergraph, to facilitate subgraph extraction.
We also develop a subgraph aggregating network, NS-HART, to effectively mine complex semantic correlations within subgraphs.
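A toy illustration of what n-ary facts as hyperedges and hop-wise subgraph extraction over the incidence structure might look like; the data, helper names, and extraction rule are illustrative assumptions, not NS-HART itself.

```python
from collections import defaultdict

# an n-ary fact: (relation, {role: entity, ...})
facts = [
    ("educated_at", {"person": "marie_curie", "school": "u_paris", "degree": "phd"}),
    ("award", {"person": "marie_curie", "prize": "nobel_physics", "year": "1903"}),
]

incidence = defaultdict(set)  # entity -> ids of hyperedges touching it
for i, (_, roles) in enumerate(facts):
    for entity in roles.values():
        incidence[entity].add(i)

def khop_subgraph(seeds, k):
    """Collect hyperedges reachable from the seed entities within k hops."""
    frontier, edges = set(seeds), set()
    for _ in range(k):
        new_edges = {i for e in frontier for i in incidence[e]} - edges
        edges |= new_edges
        frontier = {e for i in new_edges for e in facts[i][1].values()}
    return edges

print(khop_subgraph({"marie_curie"}, k=1))  # -> {0, 1}
```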
arXiv Detail & Related papers (2025-03-26T16:09:54Z) - Enhanced Expressivity in Graph Neural Networks with Lanczos-Based Linear Constraints [7.605749412696919]
Graph Neural Networks (GNNs) excel in handling graph-structured data but often underperform in link prediction tasks.
We present a novel method to enhance the expressivity of GNNs by embedding induced subgraphs into the graph Laplacian matrix's eigenbasis.
Our method achieves 20x and 10x speedups while requiring only 5% and 10% of the data from the PubMed and OGBL-Vessel datasets, respectively.
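A minimal sketch of the core operation, with SciPy's Lanczos-based eigsh and a simple indicator projection standing in for the paper's constraint construction:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh  # Lanczos-based solver

# toy graph: a 6-node ring
n = 6
rows = np.arange(n)
cols = (rows + 1) % n
A = sp.coo_matrix((np.ones(n), (rows, cols)), shape=(n, n))
A = (A + A.T).tocsr()

# k smallest Laplacian eigenpairs via the Lanczos iteration
L = laplacian(A)
vals, vecs = eigsh(L, k=3, which="SM")

# embed an induced subgraph (nodes {0, 1, 2}) by projecting its
# indicator vector onto the truncated eigenbasis
indicator = np.zeros(n)
indicator[[0, 1, 2]] = 1.0
print(vecs.T @ indicator)  # low-dimensional subgraph embedding
```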
arXiv Detail & Related papers (2024-08-22T12:22:00Z) - Deep Generative Models for Subgraph Prediction [10.56335881963895]
This paper introduces subgraph queries as a new task for deep graph learning.
Subgraph queries jointly predict the components of a target subgraph based on evidence that is represented by an observed subgraph.
We utilize a probabilistic deep Graph Generative Model to answer subgraph queries.
arXiv Detail & Related papers (2024-08-07T19:24:02Z) - Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs [49.547988001231424]
We propose the one-shot-subgraph link prediction to achieve efficient and adaptive prediction.
The design principle is that, instead of directly acting on the whole KG, the prediction procedure is decoupled into two steps.
We achieve improved efficiency and leading performance on five large-scale benchmarks.
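A hedged sketch of the two-step decoupling, with personalized PageRank and a PPR-mass ranking standing in for the paper's learned sampler and reasoner:

```python
import networkx as nx

def one_shot_predict(G, head, top_n=5):
    # step 1: extract a single query-dependent subgraph; personalized
    # PageRank stands in for the paper's learned sampler
    ppr = nx.pagerank(G, personalization={head: 1.0})
    keep = sorted(ppr, key=ppr.get, reverse=True)[:top_n]
    sub = G.subgraph(keep)
    # step 2: reason only inside the small subgraph; here a placeholder
    # scorer ranks candidate tails by their PPR mass
    return sorted((v for v in sub if v != head), key=ppr.get, reverse=True)

G = nx.karate_club_graph()
print(one_shot_predict(G, head=0))
```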
arXiv Detail & Related papers (2024-03-15T12:00:12Z) - PipeNet: Question Answering with Semantic Pruning over Knowledge Graphs [56.5262495514563]
We propose a grounding-pruning-reasoning pipeline to prune noisy computation nodes.
We also propose a graph attention network (GAT)-based module to reason over the subgraph data.
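A rough sketch of the pruning-then-reasoning flow, assuming PyTorch Geometric's GATConv; the relevance score is a placeholder, not PipeNet's semantic pruning module:

```python
import torch
from torch_geometric.nn import GATConv
from torch_geometric.utils import subgraph

x = torch.randn(8, 16)                     # grounded concept-node features
edge_index = torch.randint(0, 8, (2, 24))  # grounded KG edges

# pruning: drop nodes whose (placeholder) relevance score is low
relevance = x.norm(dim=1)                  # stand-in for a semantic score
keep = relevance.topk(5).indices
pruned_edges, _ = subgraph(keep, edge_index, relabel_nodes=True)

# reasoning: a GAT layer over the pruned subgraph
gat = GATConv(16, 16, heads=2, concat=False)
out = gat(x[keep], pruned_edges)
print(out.shape)  # torch.Size([5, 16])
```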
arXiv Detail & Related papers (2024-01-31T01:37:33Z) - Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, \emph{node-wise} architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, \emph{edge-wise} methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
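A toy contrast of the two categories; the dot-product decoder and mean pooling are illustrative assumptions:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(100, 32)  # node-wise: embeddings are precomputed once

def node_wise_score(u, v):
    # cheap decoder over precomputed node embeddings (fast, less expressive)
    return (emb(u) * emb(v)).sum(-1)

def edge_wise_score(u, v, subgraph_nodes):
    # build a representation specific to the (u, v) pair, e.g. by pooling
    # an enclosing subgraph (mean pooling is a placeholder here)
    context = emb(subgraph_nodes).mean(0)
    return (emb(u) * emb(v) * context).sum(-1)

u, v = torch.tensor(3), torch.tensor(17)
print(node_wise_score(u, v).item())
print(edge_wise_score(u, v, torch.tensor([3, 17, 5, 9])).item())
```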
arXiv Detail & Related papers (2023-10-14T07:02:54Z) - ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP.
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
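The batch-wise objective is CLIP-style symmetric cross-entropy over in-batch pairs; a minimal sketch with stubbed encoder outputs (the encoders themselves are omitted):

```python
import torch
import torch.nn.functional as F

def clip_style_loss(text_emb, node_emb, temperature=0.07):
    t = F.normalize(text_emb, dim=-1)
    g = F.normalize(node_emb, dim=-1)
    logits = t @ g.T / temperature   # (B, B) text-node similarity matrix
    labels = torch.arange(len(t))    # the i-th text matches the i-th node
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))

text_emb = torch.randn(16, 64)  # stand-in for LM outputs
node_emb = torch.randn(16, 64)  # stand-in for GNN outputs
print(clip_style_loss(text_emb, node_emb).item())
```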
arXiv Detail & Related papers (2023-05-23T17:53:30Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines with significant margins.
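A hedged sketch of a node-correspondence distance via greedy nearest-feature matching; MatchExplainer's actual matching procedure may differ:

```python
import numpy as np

def correspondence_distance(sub_feats, counterpart_feats):
    # cost of matching each substructure node to its best counterpart node;
    # a low total cost indicates shared, explanatory structure
    d = np.linalg.norm(sub_feats[:, None] - counterpart_feats[None, :], axis=-1)
    return d.min(axis=1).sum()

sub = np.random.rand(4, 8)     # candidate explanatory substructure
other = np.random.rand(10, 8)  # counterpart instance
print(correspondence_distance(sub, other))
```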
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - Graph Neural Networks for Link Prediction with Subgraph Sketching [15.808095529382138]
We propose a novel full-graph GNN called ELPH (Efficient Link Prediction with Hashing).
It passes subgraph sketches as messages to approximate the key components of SGNNs without explicit subgraph construction.
It outperforms existing SGNN models on many standard LP benchmarks while being orders of magnitude faster.
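A toy illustration of hash-based neighborhood sketches as pairwise link features; ELPH combines MinHash with HyperLogLog, while this sketch uses a plain MinHash only:

```python
import networkx as nx

NUM_PERM = 32

def minhash(items):
    return [min(hash((p, x)) for x in items) for p in range(NUM_PERM)]

def jaccard_estimate(s1, s2):
    return sum(a == b for a, b in zip(s1, s2)) / NUM_PERM

G = nx.karate_club_graph()
# one sketching round: each node's sketch summarizes its closed neighborhood
sketch = {v: minhash(set(G[v]) | {v}) for v in G}
# pairwise link features without materializing any enclosing subgraph
u, v = 0, 33
print(jaccard_estimate(sketch[u], sketch[v]))  # estimated neighborhood overlap
```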
arXiv Detail & Related papers (2022-09-30T14:20:07Z) - Personalized Subgraph Federated Learning [56.52903162729729]
We introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs.
We propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it.
We validate our FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs.
arXiv Detail & Related papers (2022-06-21T09:02:53Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence Insertion Problem? [66.70154236519186]
Sentence insertion is a delicate but fundamental NLP problem.
Current approaches in sentence ordering, text coherence, and question answering (QA) are neither suitable for nor effective at solving it.
We propose InsertGNN, a model that represents the problem as a graph and adopts a Graph Neural Network (GNN) to learn the connections between sentences.
arXiv Detail & Related papers (2021-03-28T06:50:31Z) - Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
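A minimal sketch of the squeeze-reason-broadcast pattern as a pluggable residual block; the layer sizes and the small reasoning MLP are assumptions:

```python
import torch
import torch.nn as nn

class SqueezeReasonBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.reason = nn.Sequential(nn.Linear(channels, channels), nn.ReLU(),
                                    nn.Linear(channels, channels))

    def forward(self, x):                    # x: (B, C, H, W)
        v = x.mean(dim=(2, 3))               # squeeze the spatial map to (B, C)
        v = self.reason(v)                   # cheap reasoning on the vector
        return x + v[:, :, None, None]       # broadcast back, residual style

block = SqueezeReasonBlock(64)
print(block(torch.randn(2, 64, 8, 8)).shape)  # torch.Size([2, 64, 8, 8])
```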
arXiv Detail & Related papers (2020-11-06T12:17:01Z) - Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks.
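A compact sketch of the Correct and Smooth procedure: diffuse the base predictor's residual errors, then diffuse the corrected predictions. The normalization, propagation weights, and step counts are illustrative choices.

```python
import numpy as np

def propagate(A_hat, signal, alpha=0.8, steps=20):
    out = signal.copy()
    for _ in range(steps):
        out = alpha * A_hat @ out + (1 - alpha) * signal
    return out

def correct_and_smooth(A_hat, base_pred, y_onehot, train_idx):
    # correct: diffuse the base predictor's residual errors on train nodes
    err = np.zeros_like(base_pred)
    err[train_idx] = y_onehot[train_idx] - base_pred[train_idx]
    pred = base_pred + propagate(A_hat, err)
    # smooth: re-anchor train labels, then diffuse the final predictions
    pred[train_idx] = y_onehot[train_idx]
    return propagate(A_hat, pred)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))  # symmetric normalization D^-1/2 A D^-1/2
base = np.array([[0.6, 0.4], [0.5, 0.5], [0.3, 0.7]])
y = np.array([[1, 0], [0, 0], [0, 1]], dtype=float)
print(correct_and_smooth(A_hat, base, y, train_idx=np.array([0, 2])))
```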
arXiv Detail & Related papers (2020-10-27T02:10:52Z) - NENET: An Edge Learnable Network for Link Prediction in Scene Text [1.815512110340033]
We propose a novel Graph Neural Network (GNN) architecture that allows us to learn both node and edge features.
We demonstrate our concept on the well-known SynthText dataset, achieving top results compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-05-25T14:47:16Z) - Gossip and Attend: Context-Sensitive Graph Representation Learning [0.5493410630077189]
Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and often sparse graphs.
We propose GOAT, a context-sensitive algorithm inspired by gossip communication that applies a mutual attention mechanism over the structure of the graph alone.
arXiv Detail & Related papers (2020-03-30T18:23:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.