Message Intercommunication for Inductive Relation Reasoning
- URL: http://arxiv.org/abs/2305.14074v1
- Date: Tue, 23 May 2023 13:51:46 GMT
- Title: Message Intercommunication for Inductive Relation Reasoning
- Authors: Ke Liang, Lingyuan Meng, Sihang Zhou, Siwei Wang, Wenxuan Tu, Yue Liu,
Meng Liu, Xinwang Liu
- Abstract summary: We develop a novel inductive relation reasoning model called MINES.
We introduce a Message Intercommunication mechanism on the Neighbor-Enhanced Subgraph.
Our experiments show that MINES outperforms existing state-of-the-art models.
- Score: 49.731293143079455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inductive relation reasoning for knowledge graphs, aiming to infer missing
links between brand-new entities, has drawn increasing attention. The models
developed based on Graph Inductive Learning, called GraIL-based models, have
shown promising potential for this task. However, the uni-directional
message-passing mechanism hinders such models from exploiting hidden mutual
relations between entities in directed graphs. Besides, the enclosing subgraph
extraction in most GraIL-based models restricts the model from extracting
enough discriminative information for reasoning. Consequently, the expressive
ability of these models is limited. To address the problems, we propose a novel
GraIL-based inductive relation reasoning model, termed MINES, by introducing a
Message Intercommunication mechanism on the Neighbor-Enhanced Subgraph.
Concretely, the message intercommunication mechanism is designed to capture the
omitted hidden mutual information. It introduces bi-directed information
interactions between connected entities by inserting an undirected/bi-directed
GCN layer between uni-directed RGCN layers. Moreover, inspired by the success
of involving more neighbors in other graph-based tasks, we extend the
neighborhood area beyond the enclosing subgraph to enhance the information
collection for inductive relation reasoning. Extensive experiments on twelve
inductive benchmark datasets demonstrate that our MINES outperforms existing
state-of-the-art models, and show the effectiveness of our intercommunication
mechanism and reasoning on the neighbor-enhanced subgraph.
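To make the design concrete, below is a minimal sketch of the intercommunication mechanism in PyTorch Geometric: an undirected GCN layer inserted between uni-directed RGCN layers so that connected entities exchange messages in both directions. Layer sizes, depth, and activations are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of the message intercommunication idea, using PyTorch
# Geometric. Assumptions: layer sizes, depth, and activations are
# illustrative; this is not the authors' released implementation.
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv, GCNConv
from torch_geometric.utils import to_undirected

class IntercommBlock(torch.nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rgcn_in = RGCNConv(dim, dim, num_relations)   # uni-directed, relation-aware
        self.gcn_mid = GCNConv(dim, dim)                   # bi-directed intercommunication
        self.rgcn_out = RGCNConv(dim, dim, num_relations)  # uni-directed, relation-aware

    def forward(self, x, edge_index, edge_type):
        # 1) directed relational message passing
        h = F.relu(self.rgcn_in(x, edge_index, edge_type))
        # 2) an undirected GCN layer inserted between the RGCN layers lets
        #    connected entities exchange messages in both directions
        h = F.relu(self.gcn_mid(h, to_undirected(edge_index)))
        # 3) directed relational message passing again
        return self.rgcn_out(h, edge_index, edge_type)
```

The neighbor-enhanced subgraph can be sketched in the same hedged spirit. GraIL-based models score a candidate triple on the enclosing subgraph, i.e., the intersection of the two target entities' k-hop neighborhoods; a neighbor-enhanced variant keeps nearby nodes beyond that intersection. The union rule below is one plausible enlargement, assumed for illustration.

```python
# A hedged sketch contrasting GraIL's enclosing subgraph (the intersection
# of the target entities' k-hop neighborhoods) with a neighbor-enhanced
# variant; the union-based enlargement rule is an illustrative assumption.
from torch_geometric.utils import k_hop_subgraph

def extract_nodes(head, tail, edge_index, k=2, neighbor_enhanced=True):
    head_nodes, _, _, _ = k_hop_subgraph(head, k, edge_index)
    tail_nodes, _, _, _ = k_hop_subgraph(tail, k, edge_index)
    head_set, tail_set = set(head_nodes.tolist()), set(tail_nodes.tolist())
    if neighbor_enhanced:
        nodes = head_set | tail_set      # keep neighbors beyond the enclosing subgraph
    else:
        nodes = head_set & tail_set      # GraIL-style enclosing subgraph
    return sorted(nodes | {head, tail})  # always retain the target pair
```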
Related papers
- Graph-Augmented Relation Extraction Model with LLMs-Generated Support Document [7.0421339410165045]
This study introduces a novel approach to sentence-level relation extraction (RE).
It integrates Graph Neural Networks (GNNs) with Large Language Models (LLMs) to generate contextually enriched support documents.
Our experiments, conducted on the CrossRE dataset, demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-30T20:48:34Z)
- Introducing Diminutive Causal Structure into Graph Representation Learning [19.132025125620274]
We introduce a novel method that enables Graph Neural Networks (GNNs) to glean insights from specialized diminutive causal structures.
Our method specifically extracts causal knowledge from the model representation of these diminutive causal structures.
arXiv Detail & Related papers (2024-06-13T00:18:20Z)
- Revealing Decurve Flows for Generalized Graph Propagation [108.80758541147418]
This study addresses the limitations of the traditional analysis of message passing, central to graph learning, by defining generalized propagation with directed and weighted graphs.
We include a preliminary exploration of learned propagation patterns in datasets, a first in the field.
arXiv Detail & Related papers (2024-02-13T14:13:17Z)
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose the Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- Extending Transductive Knowledge Graph Embedding Models for Inductive Logical Relational Inference [0.5439020425819]
This work bridges the gap between traditional transductive knowledge graph embedding approaches and more recent inductive relation prediction models.
We introduce a generalized form of harmonic extension, which leverages representations learned through transductive embedding methods to infer representations of new entities introduced at inference time, as in the inductive setting.
In experiments on a number of large-scale knowledge graph embedding benchmarks, we find that this approach for extending the functionality of transductive knowledge graph embedding models is competitive with, and in some scenarios outperforms, several state-of-the-art models derived explicitly for such inductive tasks (a minimal sketch of harmonic extension follows this entry).
arXiv Detail & Related papers (2023-09-07T15:24:18Z)
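The harmonic extension idea admits a compact illustration: a new entity's representation is taken to be the iterated average of its already-embedded neighbors, a discrete harmonic interpolation with the transductive embeddings held fixed as boundary values. The sketch below is a minimal NumPy version under those assumptions, not the paper's generalized formulation.

```python
# A hedged sketch of harmonic extension with plain NumPy: a new entity's
# embedding is iteratively set to the average of its neighbors' embeddings,
# with trained (transductive) embeddings held fixed as boundary values.
# The mean aggregation and fixed iteration count are illustrative
# assumptions, not the paper's generalized formulation.
import numpy as np

def harmonic_extension(known_emb, neighbors, new_nodes, iters=50):
    """known_emb: {node: vector} from a transductive model;
    neighbors: {node: list of adjacent nodes}; new_nodes: unseen entities."""
    dim = len(next(iter(known_emb.values())))
    emb = dict(known_emb)
    for v in new_nodes:
        emb[v] = np.zeros(dim)           # initialize unseen entities
    for _ in range(iters):               # Gauss-Seidel-style relaxation
        for v in new_nodes:
            nbrs = [emb[u] for u in neighbors[v] if u in emb]
            if nbrs:
                emb[v] = np.mean(nbrs, axis=0)  # harmonic: neighbor average
    return emb
```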
- Knowledge Graph Completion with Counterfactual Augmentation [23.20561746976504]
We introduce a counterfactual question: "would the relation still exist if the neighborhood of entities became different from observation?"
With a carefully designed instantiation of a causal model on the knowledge graph, we generate the counterfactual relations to answer the question.
We incorporate the created counterfactual relations with the GNN-based framework on KGs to augment their learning of entity pair representations.
arXiv Detail & Related papers (2023-02-25T14:08:15Z)
- Causally-guided Regularization of Graph Attention Improves Generalizability [69.09877209676266]
We introduce CAR, a general-purpose regularization framework for graph attention networks.
CAR aligns the attention mechanism with the causal effects of active interventions on graph connectivity.
For social media network-sized graphs, a CAR-guided graph rewiring approach could allow us to combine the scalability of graph convolutional methods with the higher performance of graph attention.
arXiv Detail & Related papers (2022-10-20T01:29:10Z)
- Entity-Conditioned Question Generation for Robust Attention Distribution in Neural Information Retrieval [51.53892300802014]
We show that supervised neural information retrieval models are prone to learning sparse attention patterns over passage tokens.
Using a novel targeted synthetic data generation method, we teach neural IR to attend more uniformly and robustly to all entities in a given passage.
arXiv Detail & Related papers (2022-04-24T22:36:48Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder; a minimal sketch of this style of objective follows this entry.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
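As a hedged illustration of a GMI-style objective, the sketch below trains a one-layer graph encoder by scoring each node's input features against its own representation (positives) versus a shuffled pairing (negatives) with a Jensen-Shannon-style loss. The dot-product discriminator and shuffle-based negatives are simplifying assumptions, not the paper's exact feature-wise estimator.

```python
# Minimal sketch of mutual-information-style training for a graph encoder,
# in the spirit of GMI. The one-layer encoder, dot-product discriminator,
# and shuffle-based negatives are simplifying assumptions, not the paper's
# exact feature-wise estimator.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, hid_dim)
        self.proj = torch.nn.Linear(in_dim, hid_dim)  # maps inputs into the hidden space

    def forward(self, x, edge_index):
        return self.conv(x, edge_index)

    def mi_loss(self, x, h):
        # Positives pair each node's input with its own representation;
        # negatives pair it with a randomly shuffled node's representation.
        pos = (self.proj(x) * h).sum(dim=-1)
        neg = (self.proj(x[torch.randperm(x.size(0))]) * h).sum(dim=-1)
        # Jensen-Shannon-style binary objective (as in DGI/DIM-family models)
        return (F.softplus(-pos) + F.softplus(neg)).mean()
```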