r-GAT: Relational Graph Attention Network for Multi-Relational Graphs
- URL: http://arxiv.org/abs/2109.05922v1
- Date: Mon, 13 Sep 2021 12:43:00 GMT
- Title: r-GAT: Relational Graph Attention Network for Multi-Relational Graphs
- Authors: Meiqi Chen, Yuan Zhang, Xiaoyu Kou, Yuntao Li, Yan Zhang
- Abstract summary: Graph Attention Network (GAT) focuses on modelling simple undirected and single relational graph data only.
We propose r-GAT, a relational graph attention network to learn multi-channel entity representations.
Experiments on link prediction and entity classification tasks show that our r-GAT can model multi-relational graphs effectively.
- Score: 8.529080554172692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Attention Network (GAT) focuses on modelling simple undirected and
single relational graph data only. This limits its ability to deal with more
general and complex multi-relational graphs that contain entities with directed
links of different labels (e.g., knowledge graphs). Therefore, directly
applying GAT on multi-relational graphs leads to sub-optimal solutions. To
tackle this issue, we propose r-GAT, a relational graph attention network to
learn multi-channel entity representations. Specifically, each channel
corresponds to a latent semantic aspect of an entity. This enables us to
aggregate neighborhood information for the current aspect using relation
features. We further propose a query-aware attention mechanism for subsequent
tasks to select useful aspects. Extensive experiments on link prediction and
entity classification tasks show that our r-GAT can model multi-relational
graphs effectively. Also, we show the interpretability of our approach by case
study.
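The abstract's core idea (multi-channel entity embeddings, per-channel neighborhood aggregation scored with relation features, and a query-aware attention to pick useful aspects) can be sketched minimally in numpy. This is an illustrative reading of the abstract, not the paper's implementation: the dimensions, the dot-product scoring, and the element-wise message `entity * relation` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 5 entities, 3 relations,
# K = 4 channels (latent semantic aspects), d = 8 features per channel.
num_entities, num_relations, K, d = 5, 3, 4, 8

entity = rng.normal(size=(num_entities, K, d))   # multi-channel entity embeddings
relation = rng.normal(size=(num_relations, d))   # relation features
triples = [(0, 0, 1), (0, 1, 2), (3, 2, 0), (4, 0, 0)]  # (head, relation, tail)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(entity, relation, triples):
    """Per-channel attention over incoming edges, scored with relation features."""
    out = entity.copy()
    for t in range(num_entities):
        edges = [(head, r) for (head, r, tail) in triples if tail == t]
        if not edges:
            continue
        # Score each neighbor per channel: dot(head's channel embedding, relation feature).
        scores = np.stack([entity[head] @ relation[r] for head, r in edges])  # (E, K)
        alpha = softmax(scores, axis=0)                                       # attention per channel
        msgs = np.stack([entity[head] * relation[r] for head, r in edges])    # (E, K, d)
        out[t] = (alpha[:, :, None] * msgs).sum(axis=0)
    return out

def query_aware_readout(channels, query):
    """Weight the K aspect channels by their relevance to a query vector."""
    beta = softmax(channels @ query)   # (K,) attention over aspects
    return beta @ channels             # (d,) query-specific entity representation

h = aggregate(entity, relation, triples)
query = rng.normal(size=d)
rep = query_aware_readout(h[0], query)
print(rep.shape)  # (8,)
```

The query-aware readout is what lets downstream tasks (e.g., link prediction with different query relations) emphasize different semantic aspects of the same entity.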
Related papers
- Relating-Up: Advancing Graph Neural Networks through Inter-Graph Relationships [17.978546172777342]
Graph Neural Networks (GNNs) have excelled in learning from graph-structured data.
Despite their successes, GNNs are limited by neglecting the context of relationships across graphs.
We introduce Relating-Up, a plug-and-play module that enhances GNNs by exploiting inter-graph relationships.
arXiv Detail & Related papers (2024-05-07T02:16:54Z)
- MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Learning correspondences aims to find correct correspondences from the initial correspondence set with an uneven correspondence distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z)
- Learnable Graph Matching: A Practical Paradigm for Data Association [74.28753343714858]
We propose a general learnable graph matching method to address these issues.
Our method achieves state-of-the-art performance on several MOT datasets.
For image matching, our method outperforms state-of-the-art methods on a popular indoor dataset, ScanNet.
arXiv Detail & Related papers (2023-03-27T17:39:00Z)
- LUKE-Graph: A Transformer-based Approach with Gated Relational Graph Attention for Cloze-style Reading Comprehension [13.173307471333619]
We propose the LUKE-Graph, a model that builds a heterogeneous graph based on the intuitive relationships between entities in a document.
We then use the Gated Relational Graph Attention (RGAT) to fuse the graph's reasoning information and the contextual representation encoded by the pre-trained LUKE model.
Experimental results demonstrate that the LUKE-Graph achieves state-of-the-art performance with commonsense reasoning.
arXiv Detail & Related papers (2023-03-12T14:31:44Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Towards Consistency and Complementarity: A Multiview Graph Information Bottleneck Approach [25.40829979251883]
How to model and integrate shared (i.e. consistency) and view-specific (i.e. complementarity) information is a key issue in multiview graph analysis.
We propose a novel Multiview Variational Graph Information Bottleneck (MVGIB) principle to maximize the agreement for common representations and the disagreement for view-specific representations.
arXiv Detail & Related papers (2022-10-11T13:51:34Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- Inter-domain Multi-relational Link Prediction [19.094154079752123]
When related graphs coexist, it is of great benefit to build a larger graph by integrating the smaller ones.
The integration requires predicting hidden relational connections between entities belonging to different graphs.
We propose a new approach to tackle the inter-domain link prediction problem by softly aligning the entity distributions between different domains.
arXiv Detail & Related papers (2021-06-11T05:10:31Z)
- Link Prediction on N-ary Relational Facts: A Graph-based Approach [18.01071110085996]
Link prediction on knowledge graphs (KGs) is a key research topic.
This paper considers link prediction upon n-ary relational facts and proposes a graph-based approach to this task.
arXiv Detail & Related papers (2021-05-18T12:40:35Z)
- Jointly Cross- and Self-Modal Graph Attention Network for Query-Based Moment Localization [77.21951145754065]
We propose a novel Cross- and Self-Modal Graph Attention Network (CSMGAN) that recasts this task as a process of iterative message passing over a joint graph.
Our CSMGAN is able to effectively capture high-order interactions between the two modalities, thus enabling more precise localization.
arXiv Detail & Related papers (2020-08-04T08:25:24Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.