Representation Learning on Heterophilic Graph with Directional
Neighborhood Attention
- URL: http://arxiv.org/abs/2403.01475v1
- Date: Sun, 3 Mar 2024 10:59:16 GMT
- Title: Representation Learning on Heterophilic Graph with Directional
Neighborhood Attention
- Authors: Qincheng Lu, Jiaqi Zhu, Sitao Luan, Xiao-Wen Chang
- Abstract summary: The Graph Attention Network (GAT) is one of the most popular Graph Neural Network (GNN) architectures.
GAT lacks the ability to capture long-range and global graph information, leading to unsatisfactory performance on some datasets.
We propose the Directional Graph Attention Network (DGAT), which combines feature-based attention with global directional information extracted from the graph topology.
- Score: 8.493802098034255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Graph Attention Network (GAT) is one of the most popular Graph Neural
Network (GNN) architectures; it employs an attention mechanism to learn edge
weights and has demonstrated promising performance in various applications.
However, since it only incorporates information from the immediate
neighborhood, GAT cannot capture long-range and global graph information,
leading to unsatisfactory performance on some datasets, particularly on
heterophilic graphs. To address this limitation, we propose the Directional
Graph Attention Network (DGAT), which combines feature-based attention with
global directional information extracted from the graph topology. To this end,
a new class of Laplacian matrices is proposed which provably reduces the
diffusion distance between nodes. Based on the new Laplacian, topology-guided
neighbour-pruning and edge-adding mechanisms are proposed to remove noisy
edges and capture helpful long-range neighborhood information. In addition, a
global directional attention is designed to enable topology-aware information
propagation. The superiority of the proposed DGAT over the baseline GAT is
verified through experiments on real-world benchmarks and synthetic datasets;
it also outperforms state-of-the-art (SOTA) models on 6 out of 7 real-world
benchmark datasets.
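The abstract pairs GAT's feature-based attention with directional information distilled from the graph topology. A minimal sketch of that general idea, assuming the topological signal has already been reduced to a per-edge bias (the `topo_bias` argument below is a placeholder; the paper derives its signal from a new class of Laplacians), might look as follows. This is an illustration, not the authors' DGAT:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectionalGATLayer(nn.Module):
    """GAT-style attention plus an assumed per-edge topological bias."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, edge_index, topo_bias):
        # x: (N, in_dim); edge_index: (2, E); topo_bias: (E,)
        h = self.W(x)
        src, dst = edge_index
        # Standard GAT logits from concatenated source/destination features
        e = F.leaky_relu(self.a(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
        e = e + topo_bias  # inject the (assumed) directional information
        # Softmax over the incoming edges of each destination node
        alpha = torch.exp(e - e.max())
        denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, alpha)
        alpha = alpha / denom[dst].clamp(min=1e-12)
        # Attention-weighted aggregation of neighbour messages
        return torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
```

Setting `topo_bias` to zeros recovers plain GAT attention, which isolates the contribution of the added directional term.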
Related papers
- Neighbor Overlay-Induced Graph Attention Network [5.792501481702088]
Graph neural networks (GNNs) have garnered significant attention due to their ability to represent graph data.
This study proposes a neighbor overlay-induced graph attention network (NO-GAT) built on two key ideas.
Empirical studies on graph benchmark datasets indicate that the proposed NO-GAT consistently outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-08-16T15:01:28Z)
- Learning Topological Representations with Bidirectional Graph Attention Network for Solving Job Shop Scheduling Problem [27.904195034688257]
Existing learning-based methods for solving job shop scheduling problems (JSSP) usually use off-the-shelf GNN models tailored to undirected graphs and neglect the rich and meaningful topological structures of disjunctive graphs (DGs).
This paper proposes the topology-aware bidirectional graph attention network (TBGAT) to embed the DG for solving JSSP in a local search framework.
arXiv Detail & Related papers (2024-02-27T15:33:20Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
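The dual-channel design can be illustrated with personalized-PageRank-style diffusion (in the spirit of APPNP) run over two adjacency matrices and averaged; this is a hedged sketch under that assumption, not the actual D$^2$PT:

```python
import numpy as np

def dual_channel_propagation(x, adj_input, adj_global, alpha=0.1, steps=10):
    """Diffuse features over both the input graph and a global
    similarity graph; all names and defaults here are illustrative."""
    def diffuse(adj):
        deg = np.maximum(adj.sum(axis=1), 1.0)
        p = adj / deg[:, None]              # row-normalized transition matrix
        h = x.copy()
        for _ in range(steps):              # long-range propagation
            h = (1 - alpha) * p @ h + alpha * x
        return h
    return 0.5 * (diffuse(adj_input) + diffuse(adj_global))
```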
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Topological Relational Learning on Graphs [2.4692806302088868]
Graph neural networks (GNNs) have emerged as a powerful tool for graph classification and representation learning.
We propose a novel topological relational inference (TRI) which allows for integrating higher-order graph information to GNNs.
We show that the new TRI-GNN outperforms all 14 state-of-the-art baselines on 6 out of 7 graphs and exhibits higher robustness to perturbations.
arXiv Detail & Related papers (2021-10-29T04:03:27Z)
- Graph Networks with Spectral Message Passing [1.0742675209112622]
We introduce the Spectral Graph Network, which applies message passing to both the spatial and spectral domains.
Our results show that the Spectral GN promotes efficient training, reaching high performance with fewer training iterations despite having more parameters.
arXiv Detail & Related papers (2020-12-31T21:33:17Z)
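As a rough illustration of passing messages in both domains, one can pair spatial mean aggregation with a low-pass filter built from Laplacian eigenvectors; the published Spectral GN is more elaborate than this toy:

```python
import numpy as np

def spectral_spatial_pass(x, adj, k=8):
    """Combine a spatial mean aggregation with a spectral low-pass
    filter over the k lowest-frequency Laplacian components."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                    # combinatorial Laplacian
    _, v = np.linalg.eigh(lap)                  # graph Fourier basis
    u = v[:, :k]                                # low-frequency eigenvectors
    spectral = u @ (u.T @ x)                    # low-pass filtered features
    spatial = adj @ x / np.maximum(deg, 1.0)[:, None]  # mean over neighbours
    return spectral + spatial
```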
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
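The described inner loop grows a feature perturbation by gradient ascent while the model trains on the averaged loss; a simplified sketch (signatures are illustrative, and a real GNN forward pass would also take the graph structure):

```python
import torch

def flag_step(model, x, y, loss_fn, optimizer, step_size=1e-3, m=3):
    """One FLAG-style training step with m inner ascent steps."""
    optimizer.zero_grad()
    perturb = torch.empty_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_()
    loss = loss_fn(model(x + perturb), y) / m
    for _ in range(m - 1):
        loss.backward()
        # Ascent step: move the perturbation along its gradient sign
        perturb.data = perturb.data + step_size * perturb.grad.sign()
        perturb.grad.zero_()
        loss = loss_fn(model(x + perturb), y) / m
    loss.backward()
    optimizer.step()  # model gradients accumulated across all m sub-losses
```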
- Graph Representation Learning Network via Adaptive Sampling [4.996520403438455]
Graph Attention Network (GAT) and GraphSAGE are neural network architectures that operate on graph-structured data.
One challenge raised by GraphSAGE is how to smartly combine neighbour features based on graph structure.
We propose a new architecture to address these issues that is more efficient and is capable of incorporating different edge type information.
arXiv Detail & Related papers (2020-06-08T14:36:20Z)
- Graph Highway Networks [77.38665506495553]
Graph Convolution Networks (GCN) are widely used in learning graph representations due to their effectiveness and efficiency.
They suffer from the notorious over-smoothing problem, in which the learned representations converge to nearly identical vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
arXiv Detail & Related papers (2020-04-09T16:26:43Z)
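The gating idea can be sketched as a highway-style mix between the smoothed (aggregated) representation and the layer input, so a learned gate decides how much smoothing each feature receives; this is an illustration, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class GatedGraphLayer(nn.Module):
    """Highway-style gate between aggregated and raw node features."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x, adj_norm):
        # adj_norm: (N, N) normalized adjacency (dense for simplicity)
        smoothed = torch.relu(self.lin(adj_norm @ x))  # homogenizing GCN step
        t = torch.sigmoid(self.gate(x))                # per-feature gate in [0, 1]
        return t * smoothed + (1 - t) * x              # keep some heterogeneity
```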
- Spectral Graph Attention Network with Fast Eigen-approximation [103.93113062682633]
Spectral Graph Attention Network (SpGAT) learns representations for different frequency components using weighted filters and graph wavelet bases.
A fast approximation variant, SpGAT-Cheby, is proposed to reduce the computational cost of the eigen-decomposition.
We thoroughly evaluate the performance of SpGAT and SpGAT-Cheby in semi-supervised node classification tasks.
arXiv Detail & Related papers (2020-03-16T21:49:34Z)
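The eigen-approximation SpGAT-Cheby relies on is the standard Chebyshev trick: approximate a spectral filter by a polynomial recurrence in the rescaled Laplacian, so no eigen-decomposition is needed. A generic sketch:

```python
import numpy as np

def chebyshev_filter(x, lap_scaled, thetas):
    """Apply a Chebyshev polynomial filter sum_k thetas[k] * T_k(L) @ x,
    where lap_scaled is the rescaled Laplacian 2*L/lambda_max - I."""
    t_prev = x                              # T_0(L) x = x
    out = thetas[0] * t_prev
    if len(thetas) > 1:
        t_curr = lap_scaled @ x             # T_1(L) x = L x
        out = out + thetas[1] * t_curr
        for theta in thetas[2:]:            # T_k = 2 L T_{k-1} - T_{k-2}
            t_prev, t_curr = t_curr, 2 * (lap_scaled @ t_curr) - t_prev
            out = out + theta * t_curr
    return out
```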