Graph Triple Attention Network: A Decoupled Perspective
- URL: http://arxiv.org/abs/2408.07654v1
- Date: Wed, 14 Aug 2024 16:29:07 GMT
- Title: Graph Triple Attention Network: A Decoupled Perspective
- Authors: Xiaotang Wang, Yun Zhu, Haizhou Shi, Yongchao Liu, Chuntao Hong,
- Abstract summary: Graph Transformers face two primary challenges: multi-view chaos and local-global chaos.
We propose a high-level decoupled perspective of GTs, breaking them down into three components and two interaction levels.
We design a decoupled graph triple attention network named DeGTA, which separately computes multi-view attentions and adaptively integrates multi-view local and global information.
- Score: 8.958483386270638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Transformers (GTs) have recently achieved significant success in the graph domain by effectively capturing both long-range dependencies and graph inductive biases. However, these methods face two primary challenges: (1) multi-view chaos, which results from coupling multi-view information (positional, structural, attribute), thereby impeding flexible usage and the interpretability of the propagation process. (2) local-global chaos, which arises from coupling local message passing with global attention, leading to issues of overfitting and over-globalizing. To address these challenges, we propose a high-level decoupled perspective of GTs, breaking them down into three components and two interaction levels: positional attention, structural attention, and attribute attention, alongside local and global interaction. Based on this decoupled perspective, we design a decoupled graph triple attention network named DeGTA, which separately computes multi-view attentions and adaptively integrates multi-view local and global information. This approach offers three key advantages: enhanced interpretability, flexible design, and adaptive integration of local and global information. Through extensive experiments, DeGTA achieves state-of-the-art performance across various datasets and tasks, including node classification and graph classification. Comprehensive ablation studies demonstrate that decoupling is essential for improving performance and enhancing interpretability. Our code is available at: https://github.com/wangxiaotang0906/DeGTA
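To make the decoupled design concrete, below is a minimal PyTorch sketch of one triple-attention block as we read the abstract: a separate attention per view (positional, structural, attribute), each computed at a local and a global level, with a learned gate performing the adaptive integration. The module names, the sigmoid gate, and the weight sharing between the local and global passes are illustrative assumptions, not the authors' implementation (see the repository linked in the abstract for that).

```python
import torch
import torch.nn as nn

class TripleAttentionBlock(nn.Module):
    """Sketch: per-view attention plus gated local/global integration."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        # One attention module per decoupled view.
        self.view_attn = nn.ModuleDict({
            v: nn.MultiheadAttention(dim, heads, batch_first=True)
            for v in ("positional", "structural", "attribute")
        })
        # Per-node gate mixing local vs. global aggregation.
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, views: dict, nonlocal_mask: torch.Tensor):
        # views: name -> (batch, nodes, dim); nonlocal_mask is True where
        # local attention must NOT attend (i.e., non-neighbor pairs).
        fused = 0.0
        for name, x in views.items():
            local_out, _ = self.view_attn[name](x, x, x, attn_mask=nonlocal_mask)
            global_out, _ = self.view_attn[name](x, x, x)  # unrestricted
            g = self.gate(x)                               # per-node, in (0, 1)
            fused = fused + g * local_out + (1 - g) * global_out
        return fused / len(views)

# Toy usage: 8 nodes, 32-dim encodings, each node sees itself and one neighbor.
views = {v: torch.randn(1, 8, 32) for v in ("positional", "structural", "attribute")}
allow = torch.eye(8, dtype=torch.bool) | torch.eye(8, dtype=torch.bool).roll(1, 1)
out = TripleAttentionBlock(32)(views, ~allow)
print(out.shape)  # torch.Size([1, 8, 32])
```

Keeping the three views in separate attention modules is what makes the propagation inspectable per view, which is the interpretability claim the abstract makes.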
Related papers
- SGFormer: Single-Layer Graph Transformers with Approximation-Free Linear Complexity [74.51827323742506]
We evaluate the necessity of adopting multi-layer attentions in Transformers on graphs.
We show that multi-layer propagation can be reduced to one-layer propagation, with the same capability for representation learning.
It suggests a new technical path for building powerful and efficient Transformers on graphs.
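As a rough illustration of why a single layer with linear cost is plausible, the sketch below computes softmax-free global attention in O(n·d²) by aggregating keys and values once. This is the standard linear-attention trick with a positive feature map, used here as a stand-in; whether it matches SGFormer's exact propagation rule is an assumption.

```python
import torch
import torch.nn.functional as F

def one_layer_linear_attention(q, k, v):
    # q, k, v: (n, d) per-node queries/keys/values over the whole graph.
    # Positive feature map keeps the normalizer well-defined.
    q = F.elu(q) + 1.0
    k = F.elu(k) + 1.0
    kv = k.t() @ v                              # (d, d): aggregate K/V once
    denom = q @ k.sum(dim=0, keepdim=True).t()  # (n, 1), strictly positive
    return (q @ kv) / denom                     # O(n·d²) instead of O(n²·d)

x = torch.randn(1000, 64)                       # 1000 nodes, 64-dim features
out = one_layer_linear_attention(x, x, x)       # one global pass, no stacking
```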
arXiv Detail & Related papers (2024-09-13T17:37:34Z)
- Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and a 16.8% performance gain on ogbn-products and snap-patents, and we also scale LargeGT to ogbn-papers100M with a 5.9% performance improvement.
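The sampling-plus-local-attention pattern can be sketched as follows; the fixed-size sampler and the single-query attention here are our simplifications, not LargeGT's actual architecture.

```python
import torch
import torch.nn as nn

def sample_neighbors(adj_list, k):
    # adj_list: list of LongTensors of neighbor ids; sample k per node.
    out = []
    for nbrs in adj_list:
        idx = nbrs[torch.randint(len(nbrs), (k,))]  # sample with replacement
        out.append(idx)
    return torch.stack(out)                          # (n, k)

n, d, k = 6, 16, 4
x = torch.randn(n, d)
adj_list = [torch.arange(n)[torch.randperm(n)[:3]] for _ in range(n)]
nbr = sample_neighbors(adj_list, k)                  # (n, k) neighbor ids
attn = nn.MultiheadAttention(d, num_heads=2, batch_first=True)
out, _ = attn(x.unsqueeze(1), x[nbr], x[nbr])        # query: node, keys: sample
print(out.squeeze(1).shape)                          # torch.Size([6, 16])
```

Sampling a fixed-size neighborhood is what bounds the attention cost per node, which is the point of coupling the sampler with local attention.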
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- Transitivity-Preserving Graph Representation Learning for Bridging Local Connectivity and Role-based Similarity [2.5252594834159643]
We propose Unified Graph Transformer Networks (UGT) that integrate local and global structural information into fixed-length vector representations.
First, UGT learns local structure by identifying the local substructures and aggregating features of the $k$-hop neighborhoods of each node.
Third, UGT learns unified representations through self-attention, encoding structural distance and $p$-step transition probability between node pairs.
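A minimal way to fold structural distance into self-attention, in the spirit of UGT's third step, is an additive bias on the attention scores. The linear penalty below is an assumption, and the paper's transition-probability encoding is not shown.

```python
import torch

def structure_biased_attention(x, dist, w=-1.0):
    # x: (n, d) node features; dist: (n, n) hop distances between node pairs.
    scores = x @ x.t() / x.shape[-1] ** 0.5   # plain dot-product attention
    scores = scores + w * dist                # penalize structurally far pairs
    return torch.softmax(scores, dim=-1) @ x

x = torch.randn(5, 8)
dist = torch.randint(0, 4, (5, 5)).float()    # stand-in hop-distance matrix
out = structure_biased_attention(x, dist)     # (5, 8)
```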
arXiv Detail & Related papers (2023-08-18T12:49:57Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
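A hedged sketch of the dual-channel idea: the same features are propagated over the incomplete input graph and over a kNN graph built from feature similarity, then merged. The kNN construction, the number of steps, and the plain average are illustrative choices, not D$2$PT's actual design.

```python
import torch
import torch.nn.functional as F

def dual_channel_propagate(x, adj, k=3, steps=2):
    # Channel 2's graph: connect each node to its k most similar nodes.
    sim = F.normalize(x, dim=-1) @ F.normalize(x, dim=-1).t()
    topk = sim.topk(k, dim=-1).indices
    knn = torch.zeros_like(sim).scatter_(1, topk, 1.0)   # global semantic graph
    h_local, h_global = x, x
    for _ in range(steps):                               # long-range diffusion
        h_local = adj @ h_local / adj.sum(-1, keepdim=True).clamp(min=1)
        h_global = knn @ h_global / k
    return 0.5 * (h_local + h_global)

x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.6).float()                   # incomplete structure
out = dual_channel_propagate(x, adj)
```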
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Interweaved Graph and Attention Network for 3D Human Pose Estimation [15.699524854176644]
We propose a novel Interweaved Graph and Attention Network (IGANet)
IGANet allows bidirectional communication between graph convolutional networks (GCNs) and attention modules.
We introduce an IGA module, where the attention branch receives local information from the GCNs and the GCNs are injected with global information from the attention branch.
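One way to picture the bidirectional exchange is a step in which each branch consumes the other's output. The layer choices and the additive coupling below are our assumptions, not the paper's exact IGA module.

```python
import torch
import torch.nn as nn

class IGAStep(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=2, batch_first=True)
        self.gcn_w = nn.Linear(d, d)

    def forward(self, h_gcn, h_attn, adj):
        # Attention branch is conditioned on the local (GCN) features...
        z = h_attn + h_gcn
        g, _ = self.attn(z, z, z)
        # ...while the GCN branch aggregates the globally mixed features.
        l = torch.relu(self.gcn_w(adj @ (h_gcn + h_attn)))
        return l, g

adj = torch.softmax(torch.rand(1, 17, 17), -1)   # 17 joints, row-normalized
h = torch.randn(1, 17, 32)
l, g = IGAStep(32)(h, h, adj)                    # next local/global features
```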
arXiv Detail & Related papers (2023-04-27T09:21:15Z)
- DigNet: Digging Clues from Local-Global Interactive Graph for Aspect-level Sentiment Classification [0.685316573653194]
In aspect-level sentiment classification (ASC), state-of-the-art models encode either the syntax graph or the relation graph.
We design a novel local-global interactive graph that marries their advantages by stitching the two graphs together via interactive edges.
We propose a novel neural network termed DigNet, whose core module is a stack of local-global interactive layers.
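The stitching can be pictured as a block adjacency in which the syntax and relation graphs sit on the diagonal and identity "interactive edges" connect the two copies of each word node. This layout is a hypothetical reading of the abstract, not DigNet's confirmed construction.

```python
import torch

def stitch_graphs(a_syn, a_rel):
    n = a_syn.shape[0]
    inter = torch.eye(n)                 # interactive edges: node i <-> node i'
    top = torch.cat([a_syn, inter], 1)
    bot = torch.cat([inter, a_rel], 1)
    return torch.cat([top, bot], 0)      # (2n, 2n) local-global interactive graph

a_syn = (torch.rand(4, 4) > 0.5).float()  # syntax-graph adjacency
a_rel = (torch.rand(4, 4) > 0.5).float()  # relation-graph adjacency
big = stitch_graphs(a_syn, a_rel)         # message passing can now cross graphs
```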
arXiv Detail & Related papers (2022-01-04T05:34:02Z)
- Graph Representation Learning via Contrasting Cluster Assignments [57.87743170674533]
We propose a novel unsupervised graph representation model that contrasts cluster assignments, called GRCCA.
It is motivated by making good use of local and global information jointly, combining clustering algorithms with contrastive learning.
GRCCA is strongly competitive in most tasks.
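A common way to contrast cluster assignments between two augmented views is a swapped-prediction loss; the sketch below follows that pattern, and GRCCA's actual clustering step and objective may differ.

```python
import torch
import torch.nn.functional as F

def cluster_contrast_loss(z1, z2, prototypes, temp=0.1):
    # z1, z2: (n, d) node embeddings from two graph augmentations.
    p1 = F.softmax(z1 @ prototypes.t() / temp, -1)   # soft assignments, view 1
    p2 = F.softmax(z2 @ prototypes.t() / temp, -1)   # soft assignments, view 2
    # Each view predicts the other's cluster assignment.
    return -0.5 * ((p2.detach() * p1.log()).sum(-1)
                   + (p1.detach() * p2.log()).sum(-1)).mean()

z1, z2 = torch.randn(10, 16), torch.randn(10, 16)
protos = torch.randn(4, 16)                          # 4 cluster prototypes
loss = cluster_contrast_loss(z1, z2, protos)
```

Contrasting assignments rather than raw embeddings is what brings the global (cluster-level) structure into an otherwise local contrastive objective.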
arXiv Detail & Related papers (2021-12-15T07:28:58Z)
- Multi-Level Graph Convolutional Network with Automatic Graph Learning for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network with Automatic Graph Learning (MGCN-AGL) for hyperspectral image (HSI) classification.
By employing an attention mechanism to characterize the importance among spatially neighboring regions, the most relevant information can be adaptively incorporated into decision-making.
Our MGCN-AGL encodes the long-range dependencies among image regions based on the expressive representations produced at the local level.
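Attention-weighted aggregation over neighboring regions can be sketched in a few lines; the single score vector below is an illustrative stand-in for the paper's learned graph construction.

```python
import torch

def region_attention_aggregate(x, neighbors, a):
    # x: (r, d) region features; neighbors: (r, k) region ids; a: (d,) scorer.
    nbr_feats = x[neighbors]                          # (r, k, d)
    scores = torch.softmax(nbr_feats @ a, dim=-1)     # importance per neighbor
    return (scores.unsqueeze(-1) * nbr_feats).sum(1)  # adaptive aggregation

x = torch.randn(7, 12)
neighbors = torch.randint(0, 7, (7, 3))               # 3 neighbors per region
out = region_attention_aggregate(x, neighbors, torch.randn(12))
```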
arXiv Detail & Related papers (2020-09-19T09:26:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.