Adaptive Graph Diffusion Networks with Hop-wise Attention
- URL: http://arxiv.org/abs/2012.15024v1
- Date: Wed, 30 Dec 2020 03:43:04 GMT
- Title: Adaptive Graph Diffusion Networks with Hop-wise Attention
- Authors: Chuxiong Sun, Guoshi Wu
- Abstract summary: We propose Adaptive Graph Diffusion Networks with Hop-wise Attention (AGDNs-HA) to incorporate deeper neighborhood information.
We show that our proposed method achieves significant improvements on a standard semi-supervised node classification benchmark.
- Score: 1.2183405753834562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have received much attention in recent
years and have achieved state-of-the-art performance in many fields. Deeper GNNs
can theoretically capture information from deeper neighborhoods, but they often
suffer from over-fitting and over-smoothing. To incorporate deeper information
while preserving reasonable complexity and generalization ability, we propose
Adaptive Graph Diffusion Networks with Hop-wise Attention (AGDNs-HA). We stack
multi-hop neighborhood aggregations of different orders into a single layer and
integrate them with hop-wise attention, which is learnable and adaptive for each
node. Experimental results on a standard semi-supervised node classification
benchmark show that our proposed method achieves significant improvements.
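The layer described in the abstract can be sketched in a few lines. This is a minimal reading of the idea, not the authors' code: the normalization, the shared transform `W`, and the hop-attention vector `a` are assumed parameterizations chosen for illustration.

```python
import numpy as np

def hopwise_attention_layer(A_hat, X, W, a, K=3):
    """Sketch of one AGDN-HA layer (illustrative, assumed interface):
    stack K-hop diffused features and combine them per node with a
    learnable softmax attention over hops.

    A_hat : (N, N) normalized adjacency with self-loops
    X     : (N, F) node features
    W     : (F, D) shared feature transform (assumed)
    a     : (D,)   hop-attention vector (assumed parameterization)
    """
    H = X @ W                                # shared linear transform
    hops = [H]                               # hop 0: the node's own features
    for _ in range(K):
        hops.append(A_hat @ hops[-1])        # k-th order neighborhood aggregation
    stacked = np.stack(hops, axis=1)         # (N, K+1, D): all hops in one layer
    scores = stacked @ a                     # (N, K+1): one score per node per hop
    scores -= scores.max(axis=1, keepdims=True)
    att = np.exp(scores)
    att /= att.sum(axis=1, keepdims=True)    # per-node softmax over hops
    return (att[..., None] * stacked).sum(axis=1)  # (N, D) combined output
```

Because the attention is computed from each node's own hop stack, every node can weight shallow versus deep neighborhoods differently, which is what makes the diffusion "adaptive".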
Related papers
- DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts [70.21017141742763]
Graph neural networks (GNNs) are gaining popularity for processing graph-structured data.
Existing methods generally use a fixed number of GNN layers to generate representations for all graphs.
We propose the depth adaptive mixture of expert (DA-MoE) method, which incorporates two main improvements to GNN.
arXiv Detail & Related papers (2024-11-05T11:46:27Z)
- Learning Personalized Scoping for Graph Neural Networks under Heterophily [3.475704621679017]
Heterophilous graphs, where dissimilar nodes tend to connect, pose a challenge for graph neural networks (GNNs)
We formalize personalized scoping as a separate scope classification problem that overcomes GNN overfitting in node classification.
We propose Adaptive Scope (AS), a lightweight approach that only participates in GNN inference.
arXiv Detail & Related papers (2024-09-11T04:13:39Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to augment training graph structures with techniques such as graph augmentation.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
arXiv Detail & Related papers (2023-04-06T01:09:36Z)
- From Node Interaction to Hop Interaction: New Effective and Scalable Graph Learning Paradigm [25.959580336262004]
We propose a novel hop interaction paradigm to address the limitations of existing node-interaction schemes simultaneously.
The core idea is to convert the interaction target among nodes to pre-processed multi-hop features inside each node.
We conduct extensive experiments on 12 benchmark datasets in a wide range of domains, scales, and smoothness of graphs.
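The preprocessing step implied by this summary can be sketched as follows. This is an illustrative reading, not the paper's implementation; the function name and interface are assumptions.

```python
import numpy as np

def precompute_hop_features(A_hat, X, K):
    """Sketch of the hop-interaction preprocessing idea (assumed interface):
    compute multi-hop diffused features once, so later training needs no
    graph access and interacts only with each node's own hop stack.

    A_hat : (N, N) normalized adjacency
    X     : (N, F) node features
    """
    feats = [X]                              # hop 0: raw features
    for _ in range(K):
        feats.append(A_hat @ feats[-1])      # k-hop diffusion, done once offline
    return np.stack(feats, axis=1)           # (N, K+1, F): per-node hop stack
```

After this one-time pass, a model can treat each node as a short sequence of K+1 feature vectors, which is what moves the interaction target from between nodes to inside each node.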
arXiv Detail & Related papers (2022-11-21T11:29:48Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework AdaGNN with a well-smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Enhance Information Propagation for Graph Neural Network by Heterogeneous Aggregations [7.3136594018091134]
Graph neural networks extend the success of deep learning to graph-structured data.
We propose to enhance information propagation among GNN layers by combining heterogeneous aggregations.
We empirically validate the effectiveness of HAG-Net on a number of graph classification benchmarks.
arXiv Detail & Related papers (2021-02-08T08:57:56Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
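The sort-then-convolve aggregation described above can be sketched for a single target node. This is an illustrative toy version, not the authors' implementation: the dot-product scoring and the single per-position weight vector are simplifying assumptions (Node2Seq uses full 1D CNNs).

```python
import numpy as np

def node2seq_aggregate(h_target, h_neighbors, conv_w):
    """Sketch of the Node2Seq idea (assumed interface): score neighbors
    against the target, sort them by score into a sequence, then apply
    position-specific (1D-convolution-style) weights so each rank gets
    its own explicit filter weight.

    h_target    : (D,)   target node embedding
    h_neighbors : (M, D) neighbor embeddings
    conv_w      : (M,)   per-position filter weights (kernel size M assumed)
    """
    scores = h_neighbors @ h_target          # attention scores per neighbor
    order = np.argsort(-scores)              # sort neighbors by score, descending
    seq = h_neighbors[order]                 # (M, D): ordered "sequence" of neighbors
    return conv_w @ seq                      # explicit per-position aggregation weights
```

Sorting gives every rank a fixed position, which is what lets convolution weights act as explicit, trainable per-neighbor weights rather than the shared weight of mean or sum aggregation.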
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.