Hop-Aware Dimension Optimization for Graph Neural Networks
- URL: http://arxiv.org/abs/2105.14490v1
- Date: Sun, 30 May 2021 10:12:56 GMT
- Title: Hop-Aware Dimension Optimization for Graph Neural Networks
- Authors: Ailing Zeng, Minhao Liu, Zhiwei Liu, Ruiyuan Gao, Qiang Xu
- Abstract summary: We propose a simple yet effective ladder-style GNN architecture, namely LADDER-GNN.
Specifically, we separate messages from different hops and assign different dimensions to them before concatenating them to obtain the node representation.
Results show that the proposed simple hop-aware representation learning solution can achieve state-of-the-art performance on most datasets.
- Score: 11.341455005324104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Graph Neural Networks (GNNs), the embedding of each node is obtained by
aggregating information with its direct and indirect neighbors. As the messages
passed among nodes contain both information and noise, the critical issue in
GNN representation learning is how to retrieve information effectively while
suppressing noise. Generally speaking, interactions with distant nodes usually
introduce more noise for a particular node than those with close nodes.
However, in most existing works, the messages being passed among nodes are
mingled together, which is inefficient from a communication perspective. Mixing
the information from clean sources (low-order neighbors) and noisy sources
(high-order neighbors) makes discriminative feature extraction challenging.
Motivated by the above, we propose a simple yet effective ladder-style GNN
architecture, namely LADDER-GNN. Specifically, we separate messages from
different hops and assign different dimensions to them before concatenating
them to obtain the node representation. Such disentangled representations
facilitate extracting information from messages passed from different hops, and
their corresponding dimensions are determined with a reinforcement
learning-based neural architecture search strategy. The resulting hop-aware
representations generally contain more dimensions for low-order neighbors and
fewer dimensions for high-order neighbors, leading to a ladder-style
aggregation scheme. We verify the proposed LADDER-GNN on several
semi-supervised node classification datasets. Experimental results show that
the proposed simple hop-aware representation learning solution can achieve
state-of-the-art performance on most datasets.
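As a concrete illustration of the ladder-style scheme described above, here is a minimal PyTorch sketch: messages are aggregated hop by hop, each hop is projected to its own (shrinking) dimension, and the projections are concatenated into the node representation. The per-hop dimensions below are illustrative placeholders; the paper selects them with a reinforcement-learning-based neural architecture search, which is not reproduced here.

```python
# A minimal sketch of ladder-style, hop-aware aggregation, assuming fixed
# per-hop dimensions. The paper chooses these with an RL-based NAS.
import torch
import torch.nn as nn


class LadderAggregator(nn.Module):
    def __init__(self, in_dim, hop_dims=(64, 32, 16)):
        # hop_dims[k] is the width reserved for hop k+1: fewer dimensions
        # go to higher-order (noisier) neighbors, giving the "ladder" shape.
        super().__init__()
        self.hop_proj = nn.ModuleList([nn.Linear(in_dim, d) for d in hop_dims])

    def forward(self, x, adj_norm):
        # x: (N, in_dim) node features; adj_norm: (N, N) normalized adjacency.
        h, parts = x, []
        for proj in self.hop_proj:
            h = adj_norm @ h              # aggregate one more hop
            parts.append(proj(h))         # hop-specific dimension
        return torch.cat(parts, dim=-1)   # disentangled node representation


# Usage: 5 nodes with 8-dim features and a row-normalized random adjacency.
x = torch.randn(5, 8)
adj = torch.rand(5, 5)
model = LadderAggregator(in_dim=8)
print(model(x, adj / adj.sum(dim=1, keepdim=True)).shape)  # torch.Size([5, 112])
```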
Related papers
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
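A rough sketch of the all-pair propagation idea in the NodeFormer entry above, assuming a plain Gumbel-Softmax over pairwise attention logits. For clarity this version materializes the full N x N matrix; the paper's kernelized operator avoids that quadratic cost and is not reproduced here.

```python
# A sketch of all-pair propagation with a Gumbel-Softmax over attention
# logits. This version is O(N^2); NodeFormer's kernelization is not shown.
import torch
import torch.nn.functional as F

def gumbel_softmax_propagate(x, w_q, w_k, tau=0.5):
    # x: (N, d) node features; w_q, w_k: (d, d) projections (assumed names).
    q, k = x @ w_q, x @ w_k
    logits = q @ k.t() / q.shape[-1] ** 0.5               # all-pair scores
    u = torch.rand_like(logits).clamp_min(1e-10)
    gumbel = -torch.log(-torch.log(u))                    # Gumbel noise
    weights = F.softmax((logits + gumbel) / tau, dim=-1)  # differentiable sampling
    return weights @ x                                    # signals flow between any pair

x = torch.randn(6, 16)
out = gumbel_softmax_propagate(x, torch.randn(16, 16), torch.randn(16, 16))
print(out.shape)  # torch.Size([6, 16])
```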
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- NDGGNET-A Node Independent Gate based Graph Neural Networks [6.155450481110693]
For nodes with sparse connectivity, it is difficult to obtain enough information through a single GNN layer.
In this work, we define a novel framework that allows a standard GNN model to accommodate more layers.
Experimental results show that our proposed model can effectively increase the model depth and perform well on several datasets.
arXiv Detail & Related papers (2022-05-11T08:51:04Z)
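One plausible reading of the node-independent gate in the NDGGNET entry above, sketched in PyTorch: each node learns a scalar gate controlling how much aggregated neighborhood signal it admits per layer, which helps deeper stacks avoid washing out node features. The exact gate form in the paper may differ.

```python
# One plausible form of a node-independent gate; the paper's exact gate
# may differ. A scalar per node decides how much aggregated signal to admit.
import torch
import torch.nn as nn

class GatedGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, 1)        # per-node scalar gate

    def forward(self, h, adj_norm):
        agg = self.lin(adj_norm @ h)         # neighborhood aggregation
        g = torch.sigmoid(self.gate(h))      # gate in (0, 1), one per node
        return g * agg + (1 - g) * h         # gated residual update

h = torch.randn(5, 32)
layer = GatedGNNLayer(32)
print(layer(h, torch.eye(5)).shape)  # torch.Size([5, 32])
```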
- Graph Ordering Attention Networks [22.468776559433614]
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data.
We introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood.
The GOAT layer demonstrates increased performance in modeling graph metrics that capture complex information.
arXiv Detail & Related papers (2022-04-11T18:13:19Z)
- Graph Pointer Neural Networks [11.656981519694218]
We present Graph Pointer Neural Networks (GPNN) to tackle the challenges mentioned above.
We leverage a pointer network to select the most relevant nodes from large multi-hop neighborhoods.
GPNN significantly improves classification performance over state-of-the-art methods.
arXiv Detail & Related papers (2021-10-03T10:18:25Z)
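A simplified stand-in for the selection step in the GPNN entry above: score a target node's multi-hop neighbors and aggregate only the top-k most relevant ones. The paper uses a pointer network (a sequence model) for this selection; plain dot-product scoring here is an assumption for brevity.

```python
# Pointer-style neighbor selection, approximated with dot-product scoring
# and top-k filtering; the paper's actual pointer network is not reproduced.
import torch

def select_and_aggregate(target, neighbors, k=4):
    # target: (d,) feature of the node being updated.
    # neighbors: (M, d) features gathered from its multi-hop neighborhood.
    scores = neighbors @ target                            # relevance per neighbor
    topk = scores.topk(min(k, neighbors.shape[0])).indices
    return neighbors[topk].mean(dim=0)                     # aggregate relevant nodes only

target = torch.randn(16)
multi_hop = torch.randn(20, 16)   # e.g. all nodes within a few hops
print(select_and_aggregate(target, multi_hop).shape)  # torch.Size([16])
```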
- Reasoning Graph Networks for Kinship Verification: from Star-shaped to Hierarchical [85.0376670244522]
We investigate the problem of facial kinship verification by learning hierarchical reasoning graph networks.
We develop a Star-shaped Reasoning Graph Network (S-RGN) and a Hierarchical Reasoning Graph Network (H-RGN) to exploit more powerful and flexible reasoning capacity.
arXiv Detail & Related papers (2021-09-06T03:16:56Z)
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
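A small sketch of the Node2Seq mechanism described above: order a node's neighbors by attention score, then apply a 1D convolution over the ordered sequence so each position receives its own trainable aggregation weight. Layer sizes and the single-layer setup are illustrative, not the paper's configuration.

```python
# Attention-sorted neighbors followed by a 1D convolution, so each sorted
# position gets its own trainable aggregation weight. Sizes are illustrative.
import torch
import torch.nn as nn

class Node2SeqLayer(nn.Module):
    def __init__(self, dim, num_neighbors):
        super().__init__()
        self.att = nn.Linear(dim, 1)
        # one kernel spanning the whole ordered neighborhood -> explicit
        # position-specific weights
        self.conv = nn.Conv1d(dim, dim, kernel_size=num_neighbors)

    def forward(self, neighbors):
        # neighbors: (M, dim) features of one node's neighbors.
        order = self.att(neighbors).squeeze(-1).argsort(descending=True)
        seq = neighbors[order].t().unsqueeze(0)   # (1, dim, M), sorted by attention
        return self.conv(seq).squeeze()           # aggregated (dim,) vector

layer = Node2SeqLayer(dim=16, num_neighbors=8)
print(layer(torch.randn(8, 16)).shape)  # torch.Size([16])
```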
- NCGNN: Node-level Capsule Graph Neural Network [45.23653314235767]
The Node-level Capsule Graph Neural Network (NCGNN) represents nodes as groups of capsules.
A novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation.
NCGNN can well address the over-smoothing issue and outperforms the state of the art by producing better node embeddings for classification.
arXiv Detail & Related papers (2020-12-07T06:46:17Z)
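For reference, a compact sketch of capsule aggregation with dynamic routing, the generic mechanism the NCGNN entry above adapts to the node level. Shapes and the iteration count are illustrative; the paper's graph-specific routing details are not reproduced.

```python
# Generic capsule aggregation with dynamic routing; NCGNN's node-level
# specifics are not reproduced here.
import torch

def squash(s, dim=-1):
    # standard capsule nonlinearity: shrink short vectors, keep direction
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1 + n2)) * s / (n2.sqrt() + 1e-8)

def dynamic_routing(u_hat, iters=3):
    # u_hat: (num_in, num_out, d) predictions from input to output capsules.
    b = torch.zeros(u_hat.shape[:2])              # routing logits
    for _ in range(iters):
        c = b.softmax(dim=1).unsqueeze(-1)        # coupling coefficients
        v = squash((c * u_hat).sum(dim=0))        # candidate output capsules
        b = b + (u_hat * v).sum(dim=-1)           # reward agreement
    return v

u_hat = torch.randn(10, 4, 8)   # 10 neighbor capsules -> 4 output capsules
print(dynamic_routing(u_hat).shape)  # torch.Size([4, 8])
```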
- GAIN: Graph Attention & Interaction Network for Inductive Semi-Supervised Learning over Large-scale Graphs [18.23435958000212]
Graph Neural Networks (GNNs) have led to state-of-the-art performance on a variety of machine learning tasks such as recommendation, node classification and link prediction.
Most existing GNN models exploit a single type of aggregator to aggregate information from neighboring nodes.
We propose a novel graph neural network architecture, Graph Attention & Interaction Network (GAIN), for inductive learning on graphs.
arXiv Detail & Related papers (2020-11-03T00:20:24Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that combines the sampling procedure and message passing of GNNs into a single learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
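A toy sketch of the Policy-GNN idea above: a learned policy picks, per node, how many aggregation hops to apply. The paper trains this policy with deep reinforcement learning; only the inference-time use of such a policy is shown here, and the linear policy head is an assumption.

```python
# Per-node aggregation depth chosen by a policy head; the RL training loop
# from the paper is not shown, and the linear head is an assumption.
import torch
import torch.nn as nn

class PerNodeHopPolicy(nn.Module):
    def __init__(self, dim, max_hops=3):
        super().__init__()
        self.policy = nn.Linear(dim, max_hops)   # logits over hop counts 1..max_hops
        self.max_hops = max_hops

    def forward(self, x, adj_norm):
        hops = self.policy(x).argmax(dim=-1) + 1  # chosen hop count per node
        h, out = x, x.clone()
        for k in range(1, self.max_hops + 1):
            h = adj_norm @ h                      # one more aggregation round
            out[hops == k] = h[hops == k]         # nodes choosing k hops stop here
        return out

x = torch.randn(5, 8)
print(PerNodeHopPolicy(8)(x, torch.eye(5)).shape)  # torch.Size([5, 8])
```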
This list is automatically generated from the titles and abstracts of the papers on this site.