Node2Seq: Towards Trainable Convolutions in Graph Neural Networks
- URL: http://arxiv.org/abs/2101.01849v1
- Date: Wed, 6 Jan 2021 03:05:37 GMT
- Title: Node2Seq: Towards Trainable Convolutions in Graph Neural Networks
- Authors: Hao Yuan, Shuiwang Ji
- Abstract summary: We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
- Score: 59.378148590027735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Investigating graph feature learning has become increasingly important with the emergence of graph data in many real-world applications. Several graph neural network approaches have been proposed for node feature learning, and they generally follow a neighborhood information aggregation scheme. While great performance has been achieved, the learning of weights for different neighboring nodes remains under-explored. In this work, we propose a novel graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes. For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation. In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores. Experimental results demonstrate the effectiveness of the proposed Node2Seq layer and show that the proposed adaptive non-local information learning can improve the performance of feature learning.
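To make the sort-then-convolve idea concrete, here is a minimal sketch, assuming PyTorch: neighbors are ranked by an attention score, and the ranked sequence is passed to a 1D convolution so that each rank position receives its own trainable weight. The class name, the fixed neighbor count k, the concatenation-based attention, and the hard argsort are illustrative assumptions rather than the authors' implementation, and the adaptive non-local term is omitted.

```python
# Minimal sketch of a Node2Seq-style aggregation step (assumes PyTorch).
# Node2SeqSketch, the fixed neighbor count k, and the attention form are
# illustrative assumptions, not the authors' reference implementation.
import torch
import torch.nn as nn


class Node2SeqSketch(nn.Module):
    """Sort a fixed-size neighbor set by attention score, then aggregate the
    sorted sequence with a 1D convolution so each rank position gets its own
    trainable weight."""

    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.att = nn.Linear(2 * in_dim, 1)                    # scores a (target, neighbor) pair
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=k)  # one weight per rank position

    def forward(self, target, neighbors):
        # target: (N, d); neighbors: (N, k, d) -- k sampled/padded neighbors per node
        N, k, d = neighbors.shape
        pair = torch.cat([target.unsqueeze(1).expand(-1, k, -1), neighbors], dim=-1)
        scores = self.att(pair).squeeze(-1)                    # (N, k) attention scores
        # Hard argsort is used here for simplicity; it is not differentiable
        # w.r.t. the scores, so gradients flow only through the gathered features.
        order = scores.argsort(dim=1, descending=True)
        sorted_nbrs = torch.gather(
            neighbors, 1, order.unsqueeze(-1).expand(-1, -1, d)
        )                                                      # (N, k, d), attention-ordered
        seq = sorted_nbrs.transpose(1, 2)                      # (N, d, k) for Conv1d
        return self.conv(seq).squeeze(-1)                      # (N, out_dim)


# Usage on random data: 32 target nodes, 8 neighbors each, 16-dim features.
layer = Node2SeqSketch(in_dim=16, out_dim=32, k=8)
out = layer(torch.randn(32, 16), torch.randn(32, 8, 16))
print(out.shape)  # torch.Size([32, 32])
```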
Related papers
- GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
An innovative node relative entropy is used to measure mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
arXiv Detail & Related papers (2023-12-15T11:30:18Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator (see the sketch following this entry).
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
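To illustrate why a kernelized operator makes all-pair propagation tractable, the sketch below (assuming PyTorch) replaces explicit N-by-N softmax attention with a positive feature map so that aggregating over every node pair costs O(N). The elu+1 feature map and the function name are assumptions for illustration; NodeFormer's actual operator additionally involves Gumbel-Softmax sampling and a random-feature approximation, which are not reproduced here.

```python
# Minimal sketch of kernelized all-pair message passing (assumes PyTorch).
# Softmax attention over every node pair is approximated with a positive
# feature map, so propagation costs O(N) instead of O(N^2). The elu+1 map and
# the omission of Gumbel-Softmax sampling are simplifying assumptions.
import torch
import torch.nn.functional as F


def kernelized_all_pair_propagate(q, k, v, eps=1e-6):
    """q, k: (N, d) queries/keys; v: (N, d_v) values. Returns (N, d_v)."""
    phi_q = F.elu(q) + 1                                      # positive feature map
    phi_k = F.elu(k) + 1
    kv = phi_k.t() @ v                                        # (d, d_v): key-weighted value sum
    normalizer = phi_q @ phi_k.sum(dim=0, keepdim=True).t()   # (N, 1): per-node normalization
    return (phi_q @ kv) / (normalizer + eps)


# Usage: 1000 nodes, 64-dim queries/keys, 32-dim values.
q, k, v = torch.randn(1000, 64), torch.randn(1000, 64), torch.randn(1000, 32)
out = kernelized_all_pair_propagate(q, k, v)
print(out.shape)  # torch.Size([1000, 32])
```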
- Enhancing Intra-class Information Extraction for Heterophilous Graphs: One Neural Architecture Search Approach [41.84399177525008]
We propose IIE-GNN (Intra-class Information Enhanced Graph Neural Networks) to achieve two improvements.
Based on the literature, a unified framework is proposed in which intra-class information from the node itself and its neighbors can be extracted.
Experiments show that IIE-GNN improves model performance by designing node-wise GNNs to enhance intra-class information extraction.
arXiv Detail & Related papers (2022-11-20T14:37:09Z)
- Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge adaptation framework via reinforcement learning for dynamic graph neural networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves the state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z)
- Measuring and Sampling: A Metric-guided Subgraph Learning Framework for Graph Neural Network [11.017348743924426]
We propose MeGuide, a Metric-Guided subgraph learning framework for graph neural networks (GNNs).
MeGuide employs two novel metrics, Feature Smoothness and Connection Failure Distance, to guide subgraph sampling and mini-batch-based training.
We demonstrate the effectiveness and efficiency of MeGuide in training various GNNs on multiple datasets.
arXiv Detail & Related papers (2021-12-30T11:00:00Z)
- Inferential SIR-GN: Scalable Graph Representation Learning [0.4699313647907615]
Graph representation learning methods generate numerical vector representations for the nodes in a network.
In this work, we propose Inferential SIR-GN, a model which is pre-trained on random graphs, then computes node representations rapidly.
We demonstrate that the model captures nodes' structural role information and shows excellent performance on node and graph classification tasks on unseen networks.
arXiv Detail & Related papers (2021-11-08T20:56:37Z)
- Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions [123.07967420310796]
This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs.
PI-GNN relies on pairwise interactions (PI) as a primary learning proxy, in addition to pointwise learning from the noisy node class labels.
Our proposed framework PI-GNN contributes two novel components: (1) a confidence-aware PI estimation model that adaptively estimates the PI labels, and (2) a decoupled training approach that leverages the estimated PI labels.
arXiv Detail & Related papers (2021-06-14T14:23:08Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
The majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.