Towards Deeper Graph Neural Networks
- URL: http://arxiv.org/abs/2007.09296v1
- Date: Sat, 18 Jul 2020 01:11:14 GMT
- Title: Towards Deeper Graph Neural Networks
- Authors: Meng Liu, Hongyang Gao, Shuiwang Ji
- Abstract summary: Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Performance deteriorates, however, when layers are stacked to enlarge receptive fields; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
- Score: 63.46470695525957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks have shown significant success in the field of graph
representation learning. Graph convolutions perform neighborhood aggregation
and represent one of the most important graph operations. Nevertheless, one
layer of these neighborhood aggregation methods only considers immediate
neighbors, and performance decreases when going deeper to enable larger
receptive fields. Several recent studies attribute this performance
deterioration to the over-smoothing issue, which states that repeated
propagation makes node representations of different classes indistinguishable.
In this work, we study this observation systematically and develop new insights
towards deeper graph neural networks. First, we provide a systematic analysis
of this issue and argue that the key factor significantly compromising performance
is the entanglement of representation transformation and
propagation in current graph convolution operations. After decoupling these two
operations, deeper graph neural networks can be used to learn graph node
representations from larger receptive fields. We further provide a theoretical
analysis of the above observation when building very deep models, which can
serve as a rigorous and gentle description of the over-smoothing issue. Based
on our theoretical and empirical analysis, we propose Deep Adaptive Graph
Neural Network (DAGNN) to adaptively incorporate information from large
receptive fields. Experiments on citation, co-authorship, and
co-purchase datasets confirm our analysis and insights and demonstrate
the superiority of the proposed method.
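
The decoupling the abstract describes is concrete enough to sketch. Below is a minimal numpy illustration, assuming a one-layer transformation in place of the paper's MLP and a single learned scoring vector `s` for the adaptive gating; the names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize adjacency with self-loops: D^{-1/2} (A+I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def dagnn_forward(A, X, W, s, K=10):
    """Decoupled transformation and propagation, in the spirit of DAGNN.

    W (F, C): weight of a one-layer transformation (stand-in for an MLP).
    s (C,):   scoring vector that adaptively gates each hop's representation.
    """
    P = normalize_adj(A)
    Z = X @ W                        # transformation: applied once, not per layer
    hops = [Z]
    for _ in range(K):               # propagation: parameter-free, so depth is cheap
        hops.append(P @ hops[-1])
    H = np.stack(hops, axis=1)       # (N, K+1, C): one slice per receptive field
    gate = 1.0 / (1.0 + np.exp(-(H @ s)))     # sigmoid score per node per hop
    return (gate[..., None] * H).sum(axis=1)  # adaptive mix over receptive fields

# Toy usage: a 4-node path graph, 3 input features, 2 classes.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
out = dagnn_forward(A, rng.standard_normal((4, 3)),
                    rng.standard_normal((3, 2)), rng.standard_normal(2), K=4)
print(out.shape)  # (4, 2)
```

Because the propagation step carries no parameters, adding hops enlarges the receptive field without deepening the trainable part of the model, which is the crux of the decoupling argument.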
Related papers
- TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning [7.879217146851148]
We propose an innovative Graph Neural Network (GNN) architecture that integrates a Top-m attention aggregation component with a neighborhood aggregation component.
To assess the effectiveness of the proposed model, we apply it to citation sentiment prediction, a task previously unexplored in the GNN field.
arXiv Detail & Related papers (2024-11-23T05:31:25Z)
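A hedged sketch of one plausible reading of the Top-m aggregation component summarized above: each node scores all candidate partners, keeps only its m highest-scoring ones, and softmax-mixes those. The dot-product scoring is an assumption; the paper's attention function may differ.

```python
import numpy as np

def top_m_attention(X, m=2):
    """Aggregate, for each node, only its m highest-scoring partners.

    X: (N, F) node features. Scoring is plain dot-product here, purely
    to illustrate the top-m selection idea.
    """
    S = X @ X.T                          # (N, N) pairwise attention scores
    np.fill_diagonal(S, -np.inf)         # exclude self from the candidate set
    out = np.empty_like(X)
    for i in range(X.shape[0]):
        top = np.argsort(S[i])[-m:]      # indices of the m largest scores
        w = np.exp(S[i, top] - S[i, top].max())
        w /= w.sum()                     # softmax over the retained m partners
        out[i] = w @ X[top]
    return out

X = np.random.default_rng(1).standard_normal((5, 4))
print(top_m_attention(X, m=2).shape)  # (5, 4)
```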
- Unitary convolutions for learning on graphs and groups [0.9899763598214121]
We study unitary group convolutions, which allow for deeper networks that are more stable during training.
The main focus of the paper is graph neural networks, where we show that unitary graph convolutions provably avoid over-smoothing.
Our experimental results confirm that unitary graph convolutional networks achieve competitive performance on benchmark datasets.
arXiv Detail & Related papers (2024-10-07T21:09:14Z)
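The over-smoothing claim above rests on the propagation map being norm-preserving. One standard way to parameterize a unitary (in the real case, orthogonal) weight is to exponentiate a skew-symmetric matrix; the sketch below shows that construction and its norm preservation, as a generic recipe rather than the paper's exact scheme.

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_weight(P):
    """exp(P - P^T) is orthogonal, since P - P^T is skew-symmetric."""
    return expm(P - P.T)

rng = np.random.default_rng(2)
W = orthogonal_weight(rng.standard_normal((8, 8)))
x = rng.standard_normal(8)

# Orthogonal maps preserve norms, so stacking many of them cannot shrink
# all representations toward a common point the way over-smoothing does.
print(np.allclose(W.T @ W, np.eye(8)))           # True: W is orthogonal
print(np.linalg.norm(x), np.linalg.norm(W @ x))  # equal up to float error
```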
- On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z)
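The retraining protocol described above is simple to sketch: rank edges by attribution, remove the most important ones (or keep only them), retrain from scratch, and compare against a random-masking baseline. `train_and_eval` is a hypothetical callback standing in for whatever GNN and dataset are under study.

```python
import numpy as np

def mask_edges(edges, scores, frac=0.2, keep_important=False):
    """Drop (or keep only) the top `frac` most-attributed edges.

    edges: list of (u, v) pairs; scores: per-edge attribution values.
    """
    k = int(len(edges) * frac)
    top = set(np.argsort(scores)[-k:])   # indices of most important edges
    if keep_important:
        return [e for i, e in enumerate(edges) if i in top]
    return [e for i, e in enumerate(edges) if i not in top]

def retraining_evaluation(edges, scores, train_and_eval, frac=0.2, seed=0):
    """Retrain on masked graphs; a faithful attribution should hurt accuracy
    more when its 'important' edges are removed than when random ones are."""
    acc_without_important = train_and_eval(mask_edges(edges, scores, frac))
    random_scores = np.random.default_rng(seed).permutation(len(edges))
    acc_without_random = train_and_eval(mask_edges(edges, random_scores, frac))
    return acc_without_important, acc_without_random
```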
- A Comprehensive Survey on Deep Graph Representation Learning [26.24869157855632]
Graph representation learning aims to encode high-dimensional sparse graph-structured data into low-dimensional dense vectors.
Traditional methods have limited model capacity, which restricts learning performance.
Deep graph representation learning has shown great potential and advantages over shallow (traditional) methods.
arXiv Detail & Related papers (2023-04-11T08:23:52Z)
- An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments over 13 datasets and observe that GRAPHRETRIEVAL achieves substantial improvements over existing GNNs.
arXiv Detail & Related papers (2022-06-01T09:59:09Z)
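Since the scheme above is described as agnostic to the underlying GNN, a sketch only needs graph embeddings: retrieve the k training graphs nearest to the query in embedding space and blend their labels with the model's own logits. The cosine retrieval and linear blend are assumptions, not GRAPHRETRIEVAL's exact rule.

```python
import numpy as np

def retrieval_enhanced_predict(query_emb, train_embs, train_labels,
                               model_logits, k=3, alpha=0.5):
    """Blend a GNN's own prediction with a soft vote over the k training
    graphs closest to the query in embedding space."""
    sims = (train_embs @ query_emb) / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-12)
    nearest = np.argsort(sims)[-k:]                  # top-k cosine neighbors
    retrieved = train_labels[nearest].mean(axis=0)   # neighbors' label average
    return alpha * model_logits + (1 - alpha) * retrieved

rng = np.random.default_rng(3)
logits = retrieval_enhanced_predict(rng.standard_normal(16),
                                    rng.standard_normal((100, 16)),
                                    rng.integers(0, 2, size=(100, 2)).astype(float),
                                    rng.standard_normal(2))
print(logits)
```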
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression by adapting the loss function, and are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
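Unrolling, as used above, means each network layer executes one iteration of an optimization algorithm, with quantities such as the step size and sparsity threshold promoted to learnable parameters. Here is a toy sketch for deconvolving an observation A ≈ h0·I + h1·S into a sparse latent graph S; the linear mixture model and the l1 prior are illustrative stand-ins for the paper's setup.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm; encourages sparse edge weights."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def gdn_unrolled(A, steps, taus, h0=0.5, h1=1.0):
    """Truncated proximal-gradient iterations for A ~ h0*I + h1*S.

    Each (step, tau) pair plays the role of one layer's learnable
    parameters; trained in the paper, fixed by hand here.
    """
    S = np.zeros_like(A)
    for step, tau in zip(steps, taus):
        R = h0 * np.eye(A.shape[0]) + h1 * S - A   # model residual; grad is h1*R
        S = soft_threshold(S - step * h1 * R, tau) # gradient step + l1 prox
        S = 0.5 * (S + S.T)                        # keep the estimate symmetric
        np.fill_diagonal(S, 0.0)                   # forbid self-loops
    return S

A = np.array([[0.5, 0.9, 0.1], [0.9, 0.5, 0.8], [0.1, 0.8, 0.5]])
print(gdn_unrolled(A, steps=[0.5] * 5, taus=[0.05] * 5).round(2))
```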
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
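The two construction methods named above are easy to state concretely; a small sketch assuming point-cloud-style inputs (e.g. atom coordinates):

```python
import numpy as np

def knn_graph(X, k=2):
    """Adjacency in which each point connects to its k nearest neighbors."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    np.fill_diagonal(D, np.inf)                           # forbid self-edges
    A = np.zeros_like(D)
    nearest = np.argsort(D, axis=1)[:, :k]                # k smallest per row
    A[np.repeat(np.arange(len(X)), k), nearest.ravel()] = 1.0
    return np.maximum(A, A.T)   # symmetrize: raw KNN is a directed relation

def fc_graph(n):
    """Fully-connected adjacency: every pair of nodes interacts."""
    return np.ones((n, n)) - np.eye(n)

X = np.random.default_rng(4).standard_normal((6, 3))
print(knn_graph(X, k=2).sum(), fc_graph(6).sum())  # sparse vs. dense edge counts
```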
- How Neural Processes Improve Graph Link Prediction [35.652234989200956]
We propose a meta-learning approach with graph neural networks for link prediction: Neural Processes for Graph Neural Networks (NPGNN).
NPGNN can perform both transductive and inductive learning tasks and adapt to patterns in a large new graph after training with a small subgraph.
arXiv Detail & Related papers (2021-09-30T07:35:13Z)
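A neural process conditions on a context set through a permutation-invariant summary. A minimal numpy sketch of that idea applied to link prediction, echoing the entry above: encode the observed (context) edges into one latent vector r, then score a query pair conditioned on r. The architecture and names here are illustrative, not NPGNN's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def np_link_prob(X, context_edges, u, v, W_enc, w_dec):
    """Score link (u, v) conditioned on a summary of observed edges.

    The mean over encoded context pairs makes the summary r invariant to
    the order of the context set, the hallmark of neural processes.
    """
    pairs = np.stack([np.concatenate([X[a], X[b]]) for a, b in context_edges])
    r = np.tanh(pairs @ W_enc).mean(axis=0)   # permutation-invariant latent
    query = np.concatenate([X[u], X[v], r])
    return sigmoid(query @ w_dec)             # link probability for (u, v)

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 4))               # 6 nodes, 4 features each
W_enc = rng.standard_normal((8, 5))           # encodes concatenated node pairs
w_dec = rng.standard_normal(13)               # reads [x_u, x_v, r]
print(np_link_prob(X, [(0, 1), (2, 3)], 4, 5, W_enc, w_dec))
```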
- Improving Graph Neural Networks with Simple Architecture Design [7.057970273958933]
We introduce several key design strategies for graph neural networks.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN).
We show that the proposed model outperforms other state-of-the-art GNN models and achieves up to a 64% improvement in accuracy on node classification tasks.
arXiv Detail & Related papers (2021-05-17T06:46:01Z)
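The "simple and shallow" design summarized above reads as: precompute hop-wise features once, then learn a soft selection over them with one linear map per hop. A hedged sketch of that strategy; the gating and normalization details are assumptions, not the exact FSGNN code.

```python
import numpy as np

def hop_features(A, X, n_hops=3):
    """Precompute [X, PX, P^2 X, ...] with a row-normalized propagation P."""
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    feats, H = [X], X
    for _ in range(n_hops):
        H = P @ H
        feats.append(H)
    return feats

def select_and_combine(feats, gate_logits, W_list):
    """Softmax 'feature selection' over hops, one linear map per hop."""
    g = np.exp(gate_logits - gate_logits.max())
    g /= g.sum()                                  # soft weight for each hop
    return sum(gi * (F @ Wi) for gi, F, Wi in zip(g, feats, W_list))

rng = np.random.default_rng(6)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
feats = hop_features(A, rng.standard_normal((3, 4)), n_hops=2)
W_list = [rng.standard_normal((4, 2)) for _ in feats]
print(select_and_combine(feats, rng.standard_normal(len(feats)), W_list).shape)
```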
- Deep Learning for Learning Graph Representations [58.649784596090385]
Mining graph data has become a popular research topic in computer science.
The huge amount of network data poses great challenges for efficient analysis.
This motivates graph representation learning, which maps a graph into a low-dimensional vector space.
arXiv Detail & Related papers (2020-01-02T02:13:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.