Node Feature Kernels Increase Graph Convolutional Network Robustness
- URL: http://arxiv.org/abs/2109.01785v1
- Date: Sat, 4 Sep 2021 04:20:45 GMT
- Title: Node Feature Kernels Increase Graph Convolutional Network Robustness
- Authors: Mohamed El Amine Seddik, Changmin Wu, Johannes F. Lutzeyer and
Michalis Vazirgiannis
- Abstract summary: The robustness of Graph Convolutional Networks (GCNs) to perturbations of their input is becoming a topic of increasing importance.
In this paper, the random GCN is introduced, for which a random matrix theory analysis is possible.
This analysis suggests that a sufficiently perturbed graph causes the GCN to fail to benefit from the node features.
It is observed that enhancing the message passing step in GCNs by adding the node feature kernel to the adjacency matrix of the graph structure solves this problem.
- Score: 19.076912727990326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robustness of the much-used Graph Convolutional Networks (GCNs) to
perturbations of their input is becoming a topic of increasing importance. In
this paper, the random GCN is introduced for which a random matrix theory
analysis is possible. This analysis suggests that if the graph is sufficiently
perturbed, or in the extreme case random, then the GCN fails to benefit from
the node features. It is furthermore observed that enhancing the message
passing step in GCNs by adding the node feature kernel to the adjacency matrix
of the graph structure solves this problem. An empirical study of a GCN
utilised for node classification on six real datasets further confirms the
theoretical findings and demonstrates that perturbations of the graph structure
can result in GCNs performing significantly worse than Multi-Layer Perceptrons
run on the node features alone. In practice, adding a node feature kernel to
the message passing of perturbed graphs results in a significant improvement of
the GCN's performance, thereby rendering it more robust to graph perturbations.
Our code is publicly available at: https://github.com/ChangminWu/RobustGCN.
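The kernel-augmented message passing described in the abstract can be sketched in a few lines. Below is a minimal PyTorch illustration, assuming a linear node feature kernel K = XX^T and a mixing weight alpha; the layer name, the normalisation choices and alpha are assumptions for illustration, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class KernelAugmentedGCNLayer(nn.Module):
    """A single GCN layer whose propagation matrix mixes the normalised
    adjacency with a node feature kernel, following the idea in the
    abstract. `alpha` and the linear-kernel choice are assumptions."""

    def __init__(self, in_dim, out_dim, alpha=0.5):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.alpha = alpha  # hypothetical graph-vs-kernel trade-off

    def forward(self, adj, x):
        n = adj.size(0)
        # Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2.
        a = adj + torch.eye(n, device=adj.device)
        d_inv_sqrt = a.sum(dim=1).clamp(min=1e-8).pow(-0.5)
        a_hat = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
        # Linear node feature kernel K = X X^T, scaled to a comparable range.
        k = (x @ x.t()) / n
        # Augmented message passing: (A_hat + alpha * K) X W.
        return torch.relu(self.linear((a_hat + self.alpha * k) @ x))


# Toy usage: two connected nodes with 8-dimensional features.
layer = KernelAugmentedGCNLayer(8, 4)
out = layer(torch.tensor([[0.0, 1.0], [1.0, 0.0]]), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 4])
```

Even if the perturbed (or random) adjacency carries no signal, the kernel term keeps the node features in the propagation step, which is what restores the advantage over a plain MLP in the experiments above.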
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
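As a rough illustration of what a differentiable all-pair propagation step looks like, here is a toy dense version with Gumbel-Softmax edge weights; the actual NodeFormer operator is kernelised precisely to avoid the O(n^2) cost shown here, and the function name and temperature are assumptions.

```python
import torch


def gumbel_softmax_propagate(x, tau=0.5):
    """Toy dense all-pair message passing with Gumbel-Softmax edge weights.
    NodeFormer's kernelised operator avoids forming the n x n matrix; this
    O(n^2) version only illustrates the differentiable-structure idea."""
    scores = x @ x.t()                                   # pairwise affinities
    u = torch.rand_like(scores).clamp(1e-9, 1.0 - 1e-9)  # uniform noise
    gumbel = -torch.log(-torch.log(u))                   # Gumbel(0, 1) samples
    weights = torch.softmax((scores + gumbel) / tau, dim=1)
    return weights @ x                                   # aggregate over all nodes
```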
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- What Do Graph Convolutional Neural Networks Learn? [0.0]
Graph Convolutional Neural Networks (GCNs) are a common variant of graph neural networks (GNNs).
Recent literature has highlighted that GCNs can achieve strong performance on heterophilous graphs under certain "special conditions".
Our investigation of the underlying graph structures of a dataset finds that a GCN's semi-supervised node classification (SSNC) performance is significantly influenced by the consistency and uniqueness of the neighborhood structure of nodes within a class.
arXiv Detail & Related papers (2022-07-05T06:44:37Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The "scale" operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution to improve the accuracy of tail nodes.
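A hedged sketch of what such a scale step can look like: each node's representation is re-weighted so that the NStd distribution is compressed, lifting low-NStd (typically tail-degree) nodes. The exponent p and the exact form are assumptions, not ResNorm's published formula.

```python
import torch


def scale_op(h, p=0.5, eps=1e-6):
    """Illustrative 'scale' step: multiplying node i by NStd_i^(p-1) maps its
    new NStd to roughly NStd_i^p, so p < 1 compresses the NStd distribution
    and boosts low-NStd (typically tail-degree) nodes. Assumed form only."""
    nstd = h.std(dim=1, keepdim=True) + eps  # node-wise std across features
    return h * nstd.pow(p - 1.0)
```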
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- SStaGCN: Simplified stacking based graph convolutional networks [2.556756699768804]
The graph convolutional network (GCN) is a powerful model that has been studied broadly in various graph-structured data learning tasks.
We propose a novel GCN called SStaGCN (Simplified stacking based GCN) by utilizing the ideas of stacking and aggregation.
We show that SStaGCN can efficiently mitigate the over-smoothing problem of GCN.
arXiv Detail & Related papers (2021-11-16T05:00:08Z)
- Graph Convolutional Networks for Graphs Containing Missing Features [5.426650977249329]
We propose an approach that adapts Graph Convolutional Network (GCN) to graphs containing missing features.
In contrast to the traditional imputation-based strategy, our approach integrates the processing of missing features and graph learning within the same neural network architecture.
We demonstrate through extensive experiments that our approach significantly outperforms the imputation-based methods in node classification and link prediction tasks.
arXiv Detail & Related papers (2020-07-09T06:47:21Z)
- DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN, which is capable of successfully and reliably training very deep GCNs.
arXiv Detail & Related papers (2020-06-13T23:00:22Z)
- Understanding and Resolving Performance Degradation in Graph Convolutional Networks [105.14867349802898]
A Graph Convolutional Network (GCN) stacks several layers, each performing a PROPagation operation (PROP) and a TRANsformation operation (TRAN), to learn node representations over graph-structured data.
GCNs tend to suffer a performance drop as the model gets deeper.
We study performance degradation of GCNs by experimentally examining how stacking only TRANs or PROPs works.
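The PROP/TRAN decomposition is easy to make concrete. A minimal sketch follows; the naming and the choice of nonlinearity are assumptions.

```python
import torch
import torch.nn as nn


def prop(a_hat, h):
    """PROPagation: mix each node's representation with its neighbours' (A_hat H)."""
    return a_hat @ h


class Tran(nn.Module):
    """TRANsformation: a per-node feature map (linear layer + nonlinearity)."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h):
        return torch.relu(self.lin(h))


# A standard GCN layer composes the two: h_next = TRAN(PROP(A_hat, h)).
# The paper's experiments stack only PROPs or only TRANs to isolate which
# operation drives the degradation in deep models.
```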
arXiv Detail & Related papers (2020-06-12T12:12:12Z)
- Graph Highway Networks [77.38665506495553]
Graph Convolution Networks (GCNs) are widely used in learning graph representations due to their effectiveness and efficiency.
They suffer from the notorious over-smoothing problem, in which the learned representations converge to similar vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
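A minimal sketch of highway-style gating in this setting, assuming a per-node sigmoid gate; the parameterisation below is an assumption, not GHNet's exact design.

```python
import torch
import torch.nn as nn


class HighwayGCNLayer(nn.Module):
    """Sketch of a gated GCN layer: a learned gate mixes the propagated
    (homogenising) signal with the node's own representation, limiting
    over-smoothing. The gate parameterisation here is an assumption."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, a_hat, h):
        propagated = torch.relu(self.lin(a_hat @ h))  # standard GCN update
        g = torch.sigmoid(self.gate(h))               # per-feature gate in (0, 1)
        return g * propagated + (1.0 - g) * h         # homogeneity vs. identity
```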
arXiv Detail & Related papers (2020-04-09T16:26:43Z) - Gated Graph Recurrent Neural Networks [176.3960927323358]
We introduce Graph Recurrent Neural Networks (GRNNs) as a general learning framework for graph processes.
To address the problem of vanishing gradients, we put forward GRNNs with three different gating mechanisms: time, node and edge gates.
The numerical results also show that GRNNs outperform GNNs and RNNs, highlighting the importance of taking both the temporal and graph structures of a graph process into account.
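To make the gating idea concrete, here is a toy recurrent step with a single gate playing the role of a "time" gate; the node and edge gates follow the same pattern, and all parameterisations below are assumptions.

```python
import torch
import torch.nn as nn


class GatedGRNNCell(nn.Module):
    """Toy GRNN step with one 'time'-style gate that interpolates between
    the previous state and a graph-filtered candidate, mitigating vanishing
    gradients over long sequences. Parameterisation is an assumption."""

    def __init__(self, dim):
        super().__init__()
        self.w_in = nn.Linear(dim, dim)
        self.w_state = nn.Linear(dim, dim)
        self.w_gate = nn.Linear(2 * dim, dim)

    def forward(self, a_hat, x_t, h_prev):
        # Candidate state from the graph-filtered input and previous state.
        cand = torch.tanh(self.w_in(a_hat @ x_t) + self.w_state(a_hat @ h_prev))
        g = torch.sigmoid(self.w_gate(torch.cat([x_t, h_prev], dim=1)))
        return g * cand + (1.0 - g) * h_prev
```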
arXiv Detail & Related papers (2020-02-03T22:35:14Z)