Node Masking: Making Graph Neural Networks Generalize and Scale Better
- URL: http://arxiv.org/abs/2001.07524v4
- Date: Sun, 16 May 2021 19:40:24 GMT
- Title: Node Masking: Making Graph Neural Networks Generalize and Scale Better
- Authors: Pushkar Mishra, Aleksandra Piktus, Gerard Goossen, Fabrizio Silvestri
- Abstract summary: Graph Neural Networks (GNNs) have received a great deal of interest in recent times.
In this paper, we utilize some theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
- Score: 71.51292866945471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have received a great deal of interest in recent times. From the early spectral architectures, which could operate only on undirected graphs under a transductive learning paradigm, to the current state-of-the-art spatial ones, which apply inductively to arbitrary graphs, GNNs have seen significant contributions from the research community. In this paper, we utilize some theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs. We analyze the inner workings of these architectures and introduce a simple concept, Node Masking, that allows them to generalize and scale better. To empirically validate the concept, we perform several experiments on some widely-used datasets for node classification in both the transductive and inductive settings, hence laying down strong benchmarks for future research.
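To make the concept concrete, here is a minimal sketch of node masking read as node-level dropout during training. The class name `MaskedGraphLayer`, the zero-masking choice, and the `mask_prob` parameter are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MaskedGraphLayer(nn.Module):
    """One mean-aggregation GNN layer with stochastic node masking.

    Illustrative sketch only: with probability `mask_prob`, an entire
    node's features are zeroed out during training (node-level dropout),
    so the model cannot over-rely on any single node's signal.
    """
    def __init__(self, in_dim, out_dim, mask_prob=0.2):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.mask_prob = mask_prob

    def forward(self, x, adj_norm):
        # x: [N, in_dim] node features; adj_norm: [N, N] row-normalized adjacency
        if self.training and self.mask_prob > 0:
            keep = (torch.rand(x.size(0), 1, device=x.device) > self.mask_prob)
            x = x * keep.float()  # mask whole nodes, not individual features
        h = adj_norm @ x          # mean aggregation over neighbors
        return torch.relu(self.linear(h))
```

Under this reading, a node's representation must be recoverable from its neighborhood alone, which is one plausible route to the generalization and scaling benefits the abstract claims.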
Related papers
- PROXI: Challenging the GNNs for Link Prediction [3.8233569758620063]
We introduce PROXI, which leverages proximity information of node pairs in both graph and attribute spaces.
With these features, standard machine learning (ML) models perform competitively, even outperforming cutting-edge GNN models.
We show that augmenting traditional GNNs with PROXI significantly boosts their link prediction performance.
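As a hedged illustration of the proximity-feature idea, the sketch below builds a few standard node-pair features (common-neighbor count, Jaccard coefficient, attribute distance; PROXI's actual feature set may differ) and fits a plain scikit-learn classifier:

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

def pair_features(G, X, u, v):
    """Proximity features for a candidate edge (u, v).

    Assumed features (not necessarily PROXI's exact set): common-neighbor
    count and Jaccard coefficient in graph space, plus Euclidean distance
    in attribute space. X is an [N, d] array of node attributes.
    """
    nu, nv = set(G[u]), set(G[v])
    common = len(nu & nv)
    jaccard = common / max(len(nu | nv), 1)
    attr_dist = float(np.linalg.norm(X[u] - X[v]))
    return [common, jaccard, attr_dist]

def fit_link_predictor(G, X, pairs, labels):
    # pairs: list of (u, v) tuples; labels: 1 if the edge exists, else 0
    feats = np.array([pair_features(G, X, u, v) for u, v in pairs])
    return LogisticRegression(max_iter=1000).fit(feats, labels)
```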
arXiv Detail & Related papers (2024-10-02T17:57:38Z)
- Improved Image Classification with Manifold Neural Networks [13.02854405679453]
Graph Neural Networks (GNNs) have gained popularity in various learning tasks.
In this paper, we explore GNNs' potential in general data representations, especially in the image domain.
We train a GNN to predict node labels corresponding to the image labels in the classification task, and leverage the convergence of GNNs to analyze their generalization.
arXiv Detail & Related papers (2024-09-19T19:55:33Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with graph size on a logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
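To show the decomposition idea in spirit, here is a toy attribution for a purely linear GNN, where the output of each node decomposes exactly into per-source contributions. This is only an illustration under a linearity assumption; DEGREE itself handles nonlinear GNNs, and the function name is hypothetical.

```python
import torch

def per_source_contributions(adj_norm, X, weights):
    """Additive attribution for a purely *linear* GNN (toy illustration).

    Because every operation below is linear, the representation of node v
    decomposes exactly into per-source-node contributions.
    adj_norm: [N, N]; X: [N, d]; weights: list of [d_in, d_out] tensors.
    """
    N, d = X.shape
    C = torch.zeros(N, N, d)                 # C[u, v] = u's contribution to v
    C[torch.arange(N), torch.arange(N)] = X  # each node starts as its own source
    for W in weights:
        # propagate every source's contribution through one linear layer
        C = torch.einsum('vn,und->uvd', adj_norm, C) @ W
    return C                                 # summing over u recovers the full output
```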
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address these challenges.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
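Two of the tricks commonly benchmarked for deep GNNs are residual (skip) connections and normalization; below is a minimal sketch combining them with dropout. The composition is an illustrative choice, not a specific recipe from the benchmark.

```python
import torch
import torch.nn as nn

class DeepGNNBlock(nn.Module):
    """One layer of a deep GNN with two widely used 'tricks':
    a residual connection and layer normalization."""
    def __init__(self, dim, dropout=0.5):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, adj_norm):
        # adj_norm: [N, N] symmetrically normalized adjacency
        h = torch.relu(self.linear(adj_norm @ x))
        h = self.dropout(h)
        return self.norm(x + h)  # residual connection stabilizes depth
```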
arXiv Detail & Related papers (2021-08-24T05:00:37Z)
- Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization [41.867290324754094]
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs.
In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs.
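The ego-graph information-maximization objective itself is beyond an abstract-level sketch, but the surrounding transfer workflow is roughly: pretrain a GNN encoder on a large source graph with a self-supervised loss, then reuse its weights on the target graph. A generic sketch follows; the callables and names are placeholders, not the paper's method.

```python
import copy
import torch

def transfer_gnn(encoder, pretrain_step, finetune_step,
                 source_data, target_data, pre_epochs=100, ft_epochs=50):
    """Generic pretrain-then-finetune loop for GNN transfer.

    `pretrain_step` / `finetune_step` are placeholder callables that
    return a loss tensor for one graph; the actual paper pretrains with
    an ego-graph information-maximization objective.
    """
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(pre_epochs):            # self-supervised pretraining on source
        opt.zero_grad()
        pretrain_step(encoder, source_data).backward()
        opt.step()
    ft_encoder = copy.deepcopy(encoder)    # transfer the pretrained weights
    opt = torch.optim.Adam(ft_encoder.parameters(), lr=1e-4)
    for _ in range(ft_epochs):             # supervised fine-tuning on target
        opt.zero_grad()
        finetune_step(ft_encoder, target_data).backward()
        opt.step()
    return ft_encoder
```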
arXiv Detail & Related papers (2020-09-11T02:31:18Z)
- Hierarchical Message-Passing Graph Neural Networks [12.207978823927386]
We propose a novel Hierarchical Message-passing Graph Neural Networks framework.
The key idea is to generate a hierarchical structure that re-organises all nodes of a flat graph into multi-level super graphs.
We present the first model to implement this framework, termed Hierarchical Community-aware Graph Neural Network (HC-GNN).
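To illustrate the multi-level idea, here is a toy one-level coarsening step that groups nodes into communities and treats each community as a super node. The greedy-modularity choice and mean-pooled features are assumptions, not necessarily HC-GNN's construction.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def build_super_graph(G, X):
    """Toy coarsening in the spirit of hierarchical message passing.

    Assumption: communities play the role of super nodes, with mean-pooled
    features. Nodes of G are assumed to be 0..N-1, indexing rows of X.
    """
    comms = list(community.greedy_modularity_communities(G))
    member = {u: i for i, c in enumerate(comms) for u in c}
    S = nx.Graph()
    S.add_nodes_from(range(len(comms)))
    for u, v in G.edges():
        if member[u] != member[v]:           # any cross-community edge
            S.add_edge(member[u], member[v]) # links the two super nodes
    Xs = np.stack([X[list(c)].mean(axis=0) for c in comms])
    return S, Xs, member
```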
arXiv Detail & Related papers (2020-09-08T13:11:07Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations; however, performance tends to deteriorate as more of them are stacked.
Several recent studies attribute this performance deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
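One reading of the adaptive-receptive-field idea, as a hedged sketch: propagate features over k hops without further transformation, then let each node learn a gating score per depth. The module name and the sigmoid gating are assumptions based on the abstract.

```python
import torch
import torch.nn as nn

class AdaptiveDepthReadout(nn.Module):
    """Sketch of depth-adaptive aggregation as we read the DAGNN abstract.

    Features are propagated k hops with no further transformation; each
    node then gates each depth, so it can draw on a large receptive
    field without over-smoothing.
    """
    def __init__(self, dim, k_hops=10):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.k_hops = k_hops

    def forward(self, h, adj_norm):
        # h: [N, dim] transformed features; adj_norm: [N, N] normalized adjacency
        hops = [h]
        for _ in range(self.k_hops):
            hops.append(adj_norm @ hops[-1])  # parameter-free propagation
        H = torch.stack(hops, dim=1)          # [N, k+1, dim]
        s = torch.sigmoid(self.score(H))      # [N, k+1, 1] per-depth scores
        return (s * H).sum(dim=1)             # depth-adaptive combination
```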
This list is automatically generated from the titles and abstracts of the papers on this site.