Simplifying GNN Performance with Low Rank Kernel Models
- URL: http://arxiv.org/abs/2310.05250v1
- Date: Sun, 8 Oct 2023 17:56:30 GMT
- Title: Simplifying GNN Performance with Low Rank Kernel Models
- Authors: Luciano Vinas and Arash A. Amini
- Abstract summary: We revisit recent spectral GNN approaches to semi-supervised node classification (SSNC).
We show that recent performance improvements in GNN approaches may be partially attributed to shifts in evaluation conventions.
- Score: 14.304623719903972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We revisit recent spectral GNN approaches to semi-supervised node
classification (SSNC). We posit that many of the current GNN architectures may
be over-engineered. Instead, simpler, traditional methods from nonparametric
estimation, applied in the spectral domain, could replace many deep-learning
inspired GNN designs. These conventional techniques appear to be well suited
for a variety of graph types reaching state-of-the-art performance on many of
the common SSNC benchmarks. Additionally, we show that recent performance
improvements in GNN approaches may be partially attributed to shifts in
evaluation conventions. Lastly, an ablative study is conducted on the various
hyperparameters associated with GNN spectral filtering techniques. Code
available at: https://github.com/lucianoAvinas/lowrank-gnn-kernels
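
The abstract does not spell out a specific estimator, so the snippet below is only a minimal sketch of the general recipe it alludes to: filter node features through a low-rank (truncated) eigenbasis of the normalized adjacency and fit a classical kernel regressor on the labeled nodes. The function names and hyperparameter values (`rank`, `alpha`, `gamma`) are illustrative assumptions, not the released implementation.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
from sklearn.kernel_ridge import KernelRidge

def low_rank_spectral_features(adj, feats, rank=64):
    """Smooth node features with a rank-`rank` spectral filter of the
    symmetrically normalized adjacency D^{-1/2} A D^{-1/2} (illustrative)."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    vals, vecs = eigsh(norm_adj, k=rank, which="LA")  # leading eigenpairs
    # Low-rank filtering of the raw features: U diag(lambda) U^T X
    return vecs @ (vals[:, None] * (vecs.T @ feats))

def ssnc_predict(feats_filtered, labels, train_idx, alpha=1e-3, gamma=1e-2):
    """Fit an RBF kernel ridge regressor on labeled nodes, classify all nodes."""
    y_onehot = np.eye(int(labels.max()) + 1)[labels[train_idx]]
    model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
    model.fit(feats_filtered[train_idx], y_onehot)
    return model.predict(feats_filtered).argmax(axis=1)
```

In a transductive SSNC setting this would be run once on the full graph, with `train_idx` holding the indices of labeled nodes; the number of retained eigenpairs `rank` is one example of the kind of spectral-filtering hyperparameter the paper's ablation examines.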
Related papers
- Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification [7.14327815822376]
Graph Transformers (GTs) have emerged as popular alternatives to traditional Graph Neural Networks (GNNs).
In this paper, we reevaluate the performance of three classic GNN models (GCN, GAT, and GraphSAGE) against GTs.
arXiv Detail & Related papers (2024-06-13T10:53:33Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, with improved accuracy, robustness, and resilience to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic capability by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training (see the sketch after this list).
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs [20.197085398581397]
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks.
We propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models.
SEG consistently improves the performance of well-known GNN models such as GCN, GAT and SGC across different datasets.
arXiv Detail & Related papers (2020-02-18T12:27:16Z)
- Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received a lot of interest in recent times.
In this paper, we utilize theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
arXiv Detail & Related papers (2020-01-17T06:26:40Z)
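
As promised above, here is a toy illustration of the decoupling idea named in the PMLP entry: train as a plain MLP, and insert message passing only when a normalized adjacency is supplied (e.g., at inference). This is an assumption-laden NumPy sketch, not code from any of the listed papers; the class name, layer sizes, and the omission of a training loop are illustrative choices.

```python
import numpy as np

def row_normalize(adj):
    """Row-normalize a dense adjacency with self-loops: D^{-1}(A + I)."""
    adj = adj + np.eye(adj.shape[0])
    return adj / adj.sum(axis=1, keepdims=True)

class TwoLayerNet:
    """Tiny NumPy network; parameter updates are omitted for brevity."""
    def __init__(self, d_in, d_hidden, d_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
        self.W2 = rng.normal(scale=0.1, size=(d_hidden, d_out))

    def forward(self, feats, adj_norm=None):
        # With adj_norm=None this is an ordinary MLP (as used in training);
        # passing a normalized adjacency inserts a propagation step per layer.
        h = feats @ self.W1
        if adj_norm is not None:
            h = adj_norm @ h
        h = np.maximum(h, 0.0)  # ReLU
        out = h @ self.W2
        if adj_norm is not None:
            out = adj_norm @ out
        return out
```

Calling `forward(X)` during training and `forward(X, row_normalize(A))` at test time reproduces the train-as-MLP, infer-with-propagation pattern the entry describes.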
This list is automatically generated from the titles and abstracts of the papers on this site.