You Can Have Better Graph Neural Networks by Not Training Weights at
All: Finding Untrained GNNs Tickets
- URL: http://arxiv.org/abs/2211.15335v5
- Date: Sun, 4 Feb 2024 19:08:53 GMT
- Title: You Can Have Better Graph Neural Networks by Not Training Weights at
All: Finding Untrained GNNs Tickets
- Authors: Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao,
Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola
Pechenizkiy, Shiwei Liu
- Abstract summary: The presence of untrained subnetworks in graph neural networks (GNNs) still remains mysterious.
We show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem.
We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations.
- Score: 105.24703398193843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have impressively demonstrated that there exists a subnetwork in
randomly initialized convolutional neural networks (CNNs) that can match the
performance of the fully trained dense networks at initialization, without any
optimization of the weights of the network (i.e., untrained networks). However,
the presence of such untrained subnetworks in graph neural networks (GNNs)
still remains mysterious. In this paper, we carry out a first-of-its-kind
exploration of discovering matching untrained GNNs. With sparsity as the core
tool, we can find \textit{untrained sparse subnetworks} at initialization that
can match the performance of \textit{fully trained dense} GNNs. Besides
this already encouraging finding of comparable performance, we show that the
found untrained subnetworks can substantially mitigate the GNN over-smoothing
problem, hence becoming a powerful tool to enable deeper GNNs without bells and
whistles. We also observe that such sparse untrained subnetworks have appealing
performance in out-of-distribution detection and robustness to input
perturbations. We evaluate our method across widely used GNN architectures on
various popular datasets, including the Open Graph Benchmark (OGB).
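
The abstract does not spell out the subnetwork-search procedure, so the following is only a minimal sketch of one common way to find untrained subnetworks: keep the randomly initialized weights frozen, learn a score for every weight, and use only the top-scoring fraction in the forward pass (edge-popup-style masking). The `MaskedGCNLayer` name, the dense row-normalized adjacency `a_hat`, and the fixed sparsity level are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMask(torch.autograd.Function):
    """Straight-through top-k mask: binary in the forward pass,
    identity gradient for the scores in the backward pass."""

    @staticmethod
    def forward(ctx, scores, sparsity):
        k = int((1.0 - sparsity) * scores.numel())            # weights to keep
        threshold = torch.topk(scores.flatten(), k).values.min()
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                              # straight-through estimator


class MaskedGCNLayer(nn.Module):
    """GCN layer whose random weights stay frozen; only the scores that
    select a sparse subnetwork are optimized (hypothetical sketch)."""

    def __init__(self, in_dim, out_dim, sparsity=0.8):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim), requires_grad=False)
        nn.init.kaiming_normal_(self.weight)                  # random init, never trained
        self.scores = nn.Parameter(torch.rand(in_dim, out_dim))
        self.sparsity = sparsity

    def forward(self, a_hat, x):
        mask = TopKMask.apply(self.scores.abs(), self.sparsity)
        return a_hat @ x @ (self.weight * mask)               # propagate, then transform


# Toy usage: random graph with 5 nodes, 8 features, 3 classes.
n, d, c = 5, 8, 3
a = torch.rand(n, n) < 0.4
a_hat = (a | a.t() | torch.eye(n, dtype=torch.bool)).float()
a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)                # row-normalized adjacency
layer = MaskedGCNLayer(d, c)
logits = F.relu(layer(a_hat, torch.randn(n, d)))
print(logits.shape)  # torch.Size([5, 3])
```

In such a scheme, training updates only `scores` (e.g., with a cross-entropy loss on labeled nodes) while `weight` stays at its random initialization, which is what makes the selected sparse subnetwork "untrained".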
Related papers
- Training-free Graph Neural Networks and the Power of Labels as Features [17.912507269030577]
Training-free graph neural networks (TFGNNs) can be used without training and can also be improved with optional training.
We show that labels as features (LaF) provably enhances the expressive power of graph neural networks.
In the experiments, we confirm that TFGNNs outperform existing GNNs in the training-free setting and converge with far fewer training iterations than traditional GNNs (a minimal LaF sketch appears after this list).
arXiv Detail & Related papers (2024-04-30T06:36:43Z)
- Classifying Nodes in Graphs without GNNs [50.311528896010785]
We propose a fully GNN-free approach to node classification that does not require GNNs at either training or test time.
Our method consists of three key components: smoothness constraints, pseudo-labeling iterations, and neighborhood-label histograms.
arXiv Detail & Related papers (2024-02-08T18:59:30Z)
- Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets [3.0894823679470087]
This paper introduces the Multi-Stage Folding and Unshared Masks methods to expand the search space in terms of both architecture and parameters.
By achieving high sparsity, competitive performance, and high memory efficiency with up to a 98.7% reduction, the approach demonstrates its suitability for energy-efficient graph processing.
arXiv Detail & Related papers (2023-12-06T02:16:44Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data in order to perform inference on a vast amount of test data.
In this paper, we push one step forward on ensemble learning of GNNs, with improved accuracy, robustness, and resilience to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning model that is trained on graphs and has been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to scale efficiently to large graphs.
As a remedy, distributed computing is a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- Training Graph Neural Networks with 1000 Layers [133.84813995275988]
We study reversible connections, group convolutions, weight tying, and equilibrium models to advance the memory and parameter efficiency of GNNs.
To the best of our knowledge, RevGNN-Deep is the deepest GNN in the literature by one order of magnitude.
arXiv Detail & Related papers (2021-06-14T15:03:00Z)
- Edgeless-GNN: Unsupervised Inductive Edgeless Network Embedding [7.391641422048645]
We study the problem of embedding edgeless nodes such as users who newly enter the underlying network.
We propose Edgeless-GNN, a new framework that enables GNNs to generate node embeddings even for edgeless nodes through unsupervised inductive learning.
arXiv Detail & Related papers (2021-04-12T06:37:31Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time by defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network (a minimal dual-mask sketch appears after this list).
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- Reducing Communication in Graph Neural Network Training [0.0]
Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data.
We introduce a family of parallel algorithms for training GNNs and show that they can asymptotically reduce communication compared to previous parallel GNN training methods.
arXiv Detail & Related papers (2020-05-07T07:45:09Z)
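
As referenced in the training-free GNN (TFGNN) entry above, the sketch below gives one minimal, hypothetical reading of labels as features (LaF): append the one-hot labels of training nodes (zeros for unlabeled nodes) to the node features and propagate them over a normalized adjacency, so that test nodes pick up label information from their neighborhood without any trained weights. The function names and the dense-adjacency simplification are assumptions, not the TFGNN implementation.

```python
import torch


def labels_as_features(x, y, train_mask, num_classes):
    """Append one-hot labels of training nodes to node features;
    non-training nodes get an all-zero label block (hypothetical LaF sketch)."""
    lab = torch.zeros(x.size(0), num_classes)
    lab[train_mask] = torch.nn.functional.one_hot(y[train_mask], num_classes).float()
    return torch.cat([x, lab], dim=1)


def propagate(a_hat, h, hops=2):
    """Parameter-free feature propagation over a row-normalized adjacency."""
    for _ in range(hops):
        h = a_hat @ h
    return h


# Toy usage: 4 nodes on a path graph (with self-loops), 2 classes,
# nodes 0 and 3 are labeled.
x = torch.randn(4, 5)
y = torch.tensor([0, 0, 1, 1])
train_mask = torch.tensor([True, False, False, True])
a = torch.tensor([[1., 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
a_hat = a / a.sum(dim=1, keepdim=True)
h = propagate(a_hat, labels_as_features(x, y, train_mask, num_classes=2))
pred = h[:, -2:].argmax(dim=1)   # read predictions off the propagated label block
print(pred)
```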
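
As referenced in the unified GNN sparsification (UGS) entry above, the sketch below illustrates the general idea of jointly learning two masks, one over the adjacency matrix and one over the layer weights, so that both the graph and the model can be pruned toward a graph lottery ticket. The soft sigmoid masks, the dense adjacency, and the `DualMaskedGCN` name are simplifying assumptions, not the original UGS code.

```python
import torch
import torch.nn as nn


class DualMaskedGCN(nn.Module):
    """One-layer GCN with learnable soft masks on both the adjacency
    matrix and the weights (UGS-flavoured sketch, not the original code)."""

    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)
        self.weight_mask = nn.Parameter(torch.ones(in_dim, out_dim))
        self.adj_mask = nn.Parameter(torch.ones(num_nodes, num_nodes))

    def forward(self, a_hat, x):
        a_sparse = a_hat * torch.sigmoid(self.adj_mask)            # mask graph edges
        w_sparse = self.weight * torch.sigmoid(self.weight_mask)   # mask weights
        return a_sparse @ x @ w_sparse

    def sparsity_penalty(self):
        # Encourage sparsity in both masks; after training, the lowest-magnitude
        # entries would be pruned and the masks re-binarized, iteratively.
        return torch.sigmoid(self.adj_mask).sum() + torch.sigmoid(self.weight_mask).sum()


# Toy usage: 6 nodes, 4 features, 2 classes.
n, d, c = 6, 4, 2
a_hat = torch.eye(n)                      # placeholder normalized adjacency
model = DualMaskedGCN(n, d, c)
logits = model(a_hat, torch.randn(n, d))
loss = logits.sum() + 1e-3 * model.sparsity_penalty()
loss.backward()                           # gradients flow to both masks
print(logits.shape)                       # torch.Size([6, 2])
```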