Do Not Train It: A Linear Neural Architecture Search of Graph Neural
Networks
- URL: http://arxiv.org/abs/2305.14065v3
- Date: Fri, 16 Jun 2023 10:33:21 GMT
- Title: Do Not Train It: A Linear Neural Architecture Search of Graph Neural
Networks
- Authors: Peng Xu, Lin Zhang, Xuanzhou Liu, Jiaqi Sun, Yue Zhao, Haiqin Yang,
Bei Yu
- Abstract summary: We develop a novel NAS-GNN method, namely neural architecture coding (NAC).
Our approach leads to state-of-the-art performance, which is up to $200\times$ faster and $18.8\%$ more accurate than the strong baselines.
- Score: 15.823247346294089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) for graph neural networks (GNNs),
called NAS-GNNs, has achieved significant performance over manually designed
GNN architectures. However, these methods inherit issues from conventional NAS
methods, such as high computational cost and optimization difficulty. More
importantly, previous NAS methods have ignored a unique property of GNNs: they
possess expressive power even without training. Keeping the weights randomly
initialized, we can therefore seek the optimal architecture parameters via a
sparse coding objective and derive a novel NAS-GNN method, namely neural
architecture coding (NAC). Consequently, NAC follows a no-update scheme on the
GNN weights and runs in linear time. Empirical evaluations on multiple GNN
benchmark datasets demonstrate that our approach leads to state-of-the-art
performance, up to $200\times$ faster and $18.8\%$ more accurate than strong
baselines.
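To make the no-update idea concrete, below is a minimal sketch, assuming a DARTS-style supernet in which each layer mixes a few candidate aggregation operations. It is not the authors' implementation: the candidate operation set, the dense adjacency matrix, and the L1-regularized optimization of the architecture coefficients `alpha` are illustrative assumptions standing in for the paper's sparse coding formulation and linear-time solver.

```python
# Minimal, hypothetical sketch (not the authors' code) of the "do not train it"
# idea: all GNN operation weights stay at their random initialization and are
# never updated; only the architecture coefficients `alpha` are optimized, here
# with an L1 (sparse-coding-style) penalty. NAC's actual objective and solver
# may differ; names, candidate ops, and the dense adjacency are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CandidateOp(nn.Module):
    """One candidate GNN operation with frozen, randomly initialized weights."""

    def __init__(self, dim, aggregate):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.aggregate = aggregate              # 'mean', 'sum', or 'skip'
        for p in self.parameters():             # no-update scheme: never trained
            p.requires_grad_(False)

    def forward(self, x, adj):
        if self.aggregate == 'skip':
            return x
        h = adj @ x                             # neighborhood sum (dense adjacency)
        if self.aggregate == 'mean':
            h = h / adj.sum(dim=1, keepdim=True).clamp(min=1)
        return F.relu(self.lin(h))


class NACSupernet(nn.Module):
    """Supernet whose layers mix candidate ops with architecture weights alpha."""

    def __init__(self, dim, n_classes, n_layers=2, ops=('mean', 'sum', 'skip')):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.ModuleList(CandidateOp(dim, o) for o in ops) for _ in range(n_layers))
        self.alpha = nn.Parameter(torch.zeros(n_layers, len(ops)))  # only trainable part
        self.readout = nn.Linear(dim, n_classes)
        for p in self.readout.parameters():     # readout also kept at random init
            p.requires_grad_(False)

    def forward(self, x, adj):
        for ops, a in zip(self.layers, self.alpha):
            w = torch.softmax(a, dim=0)
            x = sum(wi * op(x, adj) for wi, op in zip(w, ops))
        return self.readout(x)


def search(model, x, adj, y, train_mask, steps=200, lam=1e-3, lr=0.1):
    """Optimize alpha only: task loss plus an L1 sparsity penalty on alpha."""
    opt = torch.optim.Adam([model.alpha], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x, adj)
        loss = (F.cross_entropy(logits[train_mask], y[train_mask])
                + lam * model.alpha.abs().sum())
        loss.backward()
        opt.step()
    return model.alpha.argmax(dim=1)            # strongest candidate op per layer
```

Only `alpha` accumulates gradients in this sketch; the frozen GNN weights act as a fixed random feature extractor, which is the property of untrained GNNs that the paper exploits.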
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- FR-NAS: Forward-and-Reverse Graph Predictor for Efficient Neural Architecture Search [10.699485270006601]
We introduce a novel graph neural network (GNN) predictor for neural architecture search (NAS).
This predictor encodes neural architectures into vector representations by combining both the conventional and inverse graph views.
The experimental results showcase a significant improvement in prediction accuracy, with a 3%--16% increase in Kendall-tau correlation.
arXiv Detail & Related papers (2024-04-24T03:22:49Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Architecture Augmentation for Performance Predictor Based on Graph Isomorphism [15.478663248038307]
We propose an effective deep neural network (DNN) architecture augmentation method named GIAug.
We show that GIAug can significantly enhance the performance of most state-of-the-art peer predictors.
In addition, GIAug can save up to three orders of magnitude in computation cost on ImageNet.
arXiv Detail & Related papers (2022-07-03T09:04:09Z)
- Training Graph Neural Networks with 1000 Layers [133.84813995275988]
We study reversible connections, group convolutions, weight tying, and equilibrium models to advance the memory and parameter efficiency of GNNs.
To the best of our knowledge, RevGNN-Deep is the deepest GNN in the literature by one order of magnitude.
arXiv Detail & Related papers (2021-06-14T15:03:00Z)
- Search to aggregate neighborhood for graph neural network [47.47628113034479]
We propose a framework, Search to Aggregate NEighborhood (SANE), to automatically design data-specific GNN architectures.
By designing a novel and expressive search space, we propose a differentiable search algorithm, which is more efficient than previous reinforcement learning based methods.
arXiv Detail & Related papers (2021-04-14T03:15:19Z)
- Scalable Neural Tangent Kernel of Recurrent Architectures [8.487185704099923]
Kernels derived from deep neural networks (DNNs) in the infinite-width limit provide high performance in a range of machine learning tasks.
We extend the family of kernels associated with recurrent neural networks (RNNs) to more complex architectures, namely bidirectional RNNs and RNNs with average pooling.
arXiv Detail & Related papers (2020-12-09T04:36:34Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Simplifying Architecture Search for Graph Neural Network [38.45540097927176]
We propose the SNAG framework, consisting of a novel search space and a reinforcement-learning-based search algorithm.
Experiments on real-world datasets demonstrate the effectiveness of the SNAG framework compared to human-designed GNNs and NAS methods.
arXiv Detail & Related papers (2020-08-26T16:24:03Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
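As a rough illustration of the binarization idea summarized in the entry above, here is a minimal sketch of a graph convolution whose node representations and weights are binarized to {-1, +1}, using a straight-through estimator for gradients. It is a generic sketch under stated assumptions, not the BGN authors' implementation; all names below are hypothetical.

```python
# Minimal, hypothetical sketch of a binarized graph convolution (not the BGN code):
# node representations and layer weights are binarized to {-1, +1}, and a clipped
# straight-through estimator lets gradients pass through the binarization step.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """Binarize to {-1, +1} in the forward pass; clipped identity in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # pass gradients only where |x| <= 1


class BinaryGraphConv(nn.Module):
    """Graph convolution with binary node representations and binary parameters."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, x, adj):
        xb = BinarizeSTE.apply(x)                   # binary node representations
        wb = BinarizeSTE.apply(self.weight)         # binary network parameters
        return adj @ (xb @ wb)                      # aggregate binarized messages
```

Because messages and weights take only two values, the matrix products can in principle be replaced by bitwise operations, which is where the time and space savings reported above come from.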
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.