Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification
- URL: http://arxiv.org/abs/2406.16357v1
- Date: Mon, 24 Jun 2024 06:53:37 GMT
- Title: Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification
- Authors: Beini Xie, Heng Chang, Ziwei Zhang, Zeyang Zhang, Simin Wu, Xin Wang, Yuan Meng, Wenwu Zhu
- Abstract summary: This paper proposes a joint graph data and architecture mechanism that identifies important sub-architectures via valuable graph data.
To search for optimal lightweight Graph Neural Networks (GNNs), we propose a Lightweight Graph Neural Architecture Search with Graph SparsIfication and Network Pruning (GASSIP) method.
Our method achieves on-par or even higher node classification performance while using half or fewer of the searched GNNs' model parameters and a sparser graph.
- Score: 48.334100429553644
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph Neural Architecture Search (GNAS) has achieved superior performance on various graph-structured tasks. However, existing GNAS studies overlook the applications of GNAS in resource-constrained scenarios. This paper proposes a joint graph data and architecture mechanism that identifies important sub-architectures via valuable graph data. To search for optimal lightweight Graph Neural Networks (GNNs), we propose a Lightweight Graph Neural Architecture Search with Graph SparsIfication and Network Pruning (GASSIP) method. In particular, GASSIP comprises an operation-pruned architecture search module to enable efficient lightweight GNN search. Meanwhile, we design a novel curriculum graph data sparsification module with an architecture-aware edge-removing difficulty measurement to help select optimal sub-architectures. With the aid of two differentiable masks, we iteratively optimize these two modules to efficiently search for the optimal lightweight architecture. Extensive experiments on five benchmarks demonstrate the effectiveness of GASSIP. Notably, our method achieves on-par or even higher node classification performance while using half or fewer of the searched GNNs' model parameters and a sparser graph.
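For a concrete picture, below is a minimal, self-contained sketch of the two-differentiable-mask idea the abstract describes: a learnable edge mask for graph sparsification and learnable operation gates for pruning candidate GNN operations, optimized alternately. This is not the authors' GASSIP implementation; the curriculum edge-removing difficulty measurement is omitted, and the module names, toy data, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch (NOT the authors' GASSIP code) of jointly learning two differentiable
# masks: per-edge weights that sparsify the graph and per-operation gates that prune
# candidate GNN operations. The curriculum edge-removing difficulty measure from the
# paper is omitted; module names, toy data, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNOp(nn.Module):
    """One candidate operation: linear transform followed by dense neighborhood aggregation."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return adj @ self.lin(x)


class MaskedCell(nn.Module):
    """Soft mixture of candidate operations; sigmoid(op_mask) gates each candidate so
    low-weight operations can be pruned from the final lightweight architecture."""
    def __init__(self, in_dim, out_dim, num_candidates=3):
        super().__init__()
        self.ops = nn.ModuleList([DenseGCNOp(in_dim, out_dim) for _ in range(num_candidates)])
        self.op_mask = nn.Parameter(torch.zeros(num_candidates))

    def forward(self, x, adj):
        gates = torch.sigmoid(self.op_mask)
        return sum(g * op(x, adj) for g, op in zip(gates, self.ops))


class EdgeMask(nn.Module):
    """Differentiable per-edge mask; thresholding it after search yields a sparser graph."""
    def __init__(self, num_edges):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_edges))

    def forward(self, edge_index, num_nodes):
        w = torch.sigmoid(self.logits)
        adj = torch.zeros(num_nodes, num_nodes)
        adj[edge_index[0], edge_index[1]] = w  # gradients flow back to the edge logits
        return adj


# Toy node-classification data in place of a real benchmark graph.
n, d, c = 20, 8, 3
x, y = torch.randn(n, d), torch.randint(0, c, (n,))
edge_index = torch.randint(0, n, (2, 60))

cell1, cell2 = MaskedCell(d, 16), MaskedCell(16, c)
edge_mask = EdgeMask(edge_index.size(1))
opt_arch = torch.optim.Adam(list(cell1.parameters()) + list(cell2.parameters()), lr=1e-2)
opt_graph = torch.optim.Adam(edge_mask.parameters(), lr=1e-2)

for step in range(100):
    # Alternate between updating the architecture (operations + op masks) and the edge mask.
    for opt in (opt_arch, opt_graph):
        opt.zero_grad()
        adj = edge_mask(edge_index, n)
        out = cell2(F.relu(cell1(x, adj)), adj)
        # L1-style penalties push both masks toward sparsity: fewer parameters, fewer edges.
        loss = (F.cross_entropy(out, y)
                + 1e-3 * torch.sigmoid(cell1.op_mask).sum()
                + 1e-3 * torch.sigmoid(cell2.op_mask).sum()
                + 1e-3 * torch.sigmoid(edge_mask.logits).sum())
        loss.backward()
        opt.step()
```

After the search, operations and edges whose mask values fall below a threshold would be dropped, yielding the lightweight GNN and the sparser graph that the abstract reports.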
Related papers
- Efficient and Explainable Graph Neural Architecture Search via Monte-Carlo Tree Search [5.076419064097733]
Graph neural networks (GNNs) are powerful tools for performing data science tasks in various domains.
To save human effort and computational cost, graph neural architecture search (Graph NAS) has been used to search for a sub-optimal GNN architecture.
We propose ExGNAS, which consists of (i) a simple search space that can adapt to various graphs and (ii) a search algorithm that makes the decision process explainable.
arXiv Detail & Related papers (2023-08-30T03:21:45Z)
- NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search [55.75621026447599]
We propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS.
Specifically, we construct a unified, expressive yet compact search space, covering 26,206 unique graph neural network (GNN) architectures.
Based on our proposed benchmark, the performance of GNN architectures can be directly obtained by a look-up table without any further computation (an illustrative sketch of this look-up idea appears after this related-papers list).
arXiv Detail & Related papers (2022-06-18T10:17:15Z)
- Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
Its relation-aware operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z)
- GraphPAS: Parallel Architecture Search for Graph Neural Networks [12.860313120881996]
We propose a parallel graph architecture search (GraphPAS) framework for graph neural networks.
In GraphPAS, we explore the search space in parallel with a sharing-based evolutionary learning scheme.
The experimental results show that GraphPAS outperforms state-of-the-art models in both efficiency and accuracy.
arXiv Detail & Related papers (2021-12-07T02:55:24Z)
- ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate significant and consistent improvements in model quality, along with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z)
- Edge-featured Graph Neural Architecture Search [131.4361207769865]
We propose Edge-featured Graph Neural Architecture Search to find the optimal GNN architecture.
Specifically, we design rich entity and edge updating operations to learn high-order representations.
We show that EGNAS can find GNNs with higher performance than current state-of-the-art human-designed and search-based GNNs.
arXiv Detail & Related papers (2021-09-03T07:53:18Z)
- Pooling Architecture Search for Graph Classification [36.728077433219916]
Graph neural networks (GNNs) are designed to learn node-level representation based on neighborhood aggregation schemes.
Pooling methods are applied after the aggregation operation to generate coarse-grained graphs.
It is a challenging problem to design a universal pooling architecture to perform well in most cases.
We propose to use neural architecture search (NAS) to search for adaptive pooling architectures for graph classification.
arXiv Detail & Related papers (2021-08-24T09:03:03Z)
- Rethinking Graph Neural Network Search from Message-passing [120.62373472087651]
This paper proposes Graph Neural Architecture Search (GNAS) with a newly designed search space.
We design a Graph Neural Architecture Paradigm (GAP) with a tree-topology computation procedure and two types of fine-grained atomic operations.
Experiments show that our GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
arXiv Detail & Related papers (2021-03-26T06:10:41Z)
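The NAS-Bench-Graph entry above states that architecture performance can be read directly from a look-up table with no further computation. The snippet below illustrates that idea only; the table contents, key format, and candidate operations are made up and are not the benchmark's actual data or API.

```python
# Hypothetical look-up-table evaluation: accuracies are precomputed once for every
# architecture in a small search space, so "evaluating" a candidate is a dictionary
# read instead of a training run. All values below are illustrative placeholders.
import itertools
import random

CANDIDATE_OPS = ["gcn", "gat", "sage"]

# Stand-in for a released result table covering the whole (here: two-layer) search space.
lookup_table = {
    arch: random.uniform(0.6, 0.9)
    for arch in itertools.product(CANDIDATE_OPS, repeat=2)
}

def evaluate(arch):
    """Zero-training-cost evaluation: a single table read."""
    return lookup_table[arch]

# Any search strategy (random, evolutionary, RL) becomes cheap once the table exists.
best = max(lookup_table, key=evaluate)
print("best architecture:", best, "accuracy:", round(evaluate(best), 3))
```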
This list is automatically generated from the titles and abstracts of the papers on this site.