PaSca: a Graph Neural Architecture Search System under the Scalable
Paradigm
- URL: http://arxiv.org/abs/2203.00638v1
- Date: Tue, 1 Mar 2022 17:26:50 GMT
- Title: PaSca: a Graph Neural Architecture Search System under the Scalable
Paradigm
- Authors: Wentao Zhang, Yu Shen, Zheyu Lin, Yang Li, Xiaosen Li, Wen Ouyang,
Yangyu Tao, Zhi Yang, Bin Cui
- Abstract summary: Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph-based tasks.
However, GNNs do not scale well to data size and message passing steps.
This paper proposes PaSca, a new paradigm and system that offers a principled approach to systematically construct and explore the design space for scalable GNNs.
- Score: 24.294196319217907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved state-of-the-art performance in
various graph-based tasks. However, as mainstream GNNs are designed based on
the neural message passing mechanism, they do not scale well to data size and
message passing steps. Although there has been an emerging interest in the
design of scalable GNNs, current research focuses on specific GNN designs,
rather than the general design space, limiting the discovery of potentially
scalable GNN models. This paper proposes PaSca, a new paradigm and system that
offers a principled approach to systematically construct and explore the design
space for scalable GNNs, rather than studying individual designs. Through
deconstructing the message passing mechanism, PaSca presents a novel Scalable
Graph Neural Architecture Paradigm (SGAP), together with a general architecture
design space consisting of 150k different designs. Following the paradigm, we
implement an auto-search engine that can automatically search for well-performing
and scalable GNN architectures, balancing the trade-off between multiple
criteria (e.g., accuracy and efficiency) via multi-objective optimization.
Empirical studies on ten benchmark datasets demonstrate that the representative
instances (i.e., PaSca-V1, V2, and V3) discovered by our system achieve
consistent performance among competitive baselines. Concretely, PaSca-V3
outperforms the state-of-the-art GNN method JK-Net by 0.4% in predictive
accuracy on our large industry dataset while achieving up to 28.3x training
speedups.
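The scalability idea described above, decoupling graph propagation from model training so that training never touches the graph, can be sketched as follows. This is an illustrative sketch only, not PaSca's actual implementation: the row-normalized adjacency matrix, the concatenation aggregator, and all function names are assumptions.

```python
import numpy as np

def precompute_propagation(adj_norm, features, num_steps):
    """Pre-processing stage: propagate node features over the graph
    ahead of training, so the training stage only sees fixed vectors."""
    props = [features]
    h = features
    for _ in range(num_steps):
        h = adj_norm @ h  # one matrix multiply per propagation step
        props.append(h)
    # A simple aggregator: concatenate all propagation depths per node.
    return np.concatenate(props, axis=1)

# Toy graph: 3 nodes in a line (0-1-2), row-normalized adjacency
# with self-loops; node features are one-hot vectors.
A = np.array([[0.5, 0.5, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 0.5, 0.5]])
X = np.eye(3)
Z = precompute_propagation(A, X, num_steps=2)
print(Z.shape)  # (3, 9): 3 propagation depths x 3 features per node
```

A plain MLP trained on `Z` then replaces per-epoch message passing; since propagation runs once as pre-processing, training cost no longer grows with the number of message passing steps.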
Related papers
- MaGNAS: A Mapping-Aware Graph Neural Architecture Search Framework for
Heterogeneous MPSoC Deployment [8.29394286023338]
We propose a novel unified design-mapping approach for efficient processing of vision GNN workloads on heterogeneous MPSoC platforms.
MaGNAS employs a two-tier evolutionary search to identify optimal GNNs and mapping pairings that yield the best performance trade-offs.
Our experimental results demonstrate that MaGNAS provides a 1.57x latency speedup and is 3.38x more energy-efficient for several vision datasets executed on the Xavier MPSoC vs. the GPU-only deployment.
arXiv Detail & Related papers (2023-07-16T14:56:50Z) - GNN at the Edge: Cost-Efficient Graph Neural Network Processing over
Distributed Edge Servers [24.109721494781592]
Graph Neural Networks (GNNs) at the edge are still under exploration, in stark contrast to their broad edge adoption.
This paper studies the cost optimization for distributed GNN processing over a multi-tier heterogeneous edge network.
We show that our approach achieves superior performance over de facto baselines, with more than 95.8% cost reduction and fast convergence.
arXiv Detail & Related papers (2022-10-31T13:03:16Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - Space4HGNN: A Novel, Modularized and Reproducible Platform to Evaluate
Heterogeneous Graph Neural Network [51.07168862821267]
We propose a unified framework covering most HGNNs, consisting of three components: heterogeneous linear transformation, heterogeneous graph transformation, and heterogeneous message passing layer.
We then build a platform Space4HGNN by defining a design space for HGNNs based on the unified framework, which offers modularized components, reproducible implementations, and standardized evaluation for HGNNs.
arXiv Detail & Related papers (2022-02-18T13:11:35Z) - Edge-featured Graph Neural Architecture Search [131.4361207769865]
We propose Edge-featured Graph Neural Architecture Search to find the optimal GNN architecture.
Specifically, we design rich entity and edge updating operations to learn high-order representations.
We show EGNAS can search better GNNs with higher performance than current state-of-the-art human-designed and searched-based GNNs.
arXiv Detail & Related papers (2021-09-03T07:53:18Z) - Rethinking Graph Neural Network Search from Message-passing [120.62373472087651]
This paper proposes Graph Neural Architecture Search (GNAS) with novel-designed search space.
We design Graph Neural Architecture Paradigm (GAP) with tree-topology computation procedure and two types of fine-grained atomic operations.
Experiments show that our GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
arXiv Detail & Related papers (2021-03-26T06:10:41Z) - Design Space for Graph Neural Networks [81.88707703106232]
We study the architectural design space for Graph Neural Networks (GNNs) which consists of 315,000 different designs over 32 different predictive tasks.
Our key results include: (1) A comprehensive set of guidelines for designing well-performing GNNs; (2) while best GNN designs for different tasks vary significantly, the GNN task space allows for transferring the best designs across different tasks; (3) models discovered using our design space achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-11-17T18:59:27Z) - Benchmarking Graph Neural Networks [75.42159546060509]
Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
For any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress.
The GitHub repository has reached 1,800 stars and 339 forks, which demonstrates the utility of the proposed open-source framework.
arXiv Detail & Related papers (2020-03-02T15:58:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.