MaGNAS: A Mapping-Aware Graph Neural Architecture Search Framework for
Heterogeneous MPSoC Deployment
- URL: http://arxiv.org/abs/2307.08065v1
- Date: Sun, 16 Jul 2023 14:56:50 GMT
- Title: MaGNAS: A Mapping-Aware Graph Neural Architecture Search Framework for
Heterogeneous MPSoC Deployment
- Authors: Mohanad Odema, Halima Bouzidi, Hamza Ouarnoughi, Smail Niar, Mohammad
Abdullah Al Faruque
- Abstract summary: We propose a novel unified design-mapping approach for efficient processing of vision GNN workloads on heterogeneous MPSoC platforms.
MaGNAS employs a two-tier evolutionary search to identify optimal GNNs and mapping pairings that yield the best performance trade-offs.
Our experimental results demonstrate that MaGNAS provides a 1.57x latency speedup and 3.38x greater energy efficiency for several vision datasets executed on the Xavier MPSoC compared to the GPU-only deployment.
- Score: 8.29394286023338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are becoming increasingly popular for
vision-based applications due to their intrinsic capacity to model
structural and contextual relations between various parts of an image frame. On
another front, the rising popularity of deep vision-based applications at the
edge has been facilitated by the recent advancements in heterogeneous
multi-processor Systems on Chips (MPSoCs) that enable inference under
stringent, real-time execution requirements. By extension, GNNs employed for
vision-based applications must adhere to the same execution requirements. Yet
contrary to typical deep neural networks, the irregular flow of graph learning
operations poses a challenge to running GNNs on such heterogeneous MPSoC
platforms. In this paper, we propose a novel unified design-mapping approach
for efficient processing of vision GNN workloads on heterogeneous MPSoC
platforms. Particularly, we develop MaGNAS, a mapping-aware Graph Neural
Architecture Search framework. MaGNAS proposes a GNN architectural design space
coupled with prospective mapping options on a heterogeneous SoC to identify
model architectures that maximize on-device resource efficiency. To achieve
this, MaGNAS employs a two-tier evolutionary search to identify optimal GNNs
and mapping pairings that yield the best performance trade-offs. Through
designing a supernet derived from the recent Vision GNN (ViG) architecture, we
conducted experiments on four state-of-the-art vision datasets using both
(i) a real hardware SoC platform (NVIDIA Xavier AGX) and (ii) a
performance/cost model simulator for DNN accelerators. Our experimental results
demonstrate that MaGNAS provides a 1.57x latency speedup and 3.38x greater
energy efficiency for several vision datasets executed on the Xavier MPSoC
compared to the GPU-only deployment, while sustaining an average 0.11% accuracy
reduction from the baseline.
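To make the two-tier search concrete, below is a minimal Python sketch of a nested evolutionary search in the spirit of MaGNAS: an outer loop evolves candidate GNN architectures, while an inner loop evolves a per-layer GPU/DLA mapping for each candidate. The block options, toy cost model, and all identifiers are illustrative assumptions, not the paper's actual search space, encodings, or objectives.

```python
# Illustrative sketch of a two-tier (nested) evolutionary search.
# All names, encodings, and the toy cost model are assumptions for
# illustration, not the MaGNAS implementation.
import random

N_LAYERS = 4
OP_CHOICES = ["grapher_s", "grapher_l", "ffn", "skip"]  # hypothetical GNN block options
DEVICES = ["GPU", "DLA"]                                # e.g., NVIDIA Xavier AGX engines

def random_arch():
    return [random.choice(OP_CHOICES) for _ in range(N_LAYERS)]

def random_mapping():
    return [random.choice(DEVICES) for _ in range(N_LAYERS)]

def cost(arch, mapping):
    """Toy latency proxy standing in for on-device measurement or a
    performance/cost model of the heterogeneous MPSoC."""
    lat = sum((2.0 if op != "skip" else 0.1) * (1.0 if dev == "GPU" else 1.4)
              for op, dev in zip(arch, mapping))
    # crude penalty for GPU<->DLA transitions (data-transfer overhead)
    lat += sum(0.5 for a, b in zip(mapping, mapping[1:]) if a != b)
    return lat

def mutate(genes, choices):
    child = list(genes)
    child[random.randrange(len(child))] = random.choice(choices)
    return child

def inner_search(arch, gens=20, pop=8):
    """Inner tier: evolve a device mapping for a fixed architecture."""
    population = [random_mapping() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda m: cost(arch, m))
        survivors = population[: pop // 2]
        population = survivors + [mutate(random.choice(survivors), DEVICES)
                                  for _ in range(pop - len(survivors))]
    best = min(population, key=lambda m: cost(arch, m))
    return best, cost(arch, best)

def outer_search(gens=10, pop=6):
    """Outer tier: evolve architectures, scoring each by its best mapping."""
    population = [random_arch() for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda a: inner_search(a)[1])
        survivors = scored[: pop // 2]
        population = survivors + [mutate(random.choice(survivors), OP_CHOICES)
                                  for _ in range(pop - len(survivors))]
    best_arch = min(population, key=lambda a: inner_search(a)[1])
    best_map, best_cost = inner_search(best_arch)
    return best_arch, best_map, best_cost

if __name__ == "__main__":
    arch, mapping, lat = outer_search()
    print(f"arch={arch}\nmapping={mapping}\nproxy latency={lat:.2f}")
```

In the actual framework, the inner fitness would come from on-device measurements (the NVIDIA Xavier AGX) or the performance/cost model simulator, and the objective would jointly trade off latency, energy, and the accuracy of subnets sampled from the ViG-based supernet.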
Related papers
- HGNAS: Hardware-Aware Graph Neural Architecture Search for Edge Devices [11.1990060370675]
This work proposes HGNAS, a novel hardware-aware graph neural architecture search framework tailored to resource-constrained edge devices.
HGNAS integrates an efficient GNN hardware performance predictor that evaluates the latency and peak memory usage of GNNs in milliseconds.
It can achieve up to a 10.6x speedup and an 82.5% peak memory reduction with negligible accuracy loss compared to DGCNN on ModelNet40.
arXiv Detail & Related papers (2024-08-23T05:11:22Z) - GHOST: A Graph Neural Network Accelerator using Silicon Photonics [4.226093500082746]
Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data.
We present GHOST, the first silicon-photonic hardware accelerator for GNNs.
arXiv Detail & Related papers (2023-07-04T15:37:20Z) - Hardware-Aware Graph Neural Network Automated Design for Edge Computing
Platforms [9.345807588929734]
HGNAS is proposed as the first hardware-aware Graph Neural Architecture Search framework targeting resource-constrained edge devices.
Results show that HGNAS can achieve about a 10.6x speedup and an 88.2% peak memory reduction with a negligible accuracy loss compared to DGCNN on various edge devices.
arXiv Detail & Related papers (2023-03-20T05:18:31Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - FlowGNN: A Dataflow Architecture for Universal Graph Neural Network
Inference via Multi-Queue Streaming [1.566528527065232]
Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to graph-related problems.
Meeting the demand for novel GNN models and fast inference simultaneously is challenging because of the gap between developing efficient accelerators and the rapid creation of new GNN models.
We propose a generic dataflow architecture for GNN acceleration, named FlowGNN, which can flexibly support the majority of message-passing GNNs.
arXiv Detail & Related papers (2022-04-27T17:59:25Z) - PaSca: a Graph Neural Architecture Search System under the Scalable
Paradigm [24.294196319217907]
Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph-based tasks.
However, GNNs do not scale well with data size or the number of message-passing steps.
This paper proposes PaSca, a new paradigm and system that offers a principled approach to systematically construct and explore the design space for scalable GNNs.
arXiv Detail & Related papers (2022-03-01T17:26:50Z) - Space4HGNN: A Novel, Modularized and Reproducible Platform to Evaluate
Heterogeneous Graph Neural Network [51.07168862821267]
We propose a unified framework covering most HGNNs, consisting of three components: heterogeneous linear transformation, heterogeneous graph transformation, and heterogeneous message passing layer.
We then build a platform Space4HGNN by defining a design space for HGNNs based on the unified framework, which offers modularized components, reproducible implementations, and standardized evaluation for HGNNs.
arXiv Detail & Related papers (2022-02-18T13:11:35Z) - ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network, named ACE-HGNN, to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate significant and consistent improvements in model quality, along with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z) - Edge-featured Graph Neural Architecture Search [131.4361207769865]
We propose Edge-featured Graph Neural Architecture Search to find the optimal GNN architecture.
Specifically, we design rich entity and edge updating operations to learn high-order representations.
We show that EGNAS can search for better GNNs with higher performance than current state-of-the-art human-designed and search-based GNNs.
arXiv Detail & Related papers (2021-09-03T07:53:18Z) - Rethinking Graph Neural Network Search from Message-passing [120.62373472087651]
This paper proposes Graph Neural Architecture Search (GNAS) with a newly designed search space.
We design Graph Neural Architecture Paradigm (GAP) with tree-topology computation procedure and two types of fine-grained atomic operations.
Experiments show that our GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
arXiv Detail & Related papers (2021-03-26T06:10:41Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)