GRANITE: A Graph Neural Network Model for Basic Block Throughput Estimation
- URL: http://arxiv.org/abs/2210.03894v2
- Date: Tue, 11 Oct 2022 02:01:49 GMT
- Title: GRANITE: A Graph Neural Network Model for Basic Block Throughput Estimation
- Authors: Ondrej Sykora, Phitchaya Mangpo Phothilimthana, Charith Mendis, and Amir Yazdanbakhsh
- Abstract summary: We introduce a new machine learning model that estimates throughput of basic blocks across different microarchitectures.
Results establish a new state-of-the-art for basic block performance estimation with an average test error of 6.9%.
We propose the use of multi-task learning with independent multi-layer feed-forward decoder networks.
- Score: 3.739243122393041
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analytical hardware performance models yield swift estimation of
desired hardware performance metrics. However, developing these analytical
models for modern processors with sophisticated microarchitectures is an
extremely laborious task and requires a firm understanding of the target
microarchitecture's internal structure. In this paper, we introduce GRANITE, a
new machine learning model that estimates the throughput of basic blocks
across different microarchitectures. GRANITE uses a graph representation of
basic blocks that captures both structural and data dependencies between
instructions. This representation is processed by a graph neural network that
takes advantage of the relational information captured in the graph and learns
a rich neural representation of the basic block, which allows more precise
throughput estimation. Our results establish a new state-of-the-art for basic
block performance estimation, with an average test error of 6.9% across a wide
range of basic blocks and microarchitectures for the x86-64 target. Compared
to recent work, this reduces the error by 1.7% while improving training and
inference throughput by approximately 3.0x. In addition, we propose the use of
multi-task learning with independent multi-layer feed-forward decoder
networks. Our results show that this technique further improves the precision
of all learned models while significantly reducing per-microarchitecture
training costs. We perform an extensive set of ablation studies and
comparisons with prior work, distilling a set of methods that achieve high
accuracy for basic block performance estimation.
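To make the two ideas in the abstract concrete, below is a minimal,
self-contained sketch of the recipe: a basic block becomes a dependency graph,
a few rounds of message passing produce a block embedding, and independent
feed-forward decoder heads emit one throughput estimate per microarchitecture.
This is an illustrative toy under stated assumptions, not the authors'
released model; the node/edge taxonomy, feature sizes, and all names
(block_to_graph, GraniteSketch) are hypothetical.

```python
# Hypothetical sketch of the GRANITE recipe; not the paper's released code.
import torch
import torch.nn as nn

def block_to_graph(block, vocab):
    """Encode a basic block as (node tokens, adjacency matrix).

    block: list of (opcode, reads, writes) tuples. Edges follow def->use data
    dependencies plus program-order (structural) links; the paper's graph is
    richer (e.g., explicit operand/value nodes), so this is a toy stand-in.
    """
    n = len(block)
    tokens = torch.tensor([vocab[op] for op, _, _ in block])
    adj = torch.zeros(n, n)
    last_writer = {}                                 # register -> last writing insn
    for i, (_, reads, writes) in enumerate(block):
        for reg in reads:
            if reg in last_writer:
                adj[last_writer[reg], i] = 1.0       # data-dependency edge
        if i > 0:
            adj[i - 1, i] = 1.0                      # structural (order) edge
        for reg in writes:
            last_writer[reg] = i
    return tokens, adj

class GraniteSketch(nn.Module):
    """Message-passing encoder + one independent MLP decoder per microarchitecture."""

    def __init__(self, vocab_size, hidden=64, num_uarchs=3, rounds=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.upd = nn.GRUCell(hidden, hidden)
        self.rounds = rounds
        # Multi-task heads: independent feed-forward decoders sharing one
        # graph encoder, mirroring the multi-task proposal in the abstract.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_uarchs))

    def forward(self, tokens, adj):
        h = self.embed(tokens)                       # (num_nodes, hidden)
        for _ in range(self.rounds):
            m = adj.T @ self.msg(h)                  # aggregate from predecessors
            h = self.upd(m, h)                       # gated node-state update
        g = h.sum(dim=0)                             # graph-level readout
        return torch.cat([head(g) for head in self.heads])  # one estimate per uarch

# Toy x86-64 block: `add` and `mul` depend on earlier register writes.
vocab = {"mov": 0, "add": 1, "mul": 2}
block = [("mov", [], ["rax"]),
         ("add", ["rax", "rbx"], ["rax"]),
         ("mul", ["rax"], ["rax", "rdx"])]
tokens, adj = block_to_graph(block, vocab)
print(GraniteSketch(vocab_size=len(vocab))(tokens, adj))  # 3 throughput estimates
```

The per-microarchitecture heads are what let a single shared encoder serve
several targets at once, which is how the paper reduces per-microarchitecture
training cost while improving the precision of every learned model.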
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses these constraints by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Multi-conditioned Graph Diffusion for Neural Architecture Search [8.290336491323796]
We present a graph diffusion-based NAS approach that uses discrete conditional graph diffusion processes to generate high-performing neural network architectures.
We show promising results on six standard benchmarks, yielding novel and unique architectures at a fast speed.
arXiv Detail & Related papers (2024-03-09T21:45:31Z)
- FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
arXiv Detail & Related papers (2022-07-04T09:05:25Z)
- Precise Learning of Source Code Contextual Semantics via Hierarchical Dependence Structure and Graph Attention Networks [28.212889828892664]
We propose a novel source code model embedded with hierarchical dependencies.
We introduce the syntactic structure of the basic block, i.e., its corresponding AST, into the source code model to provide sufficient information.
The results show that our model reduces the scale of parameters by 50% and achieves 4% improvement on accuracy on program classification task.
arXiv Detail & Related papers (2021-11-20T04:03:42Z)
- Using Graph Neural Networks to model the performance of Deep Neural Networks [2.1151356984322307]
We develop a novel performance model that adopts a graph representation.
Experimental evaluation shows a 7.75x and 12x reduction in prediction error compared to the Halide and TVM models, respectively.
arXiv Detail & Related papers (2021-08-27T20:20:17Z)
- Balancing Accuracy and Latency in Multipath Neural Networks [0.09668407688201358]
We use a one-shot neural architecture search model to implicitly evaluate the performance of an intractable number of neural networks.
We show that our method can accurately model the relative performance between models with different latencies and predict the performance of unseen models with good precision across different datasets.
arXiv Detail & Related papers (2021-04-25T00:05:48Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- Learning What to Learn for Video Object Segmentation [157.4154825304324]
We introduce an end-to-end trainable VOS architecture that integrates a differentiable few-shot learning module.
This internal learner is designed to predict a powerful parametric model of the target.
We set a new state-of-the-art on the large-scale YouTube-VOS 2018 dataset by achieving an overall score of 81.5.
arXiv Detail & Related papers (2020-03-25T17:58:43Z)
- Tidying Deep Saliency Prediction Architectures [6.613005108411055]
In this paper, we identify four key components of saliency models, i.e., input features, multi-level integration, readout architecture, and loss functions.
We propose two novel end-to-end architectures, SimpleNet and MDNSal, which are neater, more minimal, and more interpretable, and which achieve state-of-the-art performance on public saliency benchmarks.
arXiv Detail & Related papers (2020-03-10T19:34:49Z)