The Case for Co-Designing Model Architectures with Hardware
- URL: http://arxiv.org/abs/2401.14489v2
- Date: Tue, 30 Jan 2024 21:26:09 GMT
- Title: The Case for Co-Designing Model Architectures with Hardware
- Authors: Quentin Anthony, Jacob Hatef, Deepak Narayanan, Stella Biderman, Stas
Bekman, Junqi Yin, Aamir Shafi, Hari Subramoni, Dhabaleswar Panda
- Abstract summary: We provide a set of guidelines for users to maximize the runtime performance of their transformer models.
We find the throughput of models with efficient model shapes is up to 39% higher.
- Score: 13.022505733049597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While GPUs are responsible for training the vast majority of state-of-the-art
deep learning models, the implications of their architecture are often
overlooked when designing new deep learning (DL) models. As a consequence,
modifying a DL model to be more amenable to the target hardware can
significantly improve the runtime performance of DL training and inference. In
this paper, we provide a set of guidelines for users to maximize the runtime
performance of their transformer models. These guidelines have been created by
carefully considering the impact of various model hyperparameters controlling
model shape on the efficiency of the underlying computation kernels executed on
the GPU. We find the throughput of models with efficient model shapes is up to
39% higher while preserving accuracy compared to models with a similar number
of parameters but with unoptimized shapes.
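To make the guideline concrete, here is a minimal sketch of shape rounding, assuming the general tensor-core tiling rule that GEMM dimensions divisible by large powers of two map cleanly onto hardware tiles. The specific multiples used (64 for the padded vocabulary, 8 for the head dimension) are illustrative assumptions, not necessarily the paper's exact recommendations.

```python
# Hedged sketch: round transformer shape hyperparameters up to GPU-friendly
# values. The multiples below are illustrative; consult the paper for its
# exact guidelines.

def round_up(value: int, multiple: int) -> int:
    """Round `value` up to the nearest multiple of `multiple`."""
    return ((value + multiple - 1) // multiple) * multiple

def suggest_shape(vocab_size: int, hidden: int, num_heads: int) -> dict:
    padded_vocab = round_up(vocab_size, 64)       # pad the embedding/output GEMM dim
    hidden = round_up(hidden, num_heads)          # head_dim must divide evenly
    head_dim = round_up(hidden // num_heads, 8)   # tensor cores favor dims % 8 == 0
    return {
        "padded_vocab_size": padded_vocab,
        "hidden_size": head_dim * num_heads,
        "head_dim": head_dim,
    }

if __name__ == "__main__":
    # A GPT-2-like shape with an awkward vocabulary size of 50257
    print(suggest_shape(vocab_size=50257, hidden=1600, num_heads=25))
    # {'padded_vocab_size': 50304, 'hidden_size': 1600, 'head_dim': 64}
```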
Related papers
- Exploring the design space of deep-learning-based weather forecasting systems [56.129148006412855]
This paper systematically analyzes the impact of different design choices on deep-learning-based weather forecasting systems.
We study fixed-grid architectures such as UNet, fully convolutional architectures, and transformer-based models.
We propose a hybrid system that combines the strong performance of fixed-grid models with the flexibility of grid-invariant architectures.
arXiv Detail & Related papers (2024-10-09T22:25:50Z)
- Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers to reduce the model size; a sketch of this sharing pattern follows below.
arXiv Detail & Related papers (2023-03-27T02:34:09Z)
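A minimal sketch of the sharing pattern, assuming a simplified two-sided factorization in place of the full MPO decomposition: each layer rebuilds its weight from one central factor shared by all layers plus small layer-specific side factors. The class and dimensions below are invented for illustration.

```python
import torch
import torch.nn as nn

class SharedCentralLinear(nn.Module):
    """A layer whose weight is rebuilt from a shared central factor plus
    small layer-specific side factors (a stand-in for MPO local tensors)."""

    def __init__(self, central: nn.Parameter, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.central = central                                     # shared (rank, rank)
        self.left = nn.Parameter(torch.randn(d_out, rank) * 0.02)  # layer-specific
        self.right = nn.Parameter(torch.randn(rank, d_in) * 0.02)  # layer-specific

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.left @ self.central @ self.right             # (d_out, d_in)
        return x @ weight.T

d_model, rank, n_layers = 256, 64, 12
central = nn.Parameter(torch.randn(rank, rank) * 0.02)  # one copy serves every layer
layers = nn.ModuleList(
    [SharedCentralLinear(central, d_model, d_model, rank) for _ in range(n_layers)]
)
x = torch.randn(4, d_model)
for layer in layers:
    x = torch.relu(layer(x))
```

Per-layer cost drops from d_model^2 parameters to 2 * d_model * rank, while the rank x rank central factor is paid for only once.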
- Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training [17.556432199389615]
Slapo is a schedule language that decouples the execution of a tensor-level operator from its arithmetic definition; a toy illustration of this decoupling follows below.
We show that Slapo can improve training throughput by up to 2.92x on a single machine with 8 NVIDIA V100 GPUs.
arXiv Detail & Related papers (2023-02-16T00:34:53Z)
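A toy illustration of the decoupling idea, not Slapo's actual API: the model keeps its plain arithmetic definition, and a separate schedule object applies optimizations afterwards. `ToySchedule` and the swap below are invented for this sketch.

```python
import torch.nn as nn

class ToySchedule:
    """Invented stand-in for a schedule: holds a finished model and applies
    optimizations to it without touching its definition."""

    def __init__(self, model: nn.Module):
        self.model = model

    def replace(self, name: str, new_module: nn.Module) -> None:
        """Swap a named submodule for an (ostensibly faster) implementation."""
        parent = self.model
        *path, leaf = name.split(".")
        for part in path:
            parent = getattr(parent, part)
        setattr(parent, leaf, new_module)

# The arithmetic definition stays clean and framework-native.
model = nn.Sequential(nn.Linear(128, 512), nn.GELU(), nn.Linear(512, 128))

# Optimization lives in the schedule, applied progressively.
sched = ToySchedule(model)
sched.replace("1", nn.ReLU())  # e.g. trade exact GELU for a cheaper kernel
print(model)
```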
arXiv Detail & Related papers (2023-02-16T00:34:53Z) - Re-parameterizing Your Optimizers rather than Architectures [119.08740698936633]
We propose a novel paradigm of incorporating model-specific prior knowledge into optimizers and using them to train generic (simple) models.
As an implementation, we propose a novel methodology to add prior knowledge by modifying the gradients according to a set of model-specific hyperparameters; this gradient modification is sketched below.
Focusing on a VGG-style plain model, we showcase that such a simple model trained with a RepOptimizer, referred to as RepOpt-VGG, performs on par with recent well-designed models.
arXiv Detail & Related papers (2022-05-30T16:55:59Z)
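A minimal sketch of the gradient modification, assuming a fixed per-parameter multiplier as the model-specific hyperparameter; the actual work derives these multipliers from an equivalent re-parameterized structure. `GradReparamSGD` and the scale value are invented for illustration.

```python
import torch

class GradReparamSGD(torch.optim.SGD):
    """SGD variant that rescales each parameter's gradient by a fixed
    constant before the update, injecting a structural prior.
    For simplicity this sketch assumes a single parameter group."""

    def __init__(self, params_with_scales, lr: float):
        params, self.scales = zip(*params_with_scales)
        super().__init__(list(params), lr=lr)

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p, s in zip(group["params"], self.scales):
                if p.grad is not None:
                    p.grad.mul_(s)  # modify the gradient, not the architecture
        return super().step(closure)

w = torch.randn(4, 4, requires_grad=True)
opt = GradReparamSGD([(w, 2.0)], lr=0.1)  # 2.0 is a hypothetical scale
loss = (w ** 2).sum()
loss.backward()
opt.step()
```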
- Efficient Deep Learning Methods for Identification of Defective Casting Products [0.0]
In this paper, we compare and contrast various pre-trained and custom-built AI architectures.
Our results show that custom architectures are more efficient than pre-trained mobile architectures.
Augmentation experiments have also been carried out on the custom architectures to make the models more robust and generalizable.
arXiv Detail & Related papers (2022-05-14T19:35:05Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of the data; the core linear-attention trick is sketched below.
Experiments show that our model achieves comparable performance while using far fewer trainable parameters, with high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
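A sketch of the core linear-attention trick (unsegmented, so not STAR's exact formulation): a positive feature map replaces the softmax, letting keys and values be summarized once so the cost is linear rather than quadratic in sequence length.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps: float = 1e-6):
    """Softmax-free attention phi(Q) (phi(K)^T V): O(N) in sequence length."""
    phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1     # positive feature map
    kv = torch.einsum("bnd,bne->bde", phi_k, v)   # summarize keys/values once
    z = phi_k.sum(dim=1)                          # normalizer, shape (batch, dim)
    num = torch.einsum("bnd,bde->bne", phi_q, kv)
    den = torch.einsum("bnd,bd->bn", phi_q, z).unsqueeze(-1) + eps
    return num / den

q = torch.randn(2, 100, 32)                       # (batch, frames, channels)
print(linear_attention(q, q, q).shape)            # torch.Size([2, 100, 32])
```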
- Understanding Training Efficiency of Deep Learning Recommendation Models at Scale [8.731263641794897]
This paper explains the intricacies of using GPUs for training recommendation models, the factors affecting hardware efficiency at scale, and learnings from a new scale-up GPU server design, Zion.
arXiv Detail & Related papers (2020-11-11T01:21:43Z)
- A Learned Performance Model for Tensor Processing Units [5.733911161090224]
We demonstrate a method of learning performance models from a corpus of graph programs for Tensor Processing Unit (TPU) instances.
We show that our learned model outperforms a heavily optimized analytical performance model on two tasks.
It helps an autotuner discover faster programs in a setting where access to TPUs is limited or expensive; a toy version of such a learned cost model is sketched below.
arXiv Detail & Related papers (2020-08-03T17:24:52Z)
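A toy version of a learned cost model, assuming hand-made numeric features and synthetic data; the actual work learns from graph representations of whole tensor programs. Everything below is a placeholder to show the regress-then-rank loop an autotuner would use.

```python
import torch
import torch.nn as nn

# Synthetic corpus: features might stand for e.g. [log FLOPs, bytes moved,
# fusion depth, tile size]; runtimes follow an arbitrary hidden rule.
features = torch.rand(512, 4)
runtime = features @ torch.tensor([3.0, 1.5, 0.2, -0.4]) + 0.1 * torch.randn(512)

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(features).squeeze(-1), runtime)
    loss.backward()
    opt.step()

# An autotuner can now rank candidate programs by predicted runtime
# without measuring each one on scarce TPU hardware.
candidates = torch.rand(8, 4)
best = candidates[model(candidates).squeeze(-1).argmin()]
print(best)
```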
- Model Reuse with Reduced Kernel Mean Embedding Specification [70.044322798187]
We present a two-phase framework for finding helpful models for a current application.
In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as a specification for the model.
Then, in the deployment phase, the relatedness of the current task to the pre-trained models is measured based on the value of the RKME specification; the kernel-embedding comparison is sketched below.
arXiv Detail & Related papers (2020-01-20T15:15:07Z)
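A sketch of the kernel-embedding comparison, assuming a plain empirical kernel mean embedding with an RBF kernel; the "reduced" step of RKME (compressing the embedding to a small weighted set) is omitted. All data below is synthetic.

```python
import torch

def rbf(x, y, gamma: float = 0.5):
    """Pairwise RBF kernel matrix k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)."""
    return torch.exp(-gamma * torch.cdist(x, y).pow(2))

def mmd2(x, y, gamma: float = 0.5):
    """Squared MMD between the empirical kernel mean embeddings of x and y."""
    return rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean() - 2 * rbf(x, y, gamma).mean()

spec_data = torch.randn(200, 8)         # stands in for a model's uploaded spec
close_task = torch.randn(50, 8) + 0.1   # deployment-phase data, similar domain
far_task = torch.randn(50, 8) * 3.0     # a poorly matched task
# Lower MMD means more related; pick the pre-trained model scoring best.
print(mmd2(spec_data, close_task).item(), mmd2(spec_data, far_task).item())
```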