Dynamic Transformer for Efficient Machine Translation on Embedded
Devices
- URL: http://arxiv.org/abs/2107.08199v1
- Date: Sat, 17 Jul 2021 07:36:29 GMT
- Title: Dynamic Transformer for Efficient Machine Translation on Embedded
Devices
- Authors: Hishan Parry, Lei Xun, Amin Sabet, Jia Bi, Jonathon Hare, Geoff V.
Merrett
- Abstract summary: We propose a machine translation model that scales the Transformer architecture based on the available resources at any particular time.
The proposed approach, 'Dynamic-HAT', uses a HAT SuperTransformer as the backbone to search for SubTransformers with different accuracy-latency trade-offs.
The Dynamic-HAT is tested on the Jetson Nano, using inherited SubTransformers sampled directly from the SuperTransformer with a switching time of under 1s.
- Score: 0.9786690381850356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Transformer architecture is widely used for machine translation tasks.
However, its resource-intensive nature makes it challenging to implement on
constrained embedded devices, particularly where available hardware resources
can vary at run-time. We propose a dynamic machine translation model that
scales the Transformer architecture based on the available resources at any
particular time. The proposed approach, 'Dynamic-HAT', uses a HAT
SuperTransformer as the backbone to search for SubTransformers with different
accuracy-latency trade-offs at design time. The optimal SubTransformers are
sampled from the SuperTransformer at run-time, depending on latency
constraints. The Dynamic-HAT is tested on the Jetson Nano and the approach uses
inherited SubTransformers sampled directly from the SuperTransformer with a
switching time of <1s. Using inherited SubTransformers results in a BLEU score
loss of <1.5% because the SubTransformer configuration is not retrained from
scratch after sampling. However, to recover this loss in performance, the
dimensions of the design space can be reduced to tailor it to a family of
target hardware. The new reduced design space results in a BLEU score increase
of approximately 1% for sub-optimal models from the original design space, with
a wide range for performance scaling between 0.356s - 1.526s for the GPU and
2.9s - 7.31s for the CPU.
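To make the run-time selection step concrete, here is a minimal Python sketch of the idea described in the abstract: a design-time Pareto set of SubTransformer configurations, each profiled for latency and BLEU on the target device, and a selector that returns the most accurate configuration fitting the current latency budget. All field names and numbers below are illustrative placeholders, not values from the paper.

```python
# Sketch of latency-constrained SubTransformer selection (illustrative only).
from dataclasses import dataclass

@dataclass
class SubTransformerConfig:
    encoder_layers: int
    decoder_layers: int
    embed_dim: int
    ffn_dim: int
    latency_s: float   # profiled on the target device at design time
    bleu: float        # validation BLEU of the inherited SubTransformer

# Design-time Pareto front found by the HAT search (placeholder entries).
PARETO_FRONT = [
    SubTransformerConfig(6, 6, 640, 2048, 1.50, 27.9),
    SubTransformerConfig(6, 4, 512, 2048, 1.00, 27.1),
    SubTransformerConfig(6, 3, 512, 1024, 0.60, 26.2),
    SubTransformerConfig(6, 2, 384, 1024, 0.36, 25.0),
]

def select_subtransformer(latency_budget_s: float) -> SubTransformerConfig:
    """Return the most accurate SubTransformer that meets the current budget."""
    feasible = [c for c in PARETO_FRONT if c.latency_s <= latency_budget_s]
    if not feasible:
        # No configuration fits: fall back to the fastest one.
        return min(PARETO_FRONT, key=lambda c: c.latency_s)
    return max(feasible, key=lambda c: c.bleu)

# The budget can change at run time with device load; switching only re-slices
# the SuperTransformer weights, which is why it takes well under a second.
print(select_subtransformer(latency_budget_s=0.8))
```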
Related papers
- Dynamic Diffusion Transformer [67.13876021157887]
Diffusion Transformer (DiT) has demonstrated superior performance but suffers from substantial computational costs.
We propose Dynamic Diffusion Transformer (DyDiT), an architecture that dynamically adjusts its computation along both timestep and spatial dimensions during generation.
With 3% additional fine-tuning, our method reduces the FLOPs of DiT-XL by 51%, accelerates generation by 1.73×, and achieves a competitive FID score of 2.07 on ImageNet.
arXiv Detail & Related papers (2024-10-04T14:14:28Z)
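To give a feel for computation that varies with the diffusion timestep and across spatial tokens, below is a toy sketch only loosely inspired by the DyDiT summary above; it is not DyDiT's actual mechanism. The block processes only a timestep-dependent fraction of the spatial tokens, and the scorer, keep-ratio schedule, and layer sizes are assumptions made purely for illustration.

```python
# Toy sketch of timestep- and token-dependent computation (not DyDiT itself).
import torch
import torch.nn as nn

class DynamicBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.token_scorer = nn.Linear(dim, 1)  # decides which spatial tokens to process

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); t: (batch,) normalised diffusion timesteps in [0, 1]
        keep_ratio = 0.5 + 0.5 * float(t.mean())          # illustrative schedule
        n_keep = max(1, int(keep_ratio * x.size(1)))
        idx = self.token_scorer(x).squeeze(-1).topk(n_keep, dim=1).indices
        gather_idx = idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        sub = torch.gather(x, 1, gather_idx)              # only the selected tokens
        sub = sub + self.attn(sub, sub, sub, need_weights=False)[0]
        sub = sub + self.mlp(sub)
        return x.clone().scatter_(1, gather_idx, sub)     # untouched tokens pass through

x, t = torch.randn(2, 64, 128), torch.rand(2)
print(DynamicBlock(128)(x, t).shape)                      # torch.Size([2, 64, 128])
```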
- iTransformer: Inverted Transformers Are Effective for Time Series Forecasting [62.40166958002558]
We propose iTransformer, which simply applies the attention and feed-forward network on the inverted dimensions.
The iTransformer model achieves state-of-the-art on challenging real-world datasets.
arXiv Detail & Related papers (2023-10-10T13:44:09Z)
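The inverted layout mentioned in the iTransformer summary above can be sketched in a few lines: the (batch, time, variates) input is transposed so that each variate's full series becomes a single token, attention then mixes information across variates, and the feed-forward network acts on each variate token. The layer sizes below are illustrative, not the paper's configuration.

```python
# Minimal sketch of attention/FFN applied on the inverted (variate) dimension.
import torch
import torch.nn as nn

class InvertedBlock(nn.Module):
    def __init__(self, seq_len: int, d_model: int, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(seq_len, d_model)   # whole series -> one variate token
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 2 * d_model), nn.GELU(),
                                 nn.Linear(2 * d_model, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, variates) -> invert to (batch, variates, time)
        tokens = self.embed(x.transpose(1, 2))     # (batch, variates, d_model)
        tokens = tokens + self.attn(tokens, tokens, tokens, need_weights=False)[0]
        tokens = tokens + self.ffn(tokens)         # FFN applied per variate token
        return tokens

x = torch.randn(8, 96, 7)                               # 8 series, 96 steps, 7 variates
print(InvertedBlock(seq_len=96, d_model=64)(x).shape)   # torch.Size([8, 7, 64])
```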
- Isomer: Isomerous Transformer for Zero-shot Video Object Segmentation [59.91357714415056]
We propose two Transformer variants: Context-Sharing Transformer (CST) and Semantic Gathering-Scattering Transformer (SGST).
CST learns the global-shared contextual information within image frames with a lightweight computation; SGST models the semantic correlation separately for the foreground and background.
Compared with the baseline that uses vanilla Transformers for multi-stage fusion, ours increases the speed by 13 times and achieves new state-of-the-art ZVOS performance.
arXiv Detail & Related papers (2023-08-13T06:12:00Z)
- ViTA: A Vision Transformer Inference Accelerator for Edge Applications [4.3469216446051995]
Vision Transformer models, such as ViT, Swin Transformer, and Transformer-in-Transformer, have recently gained significant traction in computer vision tasks.
They are compute-heavy and difficult to deploy in resource-constrained edge devices.
We propose ViTA - a hardware accelerator for inference of vision transformer models, targeting resource-constrained edge computing devices.
arXiv Detail & Related papers (2023-02-17T19:35:36Z)
- ByteTransformer: A High-Performance Transformer Boosted for Variable-Length Inputs [6.9136984255301]
We present ByteTransformer, a high-performance transformer boosted for variable-length inputs.
ByteTransformer surpasses the state-of-the-art Transformer frameworks, such as PyTorch JIT, XLA, Tencent TurboTransformer and NVIDIA FasterTransformer.
arXiv Detail & Related papers (2022-10-06T16:57:23Z)
- HiViT: Hierarchical Vision Transformer Meets Masked Image Modeling [126.89573619301953]
We propose a new design of hierarchical vision transformers named HiViT (short for Hierarchical ViT)
HiViT enjoys both high efficiency and good performance in MIM.
In running MAE on ImageNet-1K, HiViT-B reports a +0.6% accuracy gain over ViT-B and a 1.9× speed-up over Swin-B.
arXiv Detail & Related papers (2022-05-30T09:34:44Z)
- Towards Lightweight Transformer via Group-wise Transformation for Vision-and-Language Tasks [126.33843752332139]
We introduce Group-wise Transformation towards a universal yet lightweight Transformer for vision-and-language tasks, termed as LW-Transformer.
We apply LW-Transformer to a set of Transformer-based networks, and quantitatively measure them on three vision-and-language tasks and six benchmark datasets.
Experimental results show that while saving a large number of parameters and computations, LW-Transformer achieves very competitive performance against the original Transformer networks for vision-and-language tasks.
arXiv Detail & Related papers (2022-04-16T11:30:26Z)
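The group-wise transformation named in the LW-Transformer title above can be illustrated with a simple grouped projection: the feature dimension is split into G groups, each transformed by its own smaller matrix, cutting parameters and compute roughly by a factor of G. This is a generic sketch of the idea, not the exact LW-Transformer design.

```python
# Generic group-wise projection: G small matrices instead of one large one.
import torch
import torch.nn as nn

class GroupWiseLinear(nn.Module):
    def __init__(self, dim: int, groups: int = 4):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        # G projections of size (D/G x D/G): G*(D/G)^2 parameters vs. D^2 for a full linear.
        self.proj = nn.ModuleList([nn.Linear(dim // groups, dim // groups) for _ in range(groups)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = x.chunk(self.groups, dim=-1)   # split features into groups
        return torch.cat([p(c) for p, c in zip(self.proj, chunks)], dim=-1)

x = torch.randn(2, 10, 512)
print(GroupWiseLinear(512, groups=4)(x).shape)   # torch.Size([2, 10, 512])
```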
- Sparse is Enough in Scaling Transformers [12.561317511514469]
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study becomes out of reach.
We propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer.
arXiv Detail & Related papers (2021-11-24T19:53:46Z)
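A minimal sketch of the sparse-layer idea from the Scaling Transformers summary above: a small controller routes each token to one block of the feed-forward layer, so only a fraction of the weights is used per token at decoding time. This illustrates block-sparse routing in general rather than the paper's exact layers; the routing rule and sizes are assumptions.

```python
# Generic block-sparse feed-forward layer with per-token top-1 routing.
import torch
import torch.nn as nn

class BlockSparseFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int, blocks: int = 4):
        super().__init__()
        self.controller = nn.Linear(d_model, blocks)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff // blocks), nn.ReLU(),
                          nn.Linear(d_ff // blocks, d_model))
            for _ in range(blocks)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); each token activates a single block.
        choice = self.controller(x).argmax(dim=-1)          # (tokens,)
        out = torch.zeros_like(x)
        for b, expert in enumerate(self.experts):
            mask = choice == b
            if mask.any():
                out[mask] = expert(x[mask])   # inactive blocks are skipped entirely
        return out

x = torch.randn(16, 256)
print(BlockSparseFFN(256, 1024)(x).shape)    # torch.Size([16, 256])
```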
- Lite Transformer with Long-Short Range Attention [31.946796118788285]
We present an efficient mobile NLP architecture, Lite Transformer, to facilitate deploying mobile NLP applications on edge devices.
Lite Transformer outperforms the standard Transformer on WMT'14 English-French by 1.2/1.7 BLEU under constrained resources.
Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly architecture search that requires more than 250 GPU years.
arXiv Detail & Related papers (2020-04-24T17:52:25Z)
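The Long-Short Range Attention named in the Lite Transformer title above can be sketched as follows, assuming it splits the channels between a global attention branch (long range) and a local depthwise-convolution branch (short range) and concatenates the results; head count, kernel size, and dimensions are illustrative only.

```python
# Hedged sketch of a long-short range block: attention on one half of the
# channels, depthwise convolution on the other half.
import torch
import torch.nn as nn

class LongShortRangeBlock(nn.Module):
    def __init__(self, dim: int, n_heads: int = 4, kernel_size: int = 3):
        super().__init__()
        assert dim % 2 == 0
        half = dim // 2
        self.attn = nn.MultiheadAttention(half, n_heads, batch_first=True)
        self.conv = nn.Conv1d(half, half, kernel_size, padding=kernel_size // 2, groups=half)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        long_x, short_x = x.chunk(2, dim=-1)
        long_out = self.attn(long_x, long_x, long_x, need_weights=False)[0]   # global context
        short_out = self.conv(short_x.transpose(1, 2)).transpose(1, 2)        # local context
        return torch.cat([long_out, short_out], dim=-1)

x = torch.randn(2, 20, 128)
print(LongShortRangeBlock(128)(x).shape)   # torch.Size([2, 20, 128])
```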
- Transformer on a Diet [81.09119185568296]
Transformer has been widely used thanks to its ability to capture sequence information in an efficient way.
Recent developments, such as BERT and GPT-2, deliver only heavy architectures with a focus on effectiveness.
We explore three carefully designed light Transformer architectures to figure out whether a Transformer with less computation could produce competitive results.
arXiv Detail & Related papers (2020-02-14T18:41:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.