Learning Accurate Integer Transformer Machine-Translation Models
- URL: http://arxiv.org/abs/2001.00926v1
- Date: Fri, 3 Jan 2020 18:40:35 GMT
- Title: Learning Accurate Integer Transformer Machine-Translation Models
- Authors: Ephrem Wu
- Abstract summary: We describe a method for training accurate Transformer machine-translation models to run inference using 8-bit integer (INT8) hardware matrix multipliers.
Our approach converts all matrix-multiplication tensors from an existing FP32 model into INT8 tensors by automatically making range-precision trade-offs during training.
- Score: 0.05184427980355132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe a method for training accurate Transformer machine-translation
models to run inference using 8-bit integer (INT8) hardware matrix multipliers,
as opposed to the more costly single-precision floating-point (FP32) hardware.
Unlike previous work, which converted only 85 Transformer matrix
multiplications to INT8, leaving 48 out of 133 of them in FP32 because of
unacceptable accuracy loss, we convert them all to INT8 without compromising
accuracy. Tested on the newstest2014 English-to-German translation task, our
INT8 Transformer Base and Transformer Big models yield BLEU scores that are
99.3% to 100% relative to those of the corresponding FP32 models. Our approach
converts all matrix-multiplication tensors from an existing FP32 model into
INT8 tensors by automatically making range-precision trade-offs during
training. To demonstrate the robustness of this approach, we also include
results from INT6 Transformer models.
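
As background for the range-precision trade-off mentioned in the abstract, here is a minimal numpy sketch of symmetric INT8 quantization and an emulated INT8 matrix multiply with INT32 accumulation. It illustrates the general technique only: the clipping range below is simply the tensor's maximum absolute value, whereas the paper chooses these ranges automatically during training.

```python
# Minimal sketch of symmetric INT8 quantization and an emulated INT8 matmul.
# Not the paper's implementation: the clipping range is fixed to max(|x|) here,
# while the paper learns range-precision trade-offs during training.
import numpy as np

def quantize_int8(x: np.ndarray, clip: float):
    """Quantize to INT8 over the symmetric range [-clip, clip]."""
    scale = max(clip, 1e-8) / 127.0                    # wider range -> coarser step size
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a_q, a_scale, b_q, b_scale) -> np.ndarray:
    """Integer product accumulated in INT32, then rescaled back to FP32."""
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    return acc.astype(np.float32) * (a_scale * b_scale)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8)).astype(np.float32)
w = rng.normal(size=(8, 3)).astype(np.float32)
xq, xs = quantize_int8(x, clip=float(np.abs(x).max()))
wq, ws = quantize_int8(w, clip=float(np.abs(w).max()))
print(np.abs(int8_matmul(xq, xs, wq, ws) - x @ w).max())   # small quantization error
```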
Related papers
- Genetic Quantization-Aware Approximation for Non-Linear Operations in Transformers [26.62171477561166]
Non-linear functions are prevalent in Transformers and their lightweight variants, incurring substantial and frequently underestimated hardware costs.
Previous state-of-the-art works optimize these operations by piece-wise linear approximation and store the parameters in look-up tables (LUTs).
This paper proposes a genetic LUT-approximation algorithm, GQA-LUT, that automatically determines these parameters with quantization awareness.
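
As a toy illustration of the piece-wise linear LUT idea, the sketch below approximates GELU with a small table of slopes and intercepts. The breakpoints are uniform and hand-chosen here, not the genetically searched, quantization-aware parameters the paper describes.

```python
# Toy piece-wise linear LUT approximation of a non-linear op (GELU here).
# Breakpoints are uniform and hand-chosen; GQA-LUT instead searches them genetically
# with quantization awareness.
import numpy as np

def build_pwl_lut(fn, lo=-4.0, hi=4.0, segments=16):
    """Precompute per-segment slopes and intercepts for fn over [lo, hi]."""
    xs = np.linspace(lo, hi, segments + 1)
    slopes = (fn(xs[1:]) - fn(xs[:-1])) / (xs[1:] - xs[:-1])
    intercepts = fn(xs[:-1]) - slopes * xs[:-1]
    return xs, slopes, intercepts

def pwl_apply(x, xs, slopes, intercepts):
    """Evaluate the approximation: an index lookup, a multiply, and an add."""
    idx = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(slopes) - 1)
    return slopes[idx] * x + intercepts[idx]

gelu = lambda v: 0.5 * v * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (v + 0.044715 * v**3)))
lut = build_pwl_lut(gelu)
x = np.linspace(-4.0, 4.0, 101)
print(np.max(np.abs(pwl_apply(x, *lut) - gelu(x))))   # small approximation error
```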
arXiv Detail & Related papers (2024-03-28T17:13:47Z)
- Jetfire: Efficient and Accurate Transformer Pretraining with INT8 Data Flow and Per-Block Quantization [19.414153137453415]
We propose Jetfire, an efficient and accurate INT8 training method specific to transformers.
Our method features an INT8 data flow to optimize memory access and a per-block quantization method to maintain the accuracy of pretrained transformers.
For a standard transformer block, our method offers an end-to-end training speedup of 1.42x and a 1.49x memory reduction compared to the FP16 baseline.
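
A rough numpy sketch of per-block INT8 quantization in general follows: each tile carries its own scale, so a single outlier only degrades its own block. The 32x32 block size is an assumption for illustration; this is not Jetfire's INT8 data flow or its kernels.

```python
# Per-block INT8 quantization sketch: one scale per BxB tile.
# Block size and layout are illustrative assumptions, not Jetfire's kernels.
import numpy as np

def quantize_per_block(x: np.ndarray, block: int = 32):
    rows, cols = x.shape
    assert rows % block == 0 and cols % block == 0     # keep the sketch simple
    q = np.empty_like(x, dtype=np.int8)
    scales = np.empty((rows // block, cols // block), dtype=np.float32)
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            tile = x[i:i + block, j:j + block]
            s = max(float(np.abs(tile).max()), 1e-8) / 127.0
            q[i:i + block, j:j + block] = np.clip(np.round(tile / s), -127, 127).astype(np.int8)
            scales[i // block, j // block] = s
    return q, scales

x = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scales = quantize_per_block(x)
x_hat = q.astype(np.float32) * np.kron(scales, np.ones((32, 32), dtype=np.float32))
print(np.abs(x - x_hat).max())                         # small reconstruction error
```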
arXiv Detail & Related papers (2024-03-19T04:09:11Z)
- FP8-BERT: Post-Training Quantization for Transformer [20.51143486483669]
Transformer-based models, such as BERT, require massive memory storage and incur high inference costs when deployed in production.
The new FP8 numeric format has been proposed and is supported in commercial AI computing platforms such as the H100.
We empirically validate the effectiveness of FP8 as a way to do Post-Training Quantization without significant loss of accuracy.
arXiv Detail & Related papers (2023-12-10T02:14:34Z)
- FP8-LM: Training FP8 Large Language Models [47.17804713425323]
In this paper, we propose a new FP8 automatic mixed-precision framework for training large language models.
Experimental results show that, during training of the GPT-175B model on the H100 GPU platform, our FP8 mixed-precision training framework not only achieved a remarkable 39% reduction in real memory usage but also ran 75% faster than the widely adopted BF16 framework.
arXiv Detail & Related papers (2023-10-27T17:59:51Z)
- FP8 Formats for Deep Learning [49.54015320992368]
We propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings: E4M3 (4-bit exponent, 3-bit mantissa) and E5M2 (5-bit exponent, 2-bit mantissa).
E4M3's dynamic range is extended by not representing infinities and having only one mantissa bit-pattern for NaNs.
We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions.
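
The E4M3 convention above (no infinities, a single NaN mantissa pattern, exponent bias 7) can be checked with a tiny reference decoder. This is a sketch written from the format description, not code from the paper.

```python
# Reference decoder sketch for E4M3: 1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7.
# Exponent 0b1111 with mantissa 0b111 is the only NaN pattern and there are no infinities,
# which extends the largest finite magnitude to 448.
def decode_e4m3(byte: int) -> float:
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0xF and man == 0x7:
        return float("nan")
    if exp == 0:                                   # subnormal: no implicit leading 1
        return sign * (man / 8.0) * 2.0 ** (-6)
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)

assert decode_e4m3(0b0_1111_110) == 448.0          # largest finite E4M3 value
assert decode_e4m3(0b0_0000_001) == 2.0 ** (-9)    # smallest positive subnormal
```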
arXiv Detail & Related papers (2022-09-12T17:39:55Z)
- LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale [80.86029795281922]
We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers.
A 175B parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation.
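
In rough terms, the idea can be sketched as vector-wise Int8 quantization for most feature dimensions plus a small higher-precision matmul for outlier dimensions. The 6.0 threshold and the layout below are illustrative assumptions; this is a conceptual sketch, not the bitsandbytes implementation.

```python
# Conceptual sketch of mixed-precision Int8 matmul with outlier decomposition.
# Threshold and layout are illustrative; not the bitsandbytes implementation.
import numpy as np

def mixed_int8_matmul(x: np.ndarray, w: np.ndarray, threshold: float = 6.0) -> np.ndarray:
    outlier = np.abs(x).max(axis=0) > threshold        # feature dims with large activations
    regular = ~outlier
    # Int8 path: per-row scales for activations, per-column scales for weights.
    xs = np.abs(x[:, regular]).max(axis=1, keepdims=True) / 127.0 + 1e-12
    ws = np.abs(w[regular]).max(axis=0, keepdims=True) / 127.0 + 1e-12
    xq = np.clip(np.round(x[:, regular] / xs), -127, 127).astype(np.int8)
    wq = np.clip(np.round(w[regular] / ws), -127, 127).astype(np.int8)
    y_int8 = (xq.astype(np.int32) @ wq.astype(np.int32)).astype(np.float32) * (xs * ws)
    # Higher-precision path for the few outlier feature dimensions.
    y_fp = x[:, outlier] @ w[outlier]
    return y_int8 + y_fp
```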
arXiv Detail & Related papers (2022-08-15T17:08:50Z)
- Bayesian Transformer Language Models for Speech Recognition [59.235405107295655]
State-of-the-art neural language models (LMs) represented by Transformers are highly complex.
This paper proposes a full Bayesian learning framework for Transformer LM estimation.
arXiv Detail & Related papers (2021-02-09T10:55:27Z)
- I-BERT: Integer-only BERT Quantization [78.43819756382103]
We propose I-BERT, a novel quantization scheme for Transformer based models.
I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation.
We show that, in both evaluated settings, I-BERT achieves accuracy comparable to (and slightly higher than) the full-precision baseline.
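
Integer-only inference also has to rescale between layers without touching floating point. The sketch below shows a generic requantization step that approximates the rescale factor with a dyadic number (an integer multiplier and a power-of-two divisor); it illustrates the general idea, not I-BERT's integer kernels for GELU, Softmax, and LayerNorm.

```python
# Generic requantization sketch for integer-only inference: approximate the FP32
# rescale factor as m / 2**shift so INT32 accumulators map to INT8 with integer ops only.
# Not I-BERT's code; its integer GELU/Softmax/LayerNorm kernels are separate pieces.
import numpy as np

def dyadic_approx(scale: float, shift: int = 15):
    """Approximate a positive rescale factor as m / 2**shift with integer m."""
    return int(round(scale * (1 << shift))), shift

def requantize_int32_to_int8(acc: np.ndarray, scale: float) -> np.ndarray:
    m, shift = dyadic_approx(scale)
    # Integer multiply, rounding add, then division by 2**shift (an arithmetic right shift).
    out = (acc.astype(np.int64) * m + (1 << (shift - 1))) // (1 << shift)
    return np.clip(out, -127, 127).astype(np.int8)

acc = np.array([12345, -6789, 42], dtype=np.int32)
print(requantize_int32_to_int8(acc, scale=0.01))   # close to acc * scale = [123.45, -67.89, 0.42]
```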
arXiv Detail & Related papers (2021-01-05T02:42:58Z)
- Towards Fully 8-bit Integer Inference for the Transformer Model [39.22272841663168]
We show that after a principled modification of the Transformer architecture, dubbed the Integer Transformer, an (almost) fully 8-bit integer inference algorithm could be derived.
Our experiments on the WMT16 En->Ro, WMT14 En->De and En->Fr translation tasks, as well as the WikiText-103 language modelling task, show that the fully 8-bit Transformer system achieves performance comparable to the floating-point baseline while requiring a nearly 4x smaller memory footprint.
arXiv Detail & Related papers (2020-09-17T03:09:10Z)
- Segatron: Segment-Aware Transformer for Language Modeling and Understanding [79.84562707201323]
We propose a segment-aware Transformer (Segatron) to generate better contextual representations from sequential tokens.
We first introduce the segment-aware mechanism to Transformer-XL, which is a popular Transformer-based language model.
We find that our method can further improve the Transformer-XL base model and large model, achieving 17.1 perplexity on the WikiText-103 dataset.
arXiv Detail & Related papers (2020-04-30T17:38:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.