FP8 versus INT8 for efficient deep learning inference
- URL: http://arxiv.org/abs/2303.17951v2
- Date: Thu, 15 Jun 2023 08:14:02 GMT
- Title: FP8 versus INT8 for efficient deep learning inference
- Authors: Mart van Baalen, Andrey Kuzmin, Suparna S Nair, Yuwei Ren, Eric
Mahurin, Chirag Patel, Sundar Subramanian, Sanghyuk Lee, Markus Nagel, Joseph
Soriaga, Tijmen Blankevoort
- Abstract summary: We compare the performance of the FP8 and INT formats for efficient on-device inference.
We show that the FP formats are between 50% and 180% less efficient, in terms of compute in dedicated hardware, than the INT format.
We conclude that although the proposed FP8 format could be good for training, the results for inference do not warrant a dedicated implementation of FP8.
- Score: 14.98281493168929
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the idea of using FP8 as a number format for neural network
training has been floating around the deep learning world. Given that most
training is currently conducted with entire networks in FP32, or sometimes FP16
with mixed-precision, the step to having some parts of a network run in FP8
with 8-bit weights is an appealing potential speed-up for the generally costly
and time-intensive training procedures in deep learning. A natural question
arises regarding what this development means for efficient inference on edge
devices. In the efficient inference device world, workloads are frequently
executed in INT8, sometimes going even as low as INT4 when efficiency calls
for it. In this whitepaper, we compare the performance of the FP8 and INT
formats for efficient on-device inference. We theoretically show the
difference between the INT and FP formats for neural networks and present a
plethora of post-training quantization and quantization-aware training
results to show how
this theory translates to practice. We also provide a hardware analysis showing
that the FP formats are somewhere between 50% and 180% less efficient in terms of
compute in dedicated hardware than the INT format. Based on our research and a
read of the research field, we conclude that although the proposed FP8 format
could be good for training, the results for inference do not warrant a
dedicated implementation of FP8 in favor of INT8 for efficient inference. We
show that our results are mostly consistent with previous findings but that
important comparisons between the formats have thus far been lacking. Finally,
we discuss what happens when FP8-trained networks are converted to INT8 and
conclude with a brief discussion on the most efficient way for on-device
deployment and an extensive suite of INT8 results for many models.
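To make the INT-versus-FP comparison above concrete, the sketch below fake-quantizes a synthetic weight tensor to symmetric per-tensor INT8 and to a scaled FP8 E4M3 grid and reports the mean-squared error of each. This is a minimal illustration, not the authors' code: the Gaussian tensor, the round-to-nearest policy, and the per-tensor scaling for both formats are assumptions made only for the example.

```python
# Minimal sketch: compare quantization error of symmetric INT8 vs. FP8 E4M3
# on a synthetic weight tensor. Not the paper's implementation.
import numpy as np

def e4m3_grid():
    """Enumerate every finite value representable in FP8 E4M3 (exponent bias 7)."""
    vals = []
    for e in range(16):              # 4 exponent bits
        for m in range(8):           # 3 mantissa bits
            if e == 15 and m == 7:   # the single NaN mantissa pattern; no infinities
                continue
            if e == 0:               # subnormals
                v = (m / 8.0) * 2.0 ** (1 - 7)
            else:                    # normals
                v = (1.0 + m / 8.0) * 2.0 ** (e - 7)
            vals.extend([v, -v])
    return np.unique(np.array(vals))

def quantize_to_grid(x, grid):
    """Round every element of x to its nearest grid point."""
    idx = np.abs(x[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]

def quantize_int8(x):
    """Symmetric per-tensor INT8 fake quantization."""
    scale = np.abs(x).max() / 127.0
    return np.clip(np.round(x / scale), -128, 127) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=4096).astype(np.float32)  # stand-in weight tensor

grid = e4m3_grid()
fp8_scale = np.abs(w).max() / grid.max()  # FP8 also gets a per-tensor scale here
w_fp8 = quantize_to_grid(w / fp8_scale, grid) * fp8_scale
w_int8 = quantize_int8(w)

print("INT8 MSE:", float(np.mean((w - w_int8) ** 2)))
print("E4M3 MSE:", float(np.mean((w - w_fp8) ** 2)))
```

On a well-behaved Gaussian tensor like this one, the uniform INT8 grid typically yields the lower error; distributions with heavy outliers are where extra exponent bits start to pay off, which mirrors the argument of the whitepaper.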
Related papers
- "Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization [67.3213104337679]
We evaluate popular quantization formats across academic benchmarks and real-world tasks.
We find that W4A16 offers the best cost-efficiency for synchronous deployments, as well as for asynchronous deployment on mid-tier architectures.
arXiv Detail & Related papers (2024-11-04T18:21:59Z)
- Towards Federated Learning with On-device Training and Communication in 8-bit Floating Point [13.693064349530795]
Recent work has shown that 8-bit floating point (FP8) can be used for efficiently training neural networks.
We present a novel method that combines FP8 client training with a global FP32 server model.
arXiv Detail & Related papers (2024-07-02T18:55:58Z)
- FP8-BERT: Post-Training Quantization for Transformer [20.51143486483669]
Transformer-based models, such as BERT, require massive memory storage and inference cost when deployed in production.
The new FP8 numeric format has been proposed and is supported on commercial AI computing platforms such as the H100.
We empirically validate the effectiveness of FP8 as a way to do Post-Training Quantization without significant loss of accuracy.
arXiv Detail & Related papers (2023-12-10T02:14:34Z)
- FP8-LM: Training FP8 Large Language Models [47.17804713425323]
In this paper, we propose a new FP8 automatic mixed-precision framework for training large language models.
Experiment results show that, during the training of GPT-175B model on H100 GPU platform, our FP8 mixed-precision training framework not only achieved a remarkable 39% reduction in real memory usage but also ran 75% faster than the widely adopted BF16 framework.
arXiv Detail & Related papers (2023-10-27T17:59:51Z)
- FP8 Formats for Deep Learning [49.54015320992368]
We propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings, E4M3 and E5M2.
E4M3's dynamic range is extended by not representing infinities and having only one mantissa bit-pattern for NaNs (a bit-level decoding sketch of both encodings appears after this list).
We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions.
arXiv Detail & Related papers (2022-09-12T17:39:55Z)
- FP8 Quantization: The Power of the Exponent [19.179749424362686]
This paper investigates the benefit of the floating point format for neural network inference.
We detail the choices that can be made for the FP8 format, including the important choice of the number of bits for the mantissa and exponent.
We show how these findings translate to real networks, provide an efficient implementation for FP8 simulation, and introduce a new algorithm.
arXiv Detail & Related papers (2022-08-19T09:03:00Z)
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing [93.67044879636093]
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing approach that uses fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace conventional ReLU with Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding floating-point (FPN) networks, but have only 1/4 the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
- Towards Unified INT8 Training for Convolutional Neural Network [83.15673050981624]
We build a unified 8-bit (INT8) training framework for common convolutional neural networks.
First, we empirically identify four distinctive characteristics of gradients, which provide insightful clues for gradient quantization.
We then propose two universal techniques, including Direction Sensitive Gradient Clipping, which reduces the direction deviation of gradients.
arXiv Detail & Related papers (2019-12-29T08:37:53Z)
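The "FP8 Formats for Deep Learning" entry above contrasts the special-value handling of the two proposed encodings. The sketch below is an illustration written for this summary, not code from that paper: it decodes raw 8-bit patterns under the commonly cited definitions, E4M3 with exponent bias 7, no infinities, and a single NaN mantissa pattern, and E5M2 with IEEE-754-style infinities and NaNs.

```python
# Bit-level decoding sketch for the two FP8 encodings (illustrative, assumed
# to follow the E4M3/E5M2 definitions referenced in the entry above).
import math

def decode_fp8(byte, exp_bits, man_bits, ieee_specials):
    """Decode an 8-bit pattern with 1 sign bit, `exp_bits` exponent bits and
    `man_bits` mantissa bits into a Python float."""
    bias = 2 ** (exp_bits - 1) - 1
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if ieee_specials and exp == (1 << exp_bits) - 1:
        return sign * math.inf if man == 0 else math.nan   # E5M2: IEEE-style specials
    if not ieee_specials and exp == (1 << exp_bits) - 1 and man == (1 << man_bits) - 1:
        return math.nan                                    # E4M3: the only NaN pattern
    if exp == 0:                                           # subnormals
        return sign * (man / 2 ** man_bits) * 2.0 ** (1 - bias)
    return sign * (1 + man / 2 ** man_bits) * 2.0 ** (exp - bias)

def decode_e4m3(b):
    return decode_fp8(b, exp_bits=4, man_bits=3, ieee_specials=False)

def decode_e5m2(b):
    return decode_fp8(b, exp_bits=5, man_bits=2, ieee_specials=True)

print(decode_e4m3(0b0_1111_110))   # 448.0   -> largest finite E4M3 value
print(decode_e4m3(0b0_1111_111))   # nan     -> the single NaN mantissa pattern
print(decode_e5m2(0b0_11111_00))   # inf     -> E5M2 keeps infinities
print(decode_e5m2(0b0_11110_11))   # 57344.0 -> largest finite E5M2 value
```

Reclaiming the all-ones exponent codes for ordinary values (except the one NaN pattern) is what pushes E4M3's largest finite magnitude from 240, which an IEEE-style reservation would allow, up to 448.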
This list is automatically generated from the titles and abstracts of the papers on this site.