FP8 Quantization: The Power of the Exponent
- URL: http://arxiv.org/abs/2208.09225v2
- Date: Fri, 23 Feb 2024 13:49:45 GMT
- Title: FP8 Quantization: The Power of the Exponent
- Authors: Andrey Kuzmin, Mart Van Baalen, Yuwei Ren, Markus Nagel, Jorn Peters,
Tijmen Blankevoort
- Abstract summary: This paper investigates the benefit of the floating point format for neural network inference.
We detail the choices that can be made for the FP8 format, including the important choice of the number of bits for the mantissa and exponent.
We show how these findings translate to real networks, provide an efficient implementation for FP8 simulation, and introduce a new algorithm that learns both the scale parameters and the number of exponent bits.
- Score: 19.179749424362686
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When quantizing neural networks for efficient inference, low-bit integers are
the go-to format for efficiency. However, low-bit floating point numbers have
an extra degree of freedom, assigning some bits to work on an exponential scale
instead. This paper investigates this benefit of the floating-point format for
neural network inference in depth. We detail the choices that can be made for
the FP8 format, including the important choice of the number of bits for the
mantissa and exponent, and show analytically in which settings these choices
give better performance. Then we show how these findings translate to real
networks, provide an efficient implementation for FP8 simulation, and a new
algorithm that enables the learning of both the scale parameters and the number
of exponent bits in the FP8 format. Our chief conclusion is that when doing
post-training quantization for a wide range of networks, the FP8 format is
better than INT8 in terms of accuracy, and the choice of the number of exponent
bits is driven by the severity of outliers in the network. We also conduct
experiments with quantization-aware training where the difference in formats
disappears as the network is trained to reduce the effect of outliers.
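To make the mantissa/exponent trade-off concrete, the sketch below simulates ("fake-quantizes") a tensor onto an FP8 grid with a configurable bit split. It is a minimal illustration, not the authors' implementation: the function name fp8_fake_quant is hypothetical, the code assumes an IEEE-style exponent bias with the top exponent code reserved for Inf/NaN, and it omits the paper's learned scale parameters and learned exponent-bit count.

```python
import numpy as np

def fp8_fake_quant(x, n_exp=4, n_man=3):
    """Round float32 values onto a simulated FP8 grid
    (1 sign bit, n_exp exponent bits, n_man mantissa bits).

    Assumptions (not from the paper): IEEE-style bias, the top exponent
    code reserved for Inf/NaN, no learned scale, round-to-nearest.
    """
    bias = 2 ** (n_exp - 1) - 1
    e_min = 1 - bias                                 # smallest normal exponent
    e_max = 2 ** n_exp - 2 - bias                    # largest normal exponent
    max_val = (2 - 2.0 ** (-n_man)) * 2.0 ** e_max   # largest finite value

    x = np.asarray(x, dtype=np.float32)
    mag, sign = np.abs(x), np.sign(x)

    # Exponent of each input, clamped so that small values land on the
    # subnormal grid whose spacing is 2**(e_min - n_man).
    e = np.floor(np.log2(np.maximum(mag, np.finfo(np.float32).tiny)))
    e = np.clip(e, e_min, e_max)

    step = 2.0 ** (e - n_man)            # grid spacing within each binade
    q = np.round(mag / step) * step      # round to the nearest grid point
    return sign * np.minimum(q, max_val) # saturate instead of overflowing

w = np.array([0.0421, 1.7, 300.0], dtype=np.float32)
print(fp8_fake_quant(w, n_exp=4, n_man=3))  # E4M3-style grid: finer steps, 300.0 saturates at 240
print(fp8_fake_quant(w, n_exp=5, n_man=2))  # E5M2-style grid: coarser steps, 300.0 survives as 320
```

With more exponent bits the grid reaches further, so the 300.0 outlier survives on the 5-exponent-bit grid but saturates on the 4-exponent-bit one, while more mantissa bits give finer steps around typical values; this is the trade-off the abstract ties to the severity of outliers in the network.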
Related papers
- FP8 versus INT8 for efficient deep learning inference [14.98281493168929]
We compare the performance for both the FP8 and INT formats for efficient on-device inference.
We show that, in dedicated hardware, the FP formats are somewhere between 50% and 180% less efficient in terms of compute than the INT format.
We conclude that although the proposed FP8 format could be good for training, the results for inference do not warrant a dedicated implementation of FP8.
arXiv Detail & Related papers (2023-03-31T10:29:17Z) - FP8 Formats for Deep Learning [49.54015320992368]
We propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings.
E4M3's dynamic range is extended by not representing infinities and by keeping only one mantissa bit-pattern for NaNs; a worked calculation of the resulting maximum values appears in the sketch after this list.
We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions.
arXiv Detail & Related papers (2022-09-12T17:39:55Z) - 8-bit Numerical Formats for Deep Neural Networks [1.304892050913381]
We present an in-depth study on the use of 8-bit floating-point number formats for activations, weights, and gradients for both training and inference.
Experiments demonstrate that a suitable choice of these low-precision formats enables faster training and reduced power consumption without any degradation in accuracy for a range of deep learning models for image classification and language processing.
arXiv Detail & Related papers (2022-06-06T21:31:32Z) - F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization [47.403304754934155]
We present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication.
Our approach achieves comparable or better performance compared with existing quantization techniques.
arXiv Detail & Related papers (2022-02-10T18:48:56Z) - PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit [5.534626267734822]
The presented research aims to evaluate the feasibility of training deep convolutional neural networks using posits.
A software framework was developed to use simulated posits and quires in end-to-end training and inference.
Results suggest that 8-bit posits can substitute 32-bit floats during training with no negative impact on the resulting loss and accuracy.
arXiv Detail & Related papers (2021-04-30T19:30:37Z) - All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and
Memory-Efficient Inference of Deep Neural Networks [2.294014185517203]
This paper introduces an extremely flexible 8-bit floating-point (FFP8) format.
It achieves an extremely low accuracy loss of 0.1% to 0.3% for several representative image classification models.
It is easy to turn a classical floating-point processing unit into an FFP8-compliant one, and the extra hardware cost is minor.
arXiv Detail & Related papers (2021-04-15T09:37:23Z) - Searching for Low-Bit Weights in Quantized Neural Networks [129.8319019563356]
Quantized neural networks with low-bit weights and activations are attractive for developing AI accelerators.
We propose to regard the discrete weights in an arbitrary quantized neural network as searchable variables, and utilize a differentiable method to search them accurately.
arXiv Detail & Related papers (2020-09-18T09:13:26Z) - Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to that of the corresponding floating-point networks (FPNs), but have only 1/4 of the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z) - Shifted and Squeezed 8-bit Floating Point format for Low-Precision
Training of Deep Neural Networks [13.929168096016957]
We introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers.
Reduced bit precision allows for a larger effective memory and increased computational speed.
We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models.
arXiv Detail & Related papers (2020-01-16T06:38:27Z) - Towards Unified INT8 Training for Convolutional Neural Network [83.15673050981624]
We build a unified 8-bit (INT8) training framework for common convolutional neural networks.
First, we empirically find the four distinctive characteristics of gradients, which provide us insightful clues for gradient quantization.
We propose two universal techniques, including Direction Sensitive Gradient Clipping, which reduces the direction deviation of gradients.
arXiv Detail & Related papers (2019-12-29T08:37:53Z)
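The "FP8 Formats for Deep Learning" entry above notes that E4M3 extends its dynamic range by dropping infinities and keeping a single NaN mantissa pattern. The short helper below (max_finite is a hypothetical name, not from any of the papers) works out the largest finite value under both conventions; the extended-range figure of 448 and the E5M2 figure of 57344 match that paper, while 240 is what E4M3 would offer under fully IEEE-style rules.

```python
def max_finite(n_exp, n_man, extended=False):
    """Largest finite magnitude of a 1-sign/n_exp/n_man format with
    IEEE-style bias 2**(n_exp - 1) - 1.

    extended=False: IEEE convention; the top exponent code is reserved
                    for Inf/NaN.
    extended=True : E4M3-style convention (no infinities, one NaN
                    mantissa pattern), so the top exponent code keeps
                    every mantissa value except the all-ones pattern.
    """
    bias = 2 ** (n_exp - 1) - 1
    top_exp = (2 ** n_exp - 1) - bias
    if extended:
        return (2 - 2.0 ** (1 - n_man)) * 2.0 ** top_exp
    return (2 - 2.0 ** (-n_man)) * 2.0 ** (top_exp - 1)

print(max_finite(4, 3))                 # 240.0   E4M3 under plain IEEE rules
print(max_finite(4, 3, extended=True))  # 448.0   E4M3 with the extended range
print(max_finite(5, 2))                 # 57344.0 E5M2, IEEE-conformant
```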