All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and
Memory-Efficient Inference of Deep Neural Networks
- URL: http://arxiv.org/abs/2104.07329v1
- Date: Thu, 15 Apr 2021 09:37:23 GMT
- Title: All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and
Memory-Efficient Inference of Deep Neural Networks
- Authors: Cheng-Wei Huang, Tim-Wei Chen, and Juinn-Dar Huang
- Abstract summary: This paper introduces an extremely flexible 8-bit floating-point (FFP8) format.
It achieves an extremely low accuracy loss of $0.1\%\sim 0.3\%$ for several representative image classification models.
It is easy to turn a classical floating-point processing unit into an FFP8-compliant one, and the extra hardware cost is minor.
- Score: 2.294014185517203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep neural network (DNN) models generally require a huge amount of
weight and activation values to achieve good inference outcomes. Those data
inevitably demand a massive off-chip memory capacity/bandwidth, and the
situation gets even worse if they are represented in high-precision
floating-point formats. Effort has been made for representing those data in
different 8-bit floating-point formats, nevertheless, a notable accuracy loss
is still unavoidable. In this paper we introduce an extremely flexible 8-bit
floating-point (FFP8) format whose defining factors - the bit width of
exponent/fraction field, the exponent bias, and even the presence of the sign
bit - are all configurable. We also present a methodology to properly determine
those factors so that the accuracy of model inference can be maximized. The
foundation of this methodology is based on a key observation - both the maximum
magnitude and the value distribution are quite dissimilar between weights and
activations in most DNN models. Experimental results demonstrate that the
proposed FFP8 format achieves an extremely low accuracy loss of $0.1\%\sim
0.3\%$ for several representative image classification models even without the
need of model retraining. Besides, it is easy to turn a classical
floating-point processing unit into an FFP8-compliant one, and the extra
hardware cost is minor.
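As a rough illustration of the configurability described in the abstract, the sketch below decodes an 8-bit pattern under a caller-chosen exponent width, fraction width, exponent bias, and optional sign bit. The `FFP8Config` name, the IEEE-like subnormal handling, and the example configuration are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of decoding a configurable 8-bit floating-point value.
# Field widths, bias, and the optional sign bit are parameters, mirroring the
# configurability described in the abstract; the exact FFP8 encoding rules
# (e.g. special values) are assumptions here.
from dataclasses import dataclass


@dataclass
class FFP8Config:
    exp_bits: int    # bit width of the exponent field
    frac_bits: int   # bit width of the fraction field
    bias: int        # exponent bias
    has_sign: bool   # whether a sign bit is present

    def __post_init__(self):
        total = self.exp_bits + self.frac_bits + (1 if self.has_sign else 0)
        assert total == 8, "all fields together must occupy exactly 8 bits"


def decode(byte: int, cfg: FFP8Config) -> float:
    """Interpret an 8-bit pattern under the given configuration (IEEE-like, with subnormals)."""
    frac = byte & ((1 << cfg.frac_bits) - 1)
    exp = (byte >> cfg.frac_bits) & ((1 << cfg.exp_bits) - 1)
    sign = -1.0 if (cfg.has_sign and (byte >> (cfg.exp_bits + cfg.frac_bits)) & 1) else 1.0
    if exp == 0:  # subnormal range
        return sign * (frac / (1 << cfg.frac_bits)) * 2.0 ** (1 - cfg.bias)
    return sign * (1.0 + frac / (1 << cfg.frac_bits)) * 2.0 ** (exp - cfg.bias)


# Example: an unsigned format with a 4-bit exponent and 4-bit fraction
# (purely illustrative values, not the configuration chosen in the paper).
cfg = FFP8Config(exp_bits=4, frac_bits=4, bias=7, has_sign=False)
print(decode(0b0111_1000, cfg))  # exponent=7, fraction=8/16 -> 1.5
```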
Related papers
- "Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization [67.3213104337679]
We evaluate popular quantization formats across academic benchmarks and real-world tasks.
We find that W4A16 offers the best cost-efficiency for synchronous deployments, and for asynchronous deployment on mid-tier architectures.
arXiv Detail & Related papers (2024-11-04T18:21:59Z)
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable through our training procedure, including its gradient-based optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs [39.410068572891475]
Post-training quantization (PTQ) is a powerful technique for model compression, reducing the numerical precision in neural networks without additional training overhead.
Recent works have investigated adopting 8-bit floating-point formats (FP8) in the context of PTQ for model inference.
We present minifloats, which are reduced-precision floating-point formats capable of further reducing the memory footprint, latency, and energy cost of a model.
arXiv Detail & Related papers (2023-11-21T05:27:16Z)
- The case for 4-bit precision: k-bit Inference Scaling Laws [75.4335600212427]
Quantization methods reduce the number of bits required to represent each parameter in a model.
The final model size depends on both the number of parameters of the original model and the rate of compression.
We run more than 35,000 zero-shot experiments with 16-bit inputs and k-bit parameters to examine which quantization methods improve scaling for 3 to 8-bit precision.
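The size/precision trade-off mentioned above is simple arithmetic: parameter storage scales linearly with the bit width. A back-of-the-envelope sketch (the 7-billion-parameter count is a hypothetical round number, not a figure from the paper):

```python
# Back-of-the-envelope model size under k-bit parameter quantization.
def model_size_gib(num_params: float, bits_per_param: int) -> float:
    """Storage for the parameters alone, ignoring any quantization metadata."""
    return num_params * bits_per_param / 8 / 2**30

params = 7e9  # a hypothetical 7-billion-parameter model
for bits in (16, 8, 4, 3):
    print(f"{bits}-bit parameters: {model_size_gib(params, bits):.1f} GiB")
```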
arXiv Detail & Related papers (2022-12-19T18:48:33Z)
- FP8 Formats for Deep Learning [49.54015320992368]
We propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings.
E4M3's dynamic range is extended by not representing infinities and having only one mantissa bit-pattern for NaNs.
We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions.
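A quick decoder consistent with the E4M3 behaviour summarized above: a 4-bit exponent, a 3-bit mantissa, no infinities, and a single mantissa bit-pattern reserved for NaN. The exponent bias of 7 is the usual choice for a 4-bit exponent and is assumed here rather than quoted from the paper.

```python
def decode_e4m3(byte: int) -> float:
    """Decode one E4M3 byte: bias 7 assumed, no infinities,
    only the all-ones mantissa under the all-ones exponent is NaN."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    frac = byte & 0x7
    if exp == 0xF and frac == 0x7:        # the single NaN bit-pattern per sign
        return float("nan")
    if exp == 0:                          # subnormals
        return sign * (frac / 8.0) * 2.0 ** (1 - 7)
    return sign * (1.0 + frac / 8.0) * 2.0 ** (exp - 7)

# Largest finite magnitude: exponent 0b1111, mantissa 0b110 -> 1.75 * 2^8 = 448
print(decode_e4m3(0b0_1111_110))  # 448.0
```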
arXiv Detail & Related papers (2022-09-12T17:39:55Z)
- FP8 Quantization: The Power of the Exponent [19.179749424362686]
This paper investigates the benefit of the floating point format for neural network inference.
We detail the choices that can be made for the FP8 format, including the important choice of the number of bits for the mantissa and exponent.
We show how these findings translate to real networks, and provide an efficient implementation for FP8 simulation together with a new algorithm.
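A generic way to experiment with such exponent/mantissa splits is "fake quantization": round FP32 values to the nearest value representable in the target format while continuing to compute in FP32. The sketch below is a simplified stand-in for that idea, not the authors' implementation, and it ignores NaN/Inf handling.

```python
import numpy as np

def fp8_simulate(x: np.ndarray, exp_bits: int, man_bits: int, bias: int) -> np.ndarray:
    """Round FP32 values to the nearest value representable with the given
    exponent/mantissa split (a generic fake-quantization pass, no NaN/Inf handling)."""
    max_exp = (1 << exp_bits) - 1 - bias                  # largest unbiased exponent
    max_val = (2.0 - 2.0 ** -man_bits) * 2.0 ** max_exp   # largest representable magnitude
    x = np.clip(x, -max_val, max_val)
    # per-value exponent, clamped at the subnormal boundary
    exp = np.floor(np.log2(np.maximum(np.abs(x), 2.0 ** (1 - bias))))
    step = 2.0 ** (exp - man_bits)                        # spacing between neighbours
    return np.round(x / step) * step

w = np.array([0.1234, -1.7, 3.14159, 0.002], dtype=np.float32)
print(fp8_simulate(w, exp_bits=4, man_bits=3, bias=7))
```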
arXiv Detail & Related papers (2022-08-19T09:03:00Z)
- 8-bit Numerical Formats for Deep Neural Networks [1.304892050913381]
We present an in-depth study on the use of 8-bit floating-point number formats for activations, weights, and gradients for both training and inference.
Experiments demonstrate that a suitable choice of these low-precision formats enables faster training and reduced power consumption without any degradation in accuracy for a range of deep learning models for image classification and language processing.
arXiv Detail & Related papers (2022-06-06T21:31:32Z)
- PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit [5.534626267734822]
The presented research evaluates the feasibility of training deep convolutional neural networks using posits.
A software framework was developed to use simulated posits and quires in end-to-end training and inference.
Results suggest that 8-bit posits can substitute 32-bit floats during training with no negative impact on the resulting loss and accuracy.
arXiv Detail & Related papers (2021-04-30T19:30:37Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We find that the accuracy decline is due to activation quantization and therefore replace the conventional ReLU with a Bounded ReLU.
Our integer networks achieve performance equivalent to the corresponding FPN networks, but have only 1/4 of the memory cost and run 2x faster on modern GPUs.
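Bounded ReLU simply clamps activations to a fixed interval so that their range is known ahead of quantization. A minimal sketch (the bound of 6.0 is an illustrative choice, as in ReLU6, not necessarily the paper's value):

```python
import numpy as np

def bounded_relu(x: np.ndarray, bound: float = 6.0) -> np.ndarray:
    """Clamp activations to [0, bound] so they map onto a fixed integer range."""
    return np.clip(x, 0.0, bound)

print(bounded_relu(np.array([-2.0, 0.5, 3.0, 9.0])))  # -> [0.  0.5 3.  6. ]
```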
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
- Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks [13.929168096016957]
We introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers.
Reduced bit precision allows for a larger effective memory and increased computational speed.
We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models.
arXiv Detail & Related papers (2020-01-16T06:38:27Z)