Compressed Real Numbers for AI: a case-study using a RISC-V CPU
- URL: http://arxiv.org/abs/2309.07158v1
- Date: Mon, 11 Sep 2023 07:54:28 GMT
- Title: Compressed Real Numbers for AI: a case-study using a RISC-V CPU
- Authors: Federico Rossi, Marco Cococcioni, Roger Ferrer Ibáñez, Jesús
Labarta, Filippo Mantovani, Marc Casas, Emanuele Ruffaldi and Sergio Saponara
- Abstract summary: We focus on two families of formats that have achieved interesting results in compressing binary32 numbers in machine learning applications.
We propose a way to decompress a tensor of bfloat/posits just before computations.
- Score: 2.0516276923852415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As recently demonstrated, Deep Neural Networks (DNN), usually trained using
single precision IEEE 754 floating point numbers (binary32), can also work
using lower precision. Therefore, 16-bit and 8-bit compressed formats have
attracted considerable attention. In this paper, we focus on two families of
formats that have already achieved interesting results in compressing binary32
numbers in machine learning applications without appreciable degradation of
accuracy: bfloat and posit. Although 16-bit and 8-bit bfloats/posits are
routinely used to reduce the storage of the weights/biases of trained DNNs,
inference still often happens on the 32-bit FPU of the CPU (especially if GPUs
are not available). We propose a way to decompress a tensor of bfloats/posits
just before computation, i.e., after the compressed operands have been loaded
into the vector registers of a vector-capable CPU, in order to save bandwidth
and increase cache efficiency. Finally, we show the architectural parameters
and considerations under which this solution is advantageous with respect to
the uncompressed one.
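The core idea (keep tensors compressed in memory, load the 16-bit words into vector registers, and widen them to binary32 only right before the arithmetic) can be illustrated with a minimal scalar sketch. The snippet below is not the paper's vectorized RISC-V implementation; it only shows the bfloat16-to-binary32 widening, a 16-bit left shift back into the upper half of the binary32 bit pattern, applied element by element inside a dot product. The names `bf16_t`, `bf16_to_f32` and `dot_bf16` are illustrative, not from the paper.

```c
/* Minimal scalar sketch of "decompress just before compute" for bfloat16.
 * A real implementation, as in the paper, would perform the widening inside
 * the vector registers of a vector-capable CPU; the names bf16_t,
 * bf16_to_f32 and dot_bf16 are illustrative only. */
#include <stdint.h>
#include <string.h>

typedef uint16_t bf16_t;  /* bfloat16 stores the upper 16 bits of a binary32 */

/* Widen one bfloat16 to binary32: shift it back into the high half. */
static inline float bf16_to_f32(bf16_t x) {
    uint32_t bits = (uint32_t)x << 16;
    float f;
    memcpy(&f, &bits, sizeof f);  /* bit-level reinterpretation */
    return f;
}

/* Dot product over compressed operands: both tensors stay in bfloat16 in
 * memory (half the bandwidth and cache footprint of binary32); each element
 * is widened only when it is about to be multiplied and accumulated. */
static float dot_bf16(const bf16_t *a, const bf16_t *b, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc += bf16_to_f32(a[i]) * bf16_to_f32(b[i]);  /* binary32 math */
    return acc;
}
```

Posit operands fit the same load-then-widen pattern, but their decode step is more involved, since the regime, exponent and fraction fields have variable width.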
Related papers
- Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs [39.410068572891475]
Post-training quantization (PTQ) is a powerful technique for model compression, reducing the numerical precision in neural networks without additional training overhead.
Recent works have investigated adopting 8-bit floating-point formats (FP8) in the context of PTQ for model inference.
We present minifloats, which are reduced-precision floating-point formats capable of further reducing the memory footprint, latency, and energy cost of a model.
arXiv Detail & Related papers (2023-11-21T05:27:16Z) - INR-Arch: A Dataflow Architecture and Compiler for Arbitrary-Order
Gradient Computations in Implicit Neural Representation Processing [66.00729477511219]
Given a function represented as a computation graph, traditional architectures face challenges in efficiently computing its nth-order gradient.
We introduce INR-Arch, a framework that transforms the computation graph of an nth-order gradient into a hardware-optimized dataflow architecture.
We present results that demonstrate 1.8-4.8x and 1.5-3.6x speedup compared to CPU and GPU baselines respectively.
arXiv Detail & Related papers (2023-08-11T04:24:39Z) - DeepGEMM: Accelerated Ultra Low-Precision Inference on CPU Architectures
using Lookup Tables [49.965024476651706]
DeepGEMM is a lookup table based approach for the execution of ultra low-precision convolutional neural networks on SIMD hardware.
Our implementation outperforms corresponding 8-bit integer kernels by up to 1.74x on x86 platforms.
arXiv Detail & Related papers (2023-04-18T15:13:10Z) - Exploiting Kernel Compression on BNNs [0.0]
In this work, we observe that the number of unique sequences representing a set of weights is typically low.
We propose a clustering scheme to identify the most common sequences of bits and replace the less common ones with some similar common sequences.
Our experimental results show that our technique can reduce memory requirement by 1.32x and improve performance by 1.35x.
arXiv Detail & Related papers (2022-12-01T16:05:10Z) - FP8 Formats for Deep Learning [49.54015320992368]
We propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings.
E4M3's dynamic range is extended by not representing infinities and having only one mantissa bit-pattern for NaNs.
We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions.
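(An illustrative decoder for this E4M3 encoding is sketched after this related-papers list.)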
arXiv Detail & Related papers (2022-09-12T17:39:55Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit [5.534626267734822]
The presented research evaluates the feasibility of training deep convolutional neural networks using posits.
A software framework was developed to use simulated posits and quires in end-to-end training and inference.
Results suggest that 8-bit posits can substitute 32-bit floats during training with no negative impact on the resulting loss and accuracy.
arXiv Detail & Related papers (2021-04-30T19:30:37Z) - Efficient and Generic 1D Dilated Convolution Layer for Deep Learning [52.899995651639436]
We introduce our efficient implementation of a generic 1D convolution layer covering a wide range of parameters.
It is optimized for x86 CPU architectures, in particular, for architectures containing Intel AVX-512 and AVX-512 BFloat16 instructions.
We demonstrate the performance of our optimized 1D convolution layer by utilizing it in the end-to-end neural network training with real genomics datasets.
arXiv Detail & Related papers (2021-04-16T09:54:30Z) - Representation range needs for 16-bit neural network training [2.2657486535885094]
In floating-point arithmetic there is a tradeoff between precision and representation range as the number of exponent bits changes.
We propose a 1/6/9 format, i.e., 6-bit exponent and 9-bit explicit mantissa, that offers a better range-precision tradeoff.
We show that 1/6/9 mixed-precision training is able to speed up training on hardware that incurs a performance slowdown on denormal operations.
arXiv Detail & Related papers (2021-03-29T20:30:02Z) - FBGEMM: Enabling High-Performance Low-Precision Deep Learning Inference [1.1292678337479967]
fbgemm is a high-performance kernel library for quantized inference on current generation CPUs.
fbgemm achieves efficiency by fusing common quantization operations with a high-performance gemm implementation and by shape- and size-specific kernel code generation at runtime.
The library has been deployed at Facebook, where it delivers greater than 2x performance gains with respect to our current production baseline.
arXiv Detail & Related papers (2021-01-13T00:34:04Z) - Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace conventional ReLU with Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding FPN networks, but have only 1/4 memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
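As a companion to the FP8 Formats entry above, here is an illustrative scalar decoder for the E4M3 encoding it describes: exponent bias 7, no infinities, and a single mantissa bit pattern at the top exponent reserved for NaN. This is a sketch written for this summary, not code from that paper; the function name `e4m3_to_float` is ours.

```c
/* Illustrative decoder for the E4M3 encoding described in "FP8 Formats for
 * Deep Learning": 1 sign, 4 exponent (bias 7), 3 mantissa bits, no
 * infinities, and only exponent=0b1111 with mantissa=0b111 reserved for NaN.
 * Sketch for this summary; not code from that paper. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static float e4m3_to_float(uint8_t x) {
    int sign = (x >> 7) & 0x1;
    int exp  = (x >> 3) & 0xF;   /* 4 exponent bits */
    int man  =  x       & 0x7;   /* 3 mantissa bits */

    float v;
    if (exp == 0xF && man == 0x7)
        v = NAN;                                /* the single NaN pattern */
    else if (exp == 0)
        v = ldexpf((float)man / 8.0f, 1 - 7);   /* subnormal: 2^-6 * m/8  */
    else
        v = ldexpf(1.0f + (float)man / 8.0f, exp - 7);
    return sign ? -v : v;
}

int main(void) {
    /* 0x7E = 0.1111.110 decodes to 448, the maximum finite E4M3 value
     * made possible by dropping infinities and all but one NaN pattern. */
    printf("%g\n", e4m3_to_float(0x7E));
    return 0;
}
```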