Analyzing Quantization in TVM
- URL: http://arxiv.org/abs/2308.10905v1
- Date: Sat, 19 Aug 2023 07:39:46 GMT
- Title: Analyzing Quantization in TVM
- Authors: Mingfei Guo
- Abstract summary: TVM has the ability to quantize weights and support low-bit computations.
8-bit quantization is usually expected to achieve around 50% of the full-precision inference time.
In this project, we thoroughly investigate the reasons behind the underperformance and assess the compatibility and optimization opportunities of 8-bit quantization in TVM.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There have been many papers in the academic literature on quantizing
weight tensors in deep learning models to reduce inference latency and memory
footprint. TVM also has the ability to quantize weights and support low-bit
computations. Although quantization is typically expected to improve inference
time, in TVM the performance of 8-bit quantization does not meet expectations.
Applying 8-bit quantization to a deep learning model is usually expected to
achieve around 50% of the full-precision inference time. In this particular
case, however, not only does the quantized version fail to achieve the desired
performance boost, it actually performs worse, resulting in an inference time
about 2 times as slow as the non-quantized version. In this project, we
thoroughly investigate the reasons behind the underperformance and assess the
compatibility and optimization opportunities of 8-bit quantization in TVM. We
discuss the optimization of two different types of tasks, computation-bound and
memory-bound, and provide a detailed comparison of various optimization
techniques in TVM. Through the identification of performance issues, we
improved quantization by addressing a bug in graph building. Furthermore, we
analyze multiple optimization strategies to achieve the optimal quantization
result. The best experiment achieves a 163.88% improvement in inference time
over the TVM-compiled baseline for the compute-bound task and 194.98% for the
memory-bound task.
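For concreteness, here is a minimal sketch of the kind of measurement the paper describes, using TVM's built-in Relay quantization pass. The ResNet-18 workload, the llvm target, and the qconfig values are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor
from tvm.relay import testing

# Toy workload standing in for the paper's models (assumption).
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Apply TVM's built-in 8-bit quantization; calibrate_mode and
# global_scale are illustrative defaults, not tuned values.
with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
    qmod = relay.quantize.quantize(mod, params)

# Compile and time both the fp32 and int8 graphs on CPU.
target, dev = "llvm", tvm.cpu(0)
data = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
for name, m, p in [("fp32", mod, params), ("int8", qmod, None)]:
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(m, target=target, params=p)
    rt = graph_executor.GraphModule(lib["default"](dev))
    rt.set_input("data", data)
    print(name, rt.benchmark(dev, number=10))
```

If the int8 timings come out slower than fp32 here, that mirrors the regression this paper investigates.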
Related papers
- ISQuant: apply squant to the real deployment [0.0]
We analyze why the combination of quantization and dequantization is used to train the model.
We propose ISQuant as a solution for deploying 8-bit models.
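As a rough illustration of the quantize-dequantize pattern this summary refers to, here is a generic fake-quantization sketch (not ISQuant's actual algorithm): values are rounded onto an 8-bit grid and immediately mapped back to float, so training sees the quantization error while gradients still flow in full precision.

```python
import numpy as np

def fake_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    qmax = 2 ** (num_bits - 1) - 1                     # 127 for int8
    scale = max(np.abs(x).max(), 1e-8) / qmax          # per-tensor scale
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)  # quantize
    return q * scale                                   # dequantize to float

w = np.random.randn(64, 64).astype(np.float32)
print(np.abs(w - fake_quantize(w)).max())              # small rounding error
```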
arXiv Detail & Related papers (2024-07-05T15:10:05Z)
- Atom: Low-bit Quantization for Efficient and Accurate LLM Serving [7.126191142715184]
We introduce Atom, a low-bit quantization method that achieves high throughput improvements with negligible accuracy loss.
Atom significantly boosts serving by using low-bit operators and considerably reduces memory consumption via low-bit quantization.
arXiv Detail & Related papers (2023-10-29T18:33:05Z)
- On-Chip Hardware-Aware Quantization for Mixed Precision Neural Networks [52.97107229149988]
We propose an On-Chip Hardware-Aware Quantization framework, performing hardware-aware mixed-precision quantization on deployed edge devices.
For efficiency metrics, we built an On-Chip Quantization Aware pipeline, which allows the quantization process to perceive the actual hardware efficiency of the quantization operator.
For accuracy metrics, we propose Mask-Guided Quantization Estimation technology to effectively estimate the accuracy impact of operators in the on-chip scenario.
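A hypothetical sketch of the overall search loop this suggests; measure_latency and estimate_accuracy_drop below are placeholder stubs standing in for on-device timing and the paper's mask-guided estimation, not its actual API.

```python
import random

def measure_latency(layer, bits):
    # Placeholder for a real on-device timing run (the "On-Chip" part).
    return random.uniform(0.5, 1.0) * bits / 8

def estimate_accuracy_drop(layer, bits):
    # Placeholder for the paper's mask-guided accuracy estimation.
    return 0.02 if bits < 8 else 0.005

def choose_bitwidths(layers, candidate_bits=(4, 8), max_drop=0.01):
    # Per layer, pick the lowest-latency bitwidth whose estimated
    # accuracy drop stays within budget; fall back to the widest bits.
    plan = {}
    for layer in layers:
        best = None
        for bits in candidate_bits:
            lat = measure_latency(layer, bits)
            drop = estimate_accuracy_drop(layer, bits)
            if drop <= max_drop and (best is None or lat < best[1]):
                best = (bits, lat)
        plan[layer] = best[0] if best else max(candidate_bits)
    return plan

print(choose_bitwidths(["conv1", "conv2", "fc"]))
```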
arXiv Detail & Related papers (2023-09-05T04:39:34Z)
- F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization [47.403304754934155]
We present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication.
Our approach achieves performance comparable to or better than existing quantization techniques.
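A minimal sketch of the fixed-point primitive F8Net is built around (our simplification, not the paper's exact formulation): floats are stored as 8-bit integers with an implicit number of fractional bits, so multiplication becomes an integer multiply followed by a right shift.

```python
def to_fixed(x: float, frac_bits: int) -> int:
    # Encode a float as an int8 with `frac_bits` fractional bits.
    v = round(x * (1 << frac_bits))
    return max(-128, min(127, v))            # saturate to the int8 range

def fixed_mul(a: int, b: int, frac_bits: int) -> int:
    # The 16-bit product carries 2*frac_bits fractional bits;
    # shifting right by frac_bits restores the original format.
    return (a * b) >> frac_bits

fa, fb = to_fixed(0.75, 5), to_fixed(-1.5, 5)
print(fixed_mul(fa, fb, 5) / (1 << 5))       # ~ -1.125, no float multiply used
```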
arXiv Detail & Related papers (2022-02-10T18:48:56Z)
- 8-bit Optimizers via Block-wise Quantization [57.25800395197516]
Stateful optimizers maintain statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values.
This state can be used to accelerate optimization compared to plain gradient descent but uses memory that might otherwise be allocated to model parameters.
In this paper, we develop the first optimizers that use 8-bit statistics while maintaining the performance levels of using 32-bit optimizer states.
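A simplified sketch of block-wise quantization applied to an optimizer state tensor: each block is normalized by its own absolute maximum, so a single outlier cannot ruin the precision of the whole tensor. For brevity this uses linear 8-bit codes, whereas the paper uses a dynamic quantization data type.

```python
import numpy as np

def blockwise_quantize(x: np.ndarray, block: int = 64):
    x = x.reshape(-1, block)
    scales = np.abs(x).max(axis=1, keepdims=True) + 1e-12  # per-block absmax
    q = np.round(x / scales * 127).astype(np.int8)         # 8-bit codes
    return q, scales

def blockwise_dequantize(q, scales):
    return q.astype(np.float32) / 127 * scales

state = np.random.randn(1024).astype(np.float32)           # e.g. Adam moment
q, s = blockwise_quantize(state)
print(np.abs(state - blockwise_dequantize(q, s).ravel()).max())
```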
arXiv Detail & Related papers (2021-10-06T15:43:20Z)
- Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain an 81.29% top-1 accuracy using the DeiT-B model on the ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z)
- Subtensor Quantization for Mobilenets [5.735035463793008]
Quantization for deep neural networks (DNNs) has enabled developers to deploy models with less memory and more efficient low-power inference.
In this paper, we analyzed several root causes of quantization loss and proposed alternatives that do not rely on per-channel or training-aware approaches.
We evaluate the image classification task on the ImageNet dataset, and our post-training quantized 8-bit inference top-1 accuracy is within 0.7% of the floating-point version.
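A rough numpy sketch of the subtensor idea under a toy setup of our own: splitting a tensor so an outlier slice gets its own scale reduces the average rounding error compared with a single per-tensor scale, without going all the way to per-channel quantization.

```python
import numpy as np

def quantize_per_tensor(x):
    scale = np.abs(x).max() / 127 + 1e-12
    q = np.round(x / scale).astype(np.int8)
    return q.astype(np.float32) * scale              # reconstructed values

rng = np.random.default_rng(0)
w = rng.standard_normal((32, 16)).astype(np.float32)
w[0] *= 50                                           # outlier row inflates the scale

err_tensor = np.abs(w - quantize_per_tensor(w)).mean()

# Subtensor: split and quantize each half with its own scale.
recon = np.vstack([quantize_per_tensor(h) for h in np.split(w, 2)])
err_sub = np.abs(w - recon).mean()
print(err_tensor, err_sub)                           # subtensor error is lower
```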
arXiv Detail & Related papers (2020-11-04T15:41:47Z)
- Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search [112.05977301976613]
We propose to combine Network Architecture Search methods with quantization to enjoy the merits of the two sides.
We first propose the joint training of architecture and quantization with a shared step size to acquire a large number of quantized models.
Then a bit-inheritance scheme is introduced to transfer the quantized models to the lower bit, which further reduces the time cost and improves the quantization accuracy.
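A rough sketch of how such a bit-inheritance rule might look; the specific "double the step size per bit removed" rule below is our assumption for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def uniform_quantize(x, step, num_bits):
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / step), -qmax - 1, qmax) * step

x = np.random.randn(1000).astype(np.float32)
step8 = np.abs(x).max() / 127       # step size "learned" at 8 bits (stand-in)
step4 = step8 * (2 ** (8 - 4))      # inherited: doubled per bit removed (assumption)
err8 = np.abs(x - uniform_quantize(x, step8, 8)).mean()
err4 = np.abs(x - uniform_quantize(x, step4, 4)).mean()
print(err8, err4)                   # the 4-bit child starts from a sane grid
```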
arXiv Detail & Related papers (2020-10-09T03:52:16Z)
- Leveraging Automated Mixed-Low-Precision Quantization for tiny edge microcontrollers [76.30674794049293]
This paper presents an automated mixed-precision quantization flow based on the HAQ framework but tailored for the memory and computational characteristics of MCU devices.
Specifically, a Reinforcement Learning agent searches for the best uniform quantization levels, among 2, 4, 8 bits, of individual weight and activation tensors.
Given an MCU-class memory bound of 2MB for weight-only quantization, the compressed models produced by the mixed-precision engine are as accurate as the state-of-the-art solutions.
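As a back-of-the-envelope check of the 2MB constraint such a search must respect (the tensor sizes below are made up for illustration):

```python
LIMIT_BYTES = 2 * 1024 * 1024

def weights_fit(num_params_per_tensor, bits_per_tensor):
    # Total weight storage with per-tensor bitwidths from {2, 4, 8}.
    total_bits = sum(n * b for n, b in zip(num_params_per_tensor, bits_per_tensor))
    return total_bits / 8 <= LIMIT_BYTES

sizes = [1_200_000, 2_400_000, 600_000]      # hypothetical tensor sizes
print(weights_fit(sizes, [8, 8, 8]))         # 4.2 MB -> False
print(weights_fit(sizes, [4, 2, 8]))         # ~1.8 MB -> True
```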
arXiv Detail & Related papers (2020-08-12T06:09:58Z)