A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance
- URL: http://arxiv.org/abs/2308.13504v1
- Date: Fri, 25 Aug 2023 17:28:58 GMT
- Title: A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance
- Authors: Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig
- Abstract summary: Accumulator-aware quantization (A2Q) is a novel weight quantization method designed to train quantized neural networks (QNNs) to avoid overflow during inference.
A2Q introduces a unique formulation inspired by weight normalization that constrains the L1-norm of model weights according to accumulator bit width bounds.
We show A2Q can train QNNs for low-precision accumulators while maintaining model accuracy competitive with a floating-point baseline.
- Score: 49.1574468325115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present accumulator-aware quantization (A2Q), a novel weight quantization
method designed to train quantized neural networks (QNNs) to avoid overflow
when using low-precision accumulators during inference. A2Q introduces a unique
formulation inspired by weight normalization that constrains the L1-norm of
model weights according to accumulator bit width bounds that we derive. Thus,
in training QNNs for low-precision accumulation, A2Q also inherently promotes
unstructured weight sparsity to guarantee overflow avoidance. We apply our
method to deep learning-based computer vision tasks to show that A2Q can train
QNNs for low-precision accumulators while maintaining model accuracy
competitive with a floating-point baseline. In our evaluations, we consider the
impact of A2Q on both general-purpose platforms and programmable hardware.
However, we primarily target model deployment on FPGAs because they can be
programmed to fully exploit custom accumulator bit widths. Our experimentation
shows accumulator bit width significantly impacts the resource efficiency of
FPGA-based accelerators. On average across our benchmarks, A2Q offers up to a
2.3x reduction in resource utilization over 32-bit accumulator counterparts
with 99.2% of the floating-point model accuracy.
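To make the overflow-avoidance condition concrete, here is a minimal worst-case sketch: with unsigned N-bit activations, a dot product's magnitude is at most ||w||_1 * (2^N - 1), so a signed P-bit accumulator cannot overflow whenever ||w||_1 <= (2^(P-1) - 1) / (2^N - 1). The function names and the post-hoc rescaling below are illustrative assumptions; A2Q itself enforces the bound during training through a weight-normalization-style reparameterization.

```python
import numpy as np

def l1_budget(acc_bits: int, act_bits: int, act_signed: bool = False) -> float:
    """L1-norm budget on a channel's weights so that accumulating a dot product
    with `act_bits`-bit activations cannot overflow a signed `acc_bits`-bit
    register. Illustrative worst-case derivation, not the paper's exact formulation."""
    max_act = 2 ** (act_bits - 1) if act_signed else 2 ** act_bits - 1
    max_acc = 2 ** (acc_bits - 1) - 1
    return max_acc / max_act

def rescale_to_budget(w: np.ndarray, acc_bits: int, act_bits: int) -> np.ndarray:
    """Shrink a weight vector onto the accumulator-implied L1 ball.
    A2Q instead constrains the norm continuously during training, which is
    what yields the unstructured sparsity mentioned in the abstract."""
    budget = l1_budget(acc_bits, act_bits)
    norm = np.abs(w).sum()
    return w if norm <= budget else w * (budget / norm)

# Example: 8-bit unsigned activations accumulated in a 16-bit register allow an
# L1 norm of at most 32767 / 255 (roughly 128.5) per output channel.
w = np.random.randn(512)
w_safe = rescale_to_budget(w, acc_bits=16, act_bits=8)
print(np.abs(w_safe).sum(), "<=", l1_budget(16, 8))
```

In practice the bound applies to the integer-quantized weights of each output channel; the float vector above is only a stand-in for illustration.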
Related papers
- Trainable Fixed-Point Quantization for Deep Learning Acceleration on FPGAs [30.325651150798915]
Quantization is a crucial technique for deploying deep learning models on resource-constrained devices, such as embedded FPGAs.
We present QFX, a trainable fixed-point quantization approach that automatically learns the binary-point position during model training.
QFX is implemented as a PyTorch-based library that efficiently emulates fixed-point arithmetic, supported by FPGA HLS.
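For intuition, a generic signed fixed-point emulation looks like the sketch below; this is not the QFX library's API, and the binary-point position is a fixed hyperparameter here rather than the trainable parameter QFX learns during training.

```python
import numpy as np

def fixed_point_quantize(x: np.ndarray, total_bits: int, frac_bits: int) -> np.ndarray:
    """Emulate a signed fixed-point format with `total_bits` bits, `frac_bits`
    of which lie to the right of the binary point (step size 2**-frac_bits)."""
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), qmin, qmax) / scale

x = np.array([0.1, -1.7, 3.14159])
# 8 bits total, 4 fractional bits: representable range [-8.0, 7.9375], step 0.0625
print(fixed_point_quantize(x, total_bits=8, frac_bits=4))  # approx. [0.125, -1.6875, 3.125]
```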
arXiv Detail & Related papers (2024-01-31T02:18:27Z)
- A2Q+: Improving Accumulator-Aware Weight Quantization [45.14832807541816]
Quantization techniques commonly reduce the inference costs of neural networks by restricting the precision of weights and activations.
Recent work proposed accumulator-aware quantization (A2Q), a quantization-aware training method that constrains model weights during training to safely use a target accumulator bit width during inference.
We introduce A2Q+, a new strategy for initializing quantized weights from pre-trained floating-point checkpoints.
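The summary above does not spell out the initialization, so the following is only one plausible illustration of mapping a float checkpoint into an accumulator-constrained regime: a Euclidean projection of each channel's weights onto the permitted L1 ball (a standard routine; treating it as A2Q+'s exact strategy is an assumption).

```python
import numpy as np

def project_l1_ball(v: np.ndarray, radius: float) -> np.ndarray:
    """Euclidean projection of v onto {w : ||w||_1 <= radius}
    (the sort-and-threshold algorithm of Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u > cssv / np.arange(1, len(u) + 1))[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Hypothetical usage: shrink a pre-trained channel onto an accumulator-implied
# L1 budget before quantization-aware fine-tuning; the soft-thresholding also
# introduces exact zeros, echoing the sparsity theme of A2Q.
w_float = np.random.randn(256)
w_init = project_l1_ball(w_float, radius=32.0)
print(np.abs(w_init).sum())  # approximately 32.0
```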
arXiv Detail & Related papers (2024-01-19T00:27:34Z)
- QUIK: Towards End-to-End 4-Bit Inference on Generative Large Language Models [57.04178959678024]
We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits.
We achieve this via a hybrid quantization strategy called QUIK, which compresses most of the weights and activations to 4-bit.
We provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x.
arXiv Detail & Related papers (2023-10-13T17:15:05Z)
- SqueezeLLM: Dense-and-Sparse Quantization [80.32162537942138]
The main bottleneck for single-batch generative inference with LLMs is memory bandwidth rather than compute.
We introduce SqueezeLLM, a post-training quantization framework that enables lossless compression to precisions as low as 3 bits.
Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format.
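As a rough sketch of the dense-and-sparse decomposition (the percentile threshold and plain uniform quantizer below are illustrative assumptions; SqueezeLLM uses sensitivity-based non-uniform codebooks and its own sparse storage):

```python
import numpy as np

def dense_and_sparse_quantize(W: np.ndarray, outlier_pct: float = 0.5, bits: int = 3):
    """Keep the largest-magnitude `outlier_pct` percent of weights exact and
    quantize the remaining dense values with a uniform `bits`-bit grid."""
    thresh = np.percentile(np.abs(W), 100.0 - outlier_pct)
    outlier_mask = np.abs(W) >= thresh

    dense = W[~outlier_mask]
    lo, hi = dense.min(), dense.max()
    scale = (hi - lo) / (2 ** bits - 1)
    dense_dq = np.round((dense - lo) / scale) * scale + lo  # quantize, then dequantize

    W_approx = W.copy()              # outliers stay exact
    W_approx[~outlier_mask] = dense_dq
    return W_approx, outlier_mask

W = np.random.randn(512, 512)
W_approx, mask = dense_and_sparse_quantize(W)
print("outlier fraction:", mask.mean(), "max error:", np.abs(W - W_approx).max())
```

In deployment the exact outliers would live in an efficient sparse format and be applied alongside the low-bit dense matrix, as the summary describes.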
arXiv Detail & Related papers (2023-06-13T08:57:54Z)
- Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance [68.8204255655161]
We introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference.
We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline.
arXiv Detail & Related papers (2023-01-31T02:46:57Z)
- Standard Deviation-Based Quantization for Deep Neural Networks [17.495852096822894]
Quantization of deep neural networks is a promising approach that reduces the inference cost.
We propose a new framework to learn the quantization intervals (discrete values) using knowledge of the network's weight and activation distributions.
Our scheme simultaneously prunes the network's parameters and allows us to flexibly adjust the pruning ratio during the quantization process.
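One simple instantiation of using the distribution to set the interval (a sketch under assumptions, not the paper's exact scheme) is to clip at a multiple of the standard deviation before uniform quantization; weights that land in the zero bin are effectively pruned, mirroring the joint pruning behavior described above.

```python
import numpy as np

def std_clipped_quantize(w: np.ndarray, bits: int = 4, k: float = 3.0) -> np.ndarray:
    """Uniformly quantize w after clipping to [-k*sigma, +k*sigma].
    Using the distribution's spread to pick the interval is the general idea;
    the specific clip rule and symmetric grid here are assumptions."""
    sigma = w.std()
    qmax = 2 ** (bits - 1) - 1
    scale = (k * sigma) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

w = np.random.randn(1000) * 0.05
w_q = std_clipped_quantize(w, bits=4, k=3.0)
print("zeroed (implicitly pruned) fraction:", np.mean(w_q == 0.0))
```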
arXiv Detail & Related papers (2022-02-24T23:33:47Z)
- APQ: Joint Search for Network Architecture, Pruning and Quantization Policy [49.3037538647714]
We present APQ for efficient deep learning inference on resource-constrained hardware.
Unlike previous methods that separately search the neural architecture, pruning policy, and quantization policy, we optimize them in a joint manner.
With the same accuracy, APQ reduces the latency/energy by 2x/1.3x over MobileNetV2+HAQ.
arXiv Detail & Related papers (2020-06-15T16:09:17Z)
- Quantization of Deep Neural Networks for Accumulator-constrained Processors [2.8489574654566674]
We introduce an Artificial Neural Network (ANN) quantization methodology for platforms without wide accumulation registers.
We formulate the quantization problem as a function of accumulator size, and aim to maximize model accuracy by maximizing the bit widths of the input data and weights.
We demonstrate that 16-bit accumulators are able to obtain a classification accuracy within 1% of the floating-point baselines on the CIFAR-10 and ILSVRC2012 image classification benchmarks.
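A quick worst-case calculation makes the accumulator-size formulation tangible (a first-principles sketch, not the paper's exact optimization): a length-K dot product of unsigned a-bit activations with signed w-bit weights needs roughly a + w + ceil(log2(K)) accumulator bits, so admissible operand widths can be read off from the register size.

```python
import math

def accumulator_bits_needed(act_bits: int, weight_bits: int, dot_length: int) -> int:
    """Conservative signed-accumulator width for a length-`dot_length` dot product
    of unsigned `act_bits`-bit activations and signed `weight_bits`-bit weights."""
    return act_bits + weight_bits + math.ceil(math.log2(dot_length))

def max_weight_bits(acc_bits: int, act_bits: int, dot_length: int) -> int:
    """Largest weight width that fits the accumulator under the same worst case."""
    return acc_bits - act_bits - math.ceil(math.log2(dot_length))

# Example: a 3x3 conv over 128 input channels gives K = 1152, so 8-bit
# activations with 8-bit weights need about 27 accumulator bits in the worst
# case; a 32-bit accumulator leaves room for weights of up to 13 bits.
print(accumulator_bits_needed(8, 8, 1152))  # 27
print(max_weight_bits(32, 8, 1152))         # 13
```

Norm-based guarantees such as A2Q's tighten this worst case by bounding the L1 norm of the weights instead of assuming every weight saturates its range.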
arXiv Detail & Related papers (2020-04-24T14:47:14Z)