A2Q+: Improving Accumulator-Aware Weight Quantization
- URL: http://arxiv.org/abs/2401.10432v1
- Date: Fri, 19 Jan 2024 00:27:34 GMT
- Title: A2Q+: Improving Accumulator-Aware Weight Quantization
- Authors: Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig, Yaman
Umuroglu
- Abstract summary: Quantization techniques commonly reduce the inference costs of neural networks by restricting the precision of weights and activations.
Recent work proposed accumulator-aware quantization (A2Q), a quantization-aware training method that constrains model weights during training to safely use a target accumulator bit width during inference.
We introduce A2Q+, a new strategy for initializing quantized weights from pre-trained floating-point checkpoints.
- Score: 45.14832807541816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantization techniques commonly reduce the inference costs of neural
networks by restricting the precision of weights and activations. Recent
studies show that also reducing the precision of the accumulator can further
improve hardware efficiency at the risk of numerical overflow, which introduces
arithmetic errors that can degrade model accuracy. To avoid numerical overflow
while maintaining accuracy, recent work proposed accumulator-aware quantization
(A2Q), a quantization-aware training method that constrains model weights
during training to safely use a target accumulator bit width during inference.
Although this shows promise, we demonstrate that A2Q relies on an overly
restrictive constraint and a sub-optimal weight initialization strategy that
each introduce superfluous quantization error. To address these shortcomings,
we introduce: (1) an improved bound that alleviates accumulator constraints
without compromising overflow avoidance; and (2) a new strategy for
initializing quantized weights from pre-trained floating-point checkpoints. We
combine these contributions with weight normalization to introduce A2Q+. We
support our analysis with experiments that show A2Q+ significantly improves the
trade-off between accumulator bit width and model accuracy and characterize new
trade-offs that arise as a consequence of accumulator constraints.
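To make the overflow condition discussed in the abstract concrete, below is a minimal sketch (not the paper's code): it computes the worst-case accumulator width a quantized dot product can require, and the L1-norm budget that a target accumulator width implies. The conventions (signed P-bit accumulator, unsigned N-bit activations) and the function names are assumptions for illustration only.

```python
import math
import torch

def min_safe_accumulator_bits(q_weights: torch.Tensor, act_bits: int) -> int:
    """Smallest signed accumulator width that can never overflow when the
    integer weights q_weights are multiplied with unsigned act_bits-bit
    activations and summed. Uses the worst-case (Holder) bound
    |sum_i q_i * x_i| <= ||q||_1 * (2^N - 1)."""
    worst = int(q_weights.abs().sum().item()) * (2 ** act_bits - 1)
    return max(2, math.ceil(math.log2(worst + 1)) + 1)  # +1 for the sign bit

def l1_budget(acc_bits: int, act_bits: int) -> float:
    """L1-norm cap on the integer weights implied by a target acc_bits-bit
    signed accumulator; a constraint of this form is what accumulator-aware
    training enforces so that inference stays overflow-free."""
    return (2 ** (acc_bits - 1) - 1) / (2 ** act_bits - 1)

# Example: a 128-input dot product with 4-bit weights and 8-bit activations.
q = torch.randint(-7, 8, (128,))
print(min_safe_accumulator_bits(q, act_bits=8))  # typically well below 32 bits
print(l1_budget(acc_bits=16, act_bits=8))        # L1 budget for a 16-bit accumulator
```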
Related papers
- GWQ: Gradient-Aware Weight Quantization for Large Language Models [61.17678373122165]
Gradient-aware weight quantization (GWQ) is the first low-bit weight quantization approach that leverages gradients to localize outliers.
GWQ preferentially retains the weights corresponding to the top 1% of outliers at FP16 precision, while the remaining non-outlier weights are stored in a low-bit format (a minimal sketch of this outlier-retention scheme appears after the list).
In the zero-shot task, GWQ quantized models have higher accuracy compared to other quantization methods.
arXiv Detail & Related papers (2024-10-30T11:16:04Z)
- QERA: an Analytical Framework for Quantization Error Reconstruction [12.110441045050223]
There is increasing interest in quantizing weights to extremely low precision while offsetting the resulting error with low-rank, high-precision error-reconstruction terms.
The combination of quantization and low-rank approximation is now popular in adapter-based, parameter-efficient fine-tuning methods.
We formulate an analytical framework, named Quantization Error Reconstruction Analysis (QERA), and offer a closed-form solution to the problem.
arXiv Detail & Related papers (2024-10-08T13:37:34Z)
- OAC: Output-adaptive Calibration for Accurate Post-training Quantization [30.115888331426515]
Post-training Quantization (PTQ) techniques have been developed to compress Large Language Models (LLMs).
Most PTQ approaches formulate the quantization error based on a calibrated layer-wise $\ell_2$ loss.
We propose Output-adaptive Calibration (OAC) to incorporate the model output in the calibration process.
arXiv Detail & Related papers (2024-05-23T20:01:17Z)
- Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting [13.270381125055275]
We propose a coarse & fine weight splitting (CFWS) method to reduce the quantization error of weights.
We develop an improved KL metric to determine optimal quantization scales for activation.
For example, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss.
arXiv Detail & Related papers (2023-12-17T02:31:20Z)
- A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance [49.1574468325115]
Accumulator-aware quantization (A2Q) is a novel weight quantization method designed to train quantized neural networks (QNNs) to avoid overflow during inference.
A2Q introduces a unique formulation inspired by weight normalization that constrains the L1-norm of model weights according to accumulator bit width bounds (a worked version of this bound appears at the end of this page).
We show A2Q can train QNNs for low-precision accumulators while maintaining model accuracy competitive with a floating-point baseline.
arXiv Detail & Related papers (2023-08-25T17:28:58Z)
- Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance [68.8204255655161]
We introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference.
We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline.
arXiv Detail & Related papers (2023-01-31T02:46:57Z)
- Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks [73.29587731448345]
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations.
First, to obtain low bit-width weights, most existing methods derive the quantized weights by quantizing the full-precision network weights.
Second, to obtain low bit-width activations, existing works consider all channels equally.
arXiv Detail & Related papers (2020-12-26T15:21:18Z)
- DAQ: Distribution-Aware Quantization for Deep Image Super-Resolution Networks [49.191062785007006]
Quantizing deep convolutional neural networks for image super-resolution substantially reduces their computational costs.
Existing works either suffer a severe performance drop at ultra-low precision (4 bits or lower) or require a heavy fine-tuning process to recover performance.
We propose a novel distribution-aware quantization scheme (DAQ) which facilitates accurate training-free quantization in ultra-low precision.
arXiv Detail & Related papers (2020-12-21T10:19:42Z)
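For the GWQ entry above, the following is a hypothetical sketch of gradient-aware outlier retention consistent with that summary: keep the fraction of weights with the largest gradient magnitude in FP16 and quantize the rest to a low bit width. Function names, parameters, and the symmetric quantizer are illustrative assumptions, not GWQ's actual implementation.

```python
import torch

def gwq_style_mixed_precision(w, grad, outlier_frac=0.01, bits=4):
    """Keep the outlier_frac of weights with the largest gradient magnitude
    in FP16; quantize the remaining weights with a symmetric per-tensor
    scheme (illustrative sketch, not the GWQ reference code)."""
    flat_grad = grad.abs().flatten()
    k = max(1, int(outlier_frac * flat_grad.numel()))
    # Indices of the top-k gradient-magnitude weights, treated as outliers.
    outlier_idx = torch.topk(flat_grad, k).indices

    outlier_mask = torch.zeros_like(flat_grad, dtype=torch.bool)
    outlier_mask[outlier_idx] = True
    outlier_mask = outlier_mask.view_as(w)

    # Quantize non-outliers to low bit width (dequantized here for simplicity).
    qmax = 2 ** (bits - 1) - 1
    scale = w[~outlier_mask].abs().max().clamp(min=1e-12) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

    # Outliers stay at FP16 precision; everything else uses the low-bit values.
    return torch.where(outlier_mask, w.half().float(), q)

# Usage (illustrative): weights and a gradient estimate of the same shape.
w = torch.randn(256, 256)
g = torch.randn(256, 256)
w_mixed = gwq_style_mixed_precision(w, g)
```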
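For the A2Q entry above, the overflow-avoidance reasoning behind the L1-norm constraint can be reconstructed as a short worst-case bound. The conventions (signed P-bit accumulator, unsigned N-bit activations) and the exact form of the weight-normalization parameterization are assumptions based on the abstracts, not a reproduction of the paper's equations.

```latex
% Worst case of an integer dot product with weights q and unsigned
% N-bit activations x_i \in [0,\, 2^N - 1]:
\[
  \Big|\sum_i q_i x_i\Big| \;\le\; \max_i |x_i| \cdot \lVert q \rVert_1
  \;\le\; (2^N - 1)\,\lVert q \rVert_1 .
\]
% A signed P-bit accumulator holds values in [-2^{P-1},\, 2^{P-1}-1],
% so overflow is impossible whenever
\[
  \lVert q \rVert_1 \;\le\; \frac{2^{P-1} - 1}{2^N - 1} \;=:\; T .
\]
% A weight-normalization-style parameterization can enforce this during
% training by learning a direction v and a clamped magnitude g
% (assumed form; the weight quantization scale is omitted here):
\[
  w \;=\; g \,\frac{v}{\lVert v \rVert_1}, \qquad g \;\le\; T .
\]
```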