How to Parameterize Asymmetric Quantization Ranges for Quantization-Aware Training
- URL: http://arxiv.org/abs/2404.16898v1
- Date: Thu, 25 Apr 2024 06:58:16 GMT
- Title: How to Parameterize Asymmetric Quantization Ranges for Quantization-Aware Training
- Authors: Jaeseong You, Minseop Park, Kyunggeun Lee, Seokjun An, Chirag Patel, Markus Nagel
- Abstract summary: This paper investigates three different parameterizations of asymmetric uniform quantization for quantization-aware training.
We propose best practices to stabilize and accelerate quantization-aware training with learnable asymmetric quantization ranges.
- Score: 1.721868124457512
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates three different parameterizations of asymmetric uniform quantization for quantization-aware training: (1) scale and offset, (2) minimum and maximum, and (3) beta and gamma. We perform a comprehensive comparative analysis of these parameterizations' influence on quantization-aware training, using both controlled experiments and real-world large language models. Our particular focus is on their changing behavior in response to critical training hyperparameters, bit width and learning rate. Based on our investigation, we propose best practices to stabilize and accelerate quantization-aware training with learnable asymmetric quantization ranges.
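As a concrete illustration of the quantizers being compared, the sketch below writes the first two parameterizations named in the abstract as standard asymmetric uniform fake-quantizers in PyTorch. This is a minimal sketch under common QAT conventions, not the paper's implementation; the function names are hypothetical, and the exact (beta, gamma) formulation is defined in the paper itself, so it is not reproduced here.
```python
import torch

# Illustrative sketch (assumed conventions, not the paper's code):
# asymmetric uniform fake quantization on an unsigned n-bit integer grid.

def fake_quant_scale_offset(x, scale, offset, n_bits=8):
    """Parameterization (1): learnable step size `scale` and zero-point `offset`."""
    qmin, qmax = 0, 2 ** n_bits - 1
    zero_point = torch.round(offset)                          # integer zero-point
    x_int = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (x_int - zero_point) * scale                       # dequantize to float

def fake_quant_min_max(x, x_min, x_max, n_bits=8):
    """Parameterization (2): learnable range endpoints `x_min` and `x_max`."""
    n_levels = 2 ** n_bits - 1
    scale = (x_max - x_min) / n_levels                        # derived step size
    zero_point = torch.round(-x_min / scale)
    x_int = torch.clamp(torch.round(x / scale) + zero_point, 0, n_levels)
    return (x_int - zero_point) * scale
```
In quantization-aware training, the round operations are typically wrapped in a straight-through estimator so that gradients reach the learnable scale/offset or min/max parameters; the paper studies how such equivalent parameterizations behave under different bit widths and learning rates.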
Related papers
- RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models [95.32315448601241]
We propose an algorithm named Rotated Straight-Through-Estimator (RoSTE)
RoSTE combines quantization-aware supervised fine-tuning (QA-SFT) with an adaptive rotation strategy to reduce activation outliers.
Our findings reveal that the prediction error is directly proportional to the quantization error of the converged weights, which can be effectively managed through an optimized rotation configuration.
arXiv Detail & Related papers (2025-02-13T06:44:33Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers, four parameterizations, and a range of learning rates and model sizes.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- WKVQuant: Quantizing Weight and Key/Value Cache for Large Language Models Gains More [55.0856305773081]
Large Language Models (LLMs) face significant deployment challenges due to their substantial memory requirements and the computational demands of the auto-regressive text generation process.
This paper addresses these challenges by focusing on the quantization of LLMs, a technique that reduces memory consumption by converting model parameters and activations into low-bit integers.
arXiv Detail & Related papers (2024-02-19T11:33:21Z)
- Effect of Weight Quantization on Learning Models by Typical Case Analysis [6.9060054915724]
The recent surge in data analysis scale has significantly increased computational resource requirements.
Quantization is vital for deploying large models on devices with limited computational resources.
arXiv Detail & Related papers (2024-01-30T18:58:46Z)
- Supervised learning for robust quantum control in composite-pulse systems [7.474008952791777]
We develop a supervised learning model for implementing robust quantum control in composite-pulse systems.
This model exhibits great resistance to all kinds of systematic errors, including single, multiple, and time-varying errors.
This work provides a highly efficient learning model for fault-tolerant quantum computation by training various physical parameters.
arXiv Detail & Related papers (2023-08-23T01:37:13Z)
- PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models [52.09865918265002]
We propose a novel "quantize before fine-tuning" framework, PreQuant.
PreQuant is compatible with various quantization strategies, with outlier-aware fine-tuning incorporated to correct the induced quantization error.
We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5.
arXiv Detail & Related papers (2023-05-30T08:41:33Z)
- Variational quantum metrology for multiparameter estimation under dephasing noise [0.8594140167290099]
We present a hybrid quantum-classical variational scheme to enhance precision in quantum metrology.
We discuss specific applications to 3D magnetic field sensing under several dephasing noise modes.
arXiv Detail & Related papers (2023-05-15T01:09:58Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Dimensional Expressivity Analysis of Parametric Quantum Circuits [0.0]
We show how to efficiently implement the expressivity analysis using quantum hardware.
We also discuss the effect of symmetries and demonstrate how to incorporate or remove symmetries from the parametrized ansatz.
arXiv Detail & Related papers (2020-11-06T18:59:04Z)