EQ-Net: Elastic Quantization Neural Networks
- URL: http://arxiv.org/abs/2308.07650v1
- Date: Tue, 15 Aug 2023 08:57:03 GMT
- Title: EQ-Net: Elastic Quantization Neural Networks
- Authors: Ke Xu and Lei Han and Ye Tian and Shangshang Yang and Xingyi Zhang
- Abstract summary: Elastic Quantization Neural Networks (EQ-Net) aims to train a robust weight-sharing quantization supernet.
We propose an elastic quantization space (including elastic bit-width, granularity, and symmetry) to adapt to various mainstream quantization forms.
We incorporate genetic algorithms and the proposed Conditional Quantization-Aware Accuracy Predictor (CQAP) as an estimator to quickly search mixed-precision quantized neural networks in the supernet.
- Score: 15.289359357583079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current model quantization methods have shown their promising capability in
reducing storage space and computation complexity. However, due to the
diversity of quantization forms supported by different hardware, one limitation
of existing solutions is that they usually require repeated optimization for
different scenarios. How to construct a model with flexible quantization forms
has been less studied. In this paper, we explore a one-shot network
quantization regime, named Elastic Quantization Neural Networks (EQ-Net), which
aims to train a robust weight-sharing quantization supernet. First of all, we
propose an elastic quantization space (including elastic bit-width,
granularity, and symmetry) to adapt to various mainstream quantization forms.
Secondly, we propose the Weight Distribution Regularization Loss (WDR-Loss) and
Group Progressive Guidance Loss (GPG-Loss) to bridge the gap between the
distributions of weights and output logits across the elastic quantization
space. Lastly, we incorporate genetic algorithms and the proposed Conditional
Quantization-Aware Accuracy Predictor (CQAP) as an estimator to quickly search
mixed-precision quantized neural networks in the supernet. Extensive experiments
demonstrate that our EQ-Net is close to or even better than its static
counterparts as well as state-of-the-art robust bit-width methods. Code is
available at https://github.com/xuke225/EQ-Net.
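The abstract describes the elastic quantization space only at a high level. As a rough illustration (our own simplification, not the authors' code), the Python sketch below samples one point in a hypothetical space of bit-width, granularity, and symmetry options and applies uniform fake quantization to a weight matrix; all names and ranges here are assumptions.

```python
import numpy as np
import random

# Hypothetical elastic quantization space: every field name here is an
# illustrative assumption, not the paper's actual configuration schema.
BIT_WIDTHS = [2, 4, 6, 8]
GRANULARITIES = ["per-tensor", "per-channel"]
SYMMETRIES = ["symmetric", "asymmetric"]

def sample_config():
    """Randomly pick one point in the elastic quantization space."""
    return {
        "bits": random.choice(BIT_WIDTHS),
        "granularity": random.choice(GRANULARITIES),
        "symmetry": random.choice(SYMMETRIES),
    }

def fake_quantize(w, cfg):
    """Uniform fake quantization of a 2-D weight matrix (out_ch, in_ch)."""
    axis = None if cfg["granularity"] == "per-tensor" else 1  # reduce over input dim
    if cfg["symmetry"] == "symmetric":
        qmax = 2 ** (cfg["bits"] - 1) - 1
        scale = np.maximum(np.abs(w).max(axis=axis, keepdims=True) / qmax, 1e-8)
        zero_point, qmin = 0.0, -qmax - 1
    else:
        w_min = w.min(axis=axis, keepdims=True)
        w_max = w.max(axis=axis, keepdims=True)
        qmax = 2 ** cfg["bits"] - 1
        scale = np.maximum((w_max - w_min) / qmax, 1e-8)
        zero_point, qmin = np.round(-w_min / scale), 0
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

w = np.random.randn(8, 16).astype(np.float32)
cfg = sample_config()
print(cfg, float(np.abs(w - fake_quantize(w, cfg)).mean()))
```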
Related papers
- Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting [13.270381125055275]
We propose a coarse & fine weight splitting (CFWS) method to reduce the quantization error of weights.
We develop an improved KL metric to determine optimal quantization scales for activation.
For example, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss.
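The summary does not spell out how the coarse & fine split is performed. As one loose, hedged reading, the sketch below separates large-magnitude "coarse" weights from the remaining "fine" weights and quantizes each part with its own scale; the percentile threshold and function names are our assumptions.

```python
import numpy as np

def uniform_quant(x, bits):
    """Symmetric uniform fake quantization with a single scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(np.abs(x).max() / qmax, 1e-8)
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def coarse_fine_split_quant(w, bits=8, outlier_pct=99.0):
    """Illustrative coarse/fine split: quantize the large-magnitude 'coarse'
    part and the small-magnitude 'fine' part with separate scales."""
    thresh = np.percentile(np.abs(w), outlier_pct)  # hypothetical split rule
    coarse_mask = np.abs(w) > thresh
    coarse = np.where(coarse_mask, w, 0.0)
    fine = np.where(coarse_mask, 0.0, w)
    return uniform_quant(coarse, bits) + uniform_quant(fine, bits)

w = np.random.randn(64, 64) * np.random.rand(64, 64) ** 4  # heavy-tailed-ish weights
err_split = np.abs(w - coarse_fine_split_quant(w, bits=4)).mean()
err_plain = np.abs(w - uniform_quant(w, bits=4)).mean()
print(f"split: {err_split:.5f}  single-scale: {err_plain:.5f}")
```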
arXiv Detail & Related papers (2023-12-17T02:31:20Z)
- Distribution-Flexible Subset Quantization for Post-Quantizing Super-Resolution Networks [68.83451203841624]
This paper introduces Distribution-Flexible Subset Quantization (DFSQ), a post-training quantization method for super-resolution networks.
DFSQ conducts channel-wise normalization of the activations and applies distribution-flexible subset quantization (SQ).
It achieves comparable performance to full-precision counterparts on 6- and 8-bit quantization, and incurs only a 0.1 dB PSNR drop on 4-bit quantization.
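As a hedged illustration of the two steps named above (channel-wise normalization followed by subset quantization), the sketch below normalizes activations per channel and snaps each value to the nearest element of a small, hand-picked quantization subset; the subset and normalization choices are placeholders, not the paper's.

```python
import numpy as np

def dfsq_like_quantize(act, subset):
    """Channel-wise normalize a (C, H, W) activation tensor, then snap every
    value to its nearest codeword in `subset` and de-normalize back."""
    c = act.shape[0]
    flat = act.reshape(c, -1)
    mean = flat.mean(axis=1, keepdims=True)
    std = flat.std(axis=1, keepdims=True) + 1e-8
    norm = (flat - mean) / std                      # channel-wise normalization
    idx = np.abs(norm[..., None] - subset[None, None, :]).argmin(axis=-1)
    deq = subset[idx] * std + mean                  # de-normalize back
    return deq.reshape(act.shape)

# A hypothetical 2-bit, non-uniform subset of quantization levels.
subset = np.array([-1.5, -0.5, 0.5, 1.5], dtype=np.float32)
act = np.random.randn(16, 8, 8).astype(np.float32)
q = dfsq_like_quantize(act, subset)
print("mean abs error:", float(np.abs(act - q).mean()))
```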
arXiv Detail & Related papers (2023-05-10T04:19:11Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states and the Variational Monte Carlo (VMC) algorithm has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
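The summary gives little detail on how vector quantization removes redundancy. Purely as an illustration of the general idea, the sketch below snaps feature vectors to a small codebook and caches an expensive per-codeword computation so repeated, similar inputs reuse prior work; every name here, including expensive_local_term, is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((32, 16))      # hypothetical codebook of 32 codewords

def vector_quantize(x):
    """Return the index of the nearest codeword for each row of x."""
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

_cache = {}
def expensive_local_term(code_idx):
    """Stand-in for a costly local-energy contribution; computed once per codeword."""
    if code_idx not in _cache:
        _cache[code_idx] = float(np.sin(codebook[code_idx]).sum())  # dummy computation
    return _cache[code_idx]

# Many samples collapse onto few codewords, so most lookups hit the cache.
samples = rng.standard_normal((1000, 16))
codes = vector_quantize(samples)
energies = np.array([expensive_local_term(int(c)) for c in codes])
print("distinct expensive evaluations:", len(_cache), "of", len(samples))
```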
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Vertical Layering of Quantized Neural Networks for Heterogeneous Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
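The vertical-layered representation is not detailed in the summary; one natural reading is a bit-plane-style nesting in which low-precision models are prefixes of the full-precision integer weights. The sketch below follows that interpretation (ours, not necessarily the paper's) by splitting 8-bit weights into bit planes and reconstructing an approximation from only the top planes.

```python
import numpy as np

def to_bit_planes(q, bits=8):
    """Split unsigned integer weights (0..2^bits-1) into `bits` binary planes,
    most significant plane first."""
    return np.stack([(q >> (bits - 1 - k)) & 1 for k in range(bits)], axis=0)

def from_top_planes(planes, keep, bits=8):
    """Reconstruct weights from only the top `keep` bit planes (lower bits zeroed)."""
    q = np.zeros(planes.shape[1:], dtype=np.int64)
    for k in range(keep):
        q |= planes[k].astype(np.int64) << (bits - 1 - k)
    return q

q8 = np.random.randint(0, 256, size=(4, 4))   # pretend these are 8-bit quantized weights
planes = to_bit_planes(q8, bits=8)
for keep in (2, 4, 8):
    approx = from_top_planes(planes, keep)
    print(f"{keep}-bit view, mean abs error vs 8-bit: {np.abs(q8 - approx).mean():.2f}")
```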
arXiv Detail & Related papers (2022-12-10T15:57:38Z)
- Post-training Quantization for Neural Networks with Provable Guarantees [9.58246628652846]
We modify a post-training neural-network quantization method, GPFQ, that is based on a greedy path-following mechanism.
We prove that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights.
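As a simplified, hedged re-implementation of the greedy path-following idea for a single neuron, the sketch below quantizes weights one at a time so that the running quantized pre-activation tracks the full-precision one on sample data; the alphabet and shapes are illustrative.

```python
import numpy as np

def gpfq_single_neuron(w, X, alphabet):
    """Greedy path-following quantization (simplified sketch).
    w: (N,) weights, X: (m, N) sample inputs, alphabet: 1-D array of allowed levels."""
    u = np.zeros(X.shape[0])          # running residual of the pre-activation
    q = np.zeros_like(w)
    for t in range(len(w)):
        x_t = X[:, t]
        target = x_t @ (u + w[t] * x_t) / (x_t @ x_t + 1e-12)
        q[t] = alphabet[np.abs(alphabet - target).argmin()]   # nearest allowed level
        u = u + w[t] * x_t - q[t] * x_t
    return q

rng = np.random.default_rng(0)
w = rng.standard_normal(128)
X = rng.standard_normal((512, 128))
alphabet = np.linspace(-2, 2, 2 ** 4 - 1)        # a 4-bit-style uniform alphabet
q = gpfq_single_neuron(w, X, alphabet)
rel_err = np.linalg.norm(X @ w - X @ q) / np.linalg.norm(X @ w)
print("relative pre-activation error:", round(float(rel_err), 4))
```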
arXiv Detail & Related papers (2022-01-26T18:47:38Z)
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
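DropBits is described only as dropping bits rather than neurons. As a loose reading (ours, not the paper's exact formulation), the sketch below quantizes weights to integer codes and randomly zeroes individual bits of those codes, dropout-style.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropbits(w, bits=8, p=0.2):
    """Quantize w to unsigned `bits`-bit codes, randomly zero out individual
    bits of each code with probability p (a dropout-on-bits illustration),
    then dequantize."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / (2 ** bits - 1)
    codes = np.round((w - w_min) / scale).astype(np.int64)
    for k in range(bits):
        drop = rng.random(w.shape) < p                 # which weights lose bit k
        codes = np.where(drop, codes & ~(1 << k), codes)
    return codes * scale + w_min

w = rng.standard_normal((8, 8))
print("mean abs perturbation:", round(float(np.abs(w - dropbits(w)).mean()), 4))
```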
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
- BatchQuant: Quantized-for-all Architecture Search with Robust Quantizer [10.483508279350195]
BatchQuant is a robust quantizer formulation that allows fast and stable training of a compact, single-shot, mixed-precision, weight-sharing supernet.
We demonstrate the effectiveness of our method on ImageNet and achieve SOTA Top-1 accuracy under a low complexity constraint.
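The summary does not define the quantizer itself. As a hedged sketch of one plausible ingredient, the code below derives the quantization scale from robust batch statistics smoothed with an exponential moving average, so the scale stays stable as the activation distribution shifts; this is our simplified reading, not the BatchQuant algorithm.

```python
import numpy as np

class BatchStatQuantizer:
    """Illustrative 'robust quantizer': scale comes from robust batch statistics
    with an exponential moving average (our assumption, not BatchQuant itself)."""
    def __init__(self, bits=8, momentum=0.9):
        self.qmax = 2 ** (bits - 1) - 1
        self.momentum = momentum
        self.running_scale = None

    def __call__(self, x):
        batch_scale = np.quantile(np.abs(x), 0.999) / self.qmax  # robust to outliers
        if self.running_scale is None:
            self.running_scale = batch_scale
        else:
            self.running_scale = (self.momentum * self.running_scale
                                  + (1 - self.momentum) * batch_scale)
        s = self.running_scale
        return np.clip(np.round(x / s), -self.qmax, self.qmax) * s

quant = BatchStatQuantizer(bits=4)
for step in range(3):
    acts = np.random.randn(256, 64) * (1.0 + 0.5 * step)   # shifting distribution
    q = quant(acts)
    print(f"step {step}: scale={quant.running_scale:.4f}, err={np.abs(acts - q).mean():.4f}")
```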
arXiv Detail & Related papers (2021-05-19T06:56:43Z)
- One Model for All Quantization: A Quantized Network Supporting Hot-Swap Bit-Width Adjustment [36.75157407486302]
We propose a method to train a model for all quantization that supports diverse bit-widths.
We use wavelet decomposition and reconstruction to increase the diversity of weights.
Our method can achieve accuracy comparable to dedicated models trained at the same precision.
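Only the use of wavelet decomposition and reconstruction is stated. As a minimal illustration of that building block (not the paper's training scheme), the sketch below performs a one-level Haar decomposition of a weight vector and reconstructs it exactly; a hot-swap scheme could, for example, share the approximation coefficients and re-quantize the detail coefficients per bit-width.

```python
import numpy as np

def haar_decompose(w):
    """One-level Haar wavelet transform of a 1-D array with even length:
    returns (approximation, detail) coefficients."""
    even, odd = w[0::2], w[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    w = np.empty(even.size + odd.size)
    w[0::2], w[1::2] = even, odd
    return w

w = np.random.randn(16)
a, d = haar_decompose(w)
print("perfect reconstruction:", np.allclose(w, haar_reconstruct(a, d)))
```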
arXiv Detail & Related papers (2021-05-04T08:10:50Z)
- Searching for Low-Bit Weights in Quantized Neural Networks [129.8319019563356]
Quantized neural networks with low-bit weights and activations are attractive for developing AI accelerators.
We propose to regard the discrete weights in an arbitrary quantized neural network as searchable variables and use a differentiable method to search them accurately.
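As a generic, hedged sketch of treating discrete weights as searchable variables (not the authors' exact parameterization), the code below represents each weight as a softmax-weighted mixture over a fixed low-bit alphabet, optimizes the mixture logits by gradient descent, and commits to the most probable level afterwards.

```python
import torch

# Candidate low-bit levels for every weight (a hypothetical 2-bit alphabet).
levels = torch.tensor([-1.0, -0.33, 0.33, 1.0])

# Logits parameterize a soft choice over the discrete levels for each weight.
logits = torch.zeros(32, len(levels), requires_grad=True)
x = torch.randn(256, 32)
target = torch.randn(256)          # toy regression target
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    probs = torch.softmax(logits, dim=-1)
    w_soft = probs @ levels        # differentiable "expected" weight
    loss = ((x @ w_soft - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After search, commit each weight to its most probable discrete level.
w_hard = levels[logits.argmax(dim=-1)]
print("final loss with hard weights:",
      float(((x @ w_hard - target) ** 2).mean()))
```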
arXiv Detail & Related papers (2020-09-18T09:13:26Z)
- Gradient $\ell_1$ Regularization for Quantization Robustness [70.39776106458858]
We derive a simple regularization scheme that improves robustness against post-training quantization.
By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on-demand to different bit-widths.
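As a hedged sketch of the regularization scheme described above, the code below adds the l1 norm of the loss gradient with respect to the weights to the training objective via double backpropagation in PyTorch; the model, data, and coefficient are toy placeholders.

```python
import torch
import torch.nn as nn

# Penalizing ||dL/dw||_1 encourages a flat loss surface, so post-training
# quantization perturbations change the loss less.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
params = list(model.parameters())
opt = torch.optim.SGD(params, lr=1e-2)
lam = 1e-2                                # regularization strength (assumed)

x = torch.randn(64, 16)
y = torch.randn(64, 1)

for step in range(5):
    loss = nn.functional.mse_loss(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    grad_l1 = sum(g.abs().sum() for g in grads)       # ||dL/dw||_1
    total = loss + lam * grad_l1
    opt.zero_grad()
    total.backward()                                   # double backward through grads
    opt.step()
    print(f"step {step}: task loss {loss.item():.4f}, grad l1 {grad_l1.item():.2f}")
```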
arXiv Detail & Related papers (2020-02-18T12:31:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.