Generative Low-bitwidth Data Free Quantization
- URL: http://arxiv.org/abs/2003.03603v3
- Date: Mon, 10 Aug 2020 12:56:06 GMT
- Title: Generative Low-bitwidth Data Free Quantization
- Authors: Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jiezhang Cao, Chuangrun Liang, Mingkui Tan
- Abstract summary: We propose Generative Low-bitwidth Data Free Quantization (GDFQ) to remove the data dependence burden.
With the help of generated data, we can quantize a model by learning knowledge from the pre-trained model.
Our method achieves much higher accuracy on 4-bit quantization than the existing data free quantization method.
- Score: 44.613912463011545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network quantization is an effective way to compress deep models and reduce their execution latency and energy consumption, so that they can be deployed on mobile or embedded devices. Existing quantization methods require the original data for calibration or fine-tuning to achieve good performance. However, in many real-world scenarios, the data may be unavailable due to confidentiality or privacy concerns, making existing quantization methods inapplicable. Moreover, due to the absence of original data, the recently
developed generative adversarial networks (GANs) cannot be applied to generate
data. Although the full-precision model may contain rich data information, such
information alone is hard to exploit for recovering the original data or
generating new meaningful data. In this paper, we investigate a
simple-yet-effective method called Generative Low-bitwidth Data Free
Quantization (GDFQ) to remove the data dependence burden. Specifically, we
propose a knowledge matching generator to produce meaningful fake data by
exploiting classification boundary knowledge and distribution information in
the pre-trained model. With the help of generated data, we can quantize a model
by learning knowledge from the pre-trained model. Extensive experiments on
three datasets demonstrate the effectiveness of our method. More critically,
our method achieves much higher accuracy on 4-bit quantization than the
existing data free quantization method. Code is available at
https://github.com/xushoukai/GDFQ.
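To make the two training signals concrete, here is a minimal PyTorch sketch of the idea (our illustration, not the authors' implementation; see the repository above for that). The conditional generator interface `generator(z, y)`, the hook bookkeeping, and all hyper-parameters are assumptions of this sketch: the generator is trained so its fake images are classified as their sampled labels by the pre-trained model (classification-boundary knowledge) and match the batch-norm statistics stored in that model (distribution information); the low-bitwidth model then learns from the full-precision model on the fake data via cross-entropy plus distillation.

```python
# Minimal sketch of the GDFQ training signals (illustrative; not the
# authors' code). `fp_model` is the pre-trained full-precision network
# with BatchNorm2d layers; `generator(z, y)` is a conditional generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BNStatHook:
    """Records the batch mean/variance seen at a BatchNorm2d layer's input."""
    def __init__(self, bn):
        self.bn = bn
        bn.register_forward_hook(self._hook)
    def _hook(self, module, inputs, output):
        x = inputs[0]
        self.mean = x.mean(dim=(0, 2, 3))
        self.var = x.var(dim=(0, 2, 3), unbiased=False)

def generator_step(generator, fp_model, hooks, opt_g,
                   batch=64, z_dim=100, n_classes=10, bns_weight=0.1):
    z = torch.randn(batch, z_dim)
    y = torch.randint(n_classes, (batch,))
    fake = generator(z, y)
    ce = F.cross_entropy(fp_model(fake), y)  # boundary knowledge: fakes must
                                             # be classified as their labels
    bns = sum(F.mse_loss(h.mean, h.bn.running_mean) +  # distribution info:
              F.mse_loss(h.var, h.bn.running_var)      # match stored BN stats
              for h in hooks)
    loss = ce + bns_weight * bns             # weight is illustrative
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return fake.detach(), y

def quantized_step(q_model, fp_model, fake, y, opt_q, temp=4.0):
    """Fine-tune the low-bitwidth model on fake data: cross-entropy plus
    knowledge distillation from the full-precision teacher."""
    with torch.no_grad():
        teacher = fp_model(fake)
    student = q_model(fake)
    kd = F.kl_div(F.log_softmax(student / temp, dim=-1),
                  F.softmax(teacher / temp, dim=-1),
                  reduction="batchmean") * temp * temp
    loss = F.cross_entropy(student, y) + kd
    opt_q.zero_grad(); loss.backward(); opt_q.step()

# Setup: hooks = [BNStatHook(m) for m in fp_model.modules()
#                 if isinstance(m, nn.BatchNorm2d)]
```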
Related papers
- TCGU: Data-centric Graph Unlearning based on Transferable Condensation [36.670771080732486]
Transferable Condensation Graph Unlearning (TCGU) is a data-centric solution to zero-glance graph unlearning.
We show that TCGU achieves superior performance to existing GU methods in terms of model utility, unlearning efficiency, and unlearning efficacy.
arXiv Detail & Related papers (2024-10-09T02:14:40Z)
- GenQ: Quantization in Low Data Regimes with Generative Synthetic Data [28.773641633757283]
We introduce GenQ, a novel approach employing an advanced Generative AI model to generate high-resolution synthetic data.
When data availability is limited, the actual data is used to guide the synthetic data generation process.
Through rigorous experimentation, GenQ establishes new benchmarks in data-free and data-scarce quantization.
arXiv Detail & Related papers (2023-12-07T23:31:42Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
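To illustrate the projected-gradient idea in the simplest possible form, here is a toy sketch (our construction, not the paper's exact method): the forgetting update is stripped of its components along an orthonormal basis spanning retained-data gradient directions, so unlearning steps interfere minimally with what should be kept.

```python
# Toy illustration of projected-gradient unlearning (not the PGU paper's
# exact construction): remove from the forgetting gradient its components
# along directions that matter for the retained data.
import torch

def project_out(grad, basis):
    """grad: (d,) update direction; basis: (k, d) orthonormal rows spanning
    retained-data gradient directions. Returns the orthogonal component."""
    return grad - basis.T @ (basis @ grad)

# Stand-in retained-data gradients (in practice, collected from the model).
retain_grads = torch.randn(8, 1000)
q, _ = torch.linalg.qr(retain_grads.T)   # orthonormal basis of their span
basis = q.T                              # shape (8, 1000)

g_forget = torch.randn(1000)             # gradient of the unlearning loss
g_safe = project_out(g_forget, basis)    # interferes minimally with retain set
assert torch.allclose(basis @ g_safe, torch.zeros(8), atol=1e-4)
```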
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [20.389636181891515]
Deep learning models often require large amounts of data for training, leading to increased costs.
We present two data valuation metrics based on Synaptic Intelligence and gradient norms, respectively, to study redundancy in real-world image data.
Online and offline data selection algorithms are then proposed via clustering and grouping based on the examined data values.
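As a simplified example of a gradient-norm valuation metric (our illustration; the paper's metrics differ in detail), the sketch below scores each training example by the norm of the parameter gradient it induces; consistently low-scoring examples are candidates for removal as redundant.

```python
# Simplified gradient-norm data valuation (illustrative, not the paper's
# exact metric): examples whose loss gradient is small contribute little
# training signal and are candidates for removal as redundant.
import torch
import torch.nn.functional as F

def gradient_norm_scores(model, xs, ys):
    """xs: (n, ...) inputs, ys: (n,) integer labels. Returns one score per
    example; a per-sample loop is used for clarity, not speed."""
    scores = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        sq = sum(p.grad.pow(2).sum() for p in model.parameters()
                 if p.grad is not None)
        scores.append(sq.sqrt().item())
    return scores
```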
arXiv Detail & Related papers (2023-06-25T03:31:05Z)
- Post-training Model Quantization Using GANs for Synthetic Data Generation [57.40733249681334]
We investigate the use of synthetic data as a substitute for real calibration data in post-training quantization.
We compare the performance of models quantized using data generated by StyleGAN2-ADA and our pre-trained DiStyleGAN, with quantization using real data and an alternative data generation method based on fractal images.
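A minimal sketch of the calibration step that the synthetic data replaces, assuming a simple min/max observer and uint8 affine quantization (the paper's quantization pipeline differs in detail): generated images are forwarded through the model and the observed activation range is converted into a scale and zero-point.

```python
# Minimal post-training calibration sketch: estimate an activation range
# on synthetic (GAN-generated) images, then derive uint8 affine
# quantization parameters. Illustrative min/max observer only.
import torch

def calibrate(model, layer, synthetic_batches):
    lo, hi = float("inf"), float("-inf")
    def observe(module, inputs, output):
        nonlocal lo, hi
        lo = min(lo, output.min().item())
        hi = max(hi, output.max().item())
    handle = layer.register_forward_hook(observe)
    with torch.no_grad():
        for x in synthetic_batches:   # GAN samples instead of real data
            model(x)
    handle.remove()
    scale = (hi - lo) / 255.0         # q = clamp(round(x/scale) + zp, 0, 255)
    zero_point = int(round(-lo / scale))
    return scale, zero_point
```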
arXiv Detail & Related papers (2023-05-10T11:10:09Z)
- AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z)
- ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization [111.12063632743013]
We propose a new and effective data-free quantization method termed ClusterQ.
To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics.
We also incorporate the intra-class variance to solve class-wise mode collapse.
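A simplified sketch of what class-wise statistic alignment can look like (our illustration; ClusterQ itself clusters batch-norm statistics and differs in detail): generated features of each class are pulled toward a stored per-class centroid, while an intra-class variance term discourages each class from collapsing to a single mode.

```python
# Simplified class-conditional statistic alignment (illustrative; the
# actual ClusterQ method clusters BN statistics and adds an intra-class
# variance term to counter class-wise mode collapse).
import torch
import torch.nn.functional as F

def class_alignment_loss(features, labels, centroids, var_weight=0.1):
    """features: (n, d) generated features; labels: (n,) class ids;
    centroids: (c, d) per-class feature means from the pre-trained model."""
    loss = 0.0
    for c in labels.unique():
        f_c = features[labels == c]
        # align the class mean of generated features with its centroid
        loss = loss + F.mse_loss(f_c.mean(dim=0), centroids[c])
        # reward intra-class spread so the class does not collapse
        loss = loss - var_weight * f_c.var(dim=0, unbiased=False).mean()
    return loss
```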
arXiv Detail & Related papers (2022-04-30T06:58:56Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize a non-parametric k-NN density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space.
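The k-NN estimator referred to here has a standard closed form; a small NumPy sketch of it (our illustration): the density at a point is k divided by N times the volume of the ball that reaches the k-th nearest sample.

```python
# Standard k-NN density estimate (illustrative sketch):
#   p(x) ~= k / (N * V_d * r_k(x)^d),
# where r_k(x) is the distance from x to its k-th nearest sample and
# V_d is the volume of the unit d-ball.
import numpy as np
from math import gamma, pi

def knn_density(samples, queries, k=5):
    # assumes queries are held-out points; when scoring the samples
    # themselves, skip the zero self-distance (use column k, not k-1)
    n, d = samples.shape
    v_unit = pi ** (d / 2) / gamma(d / 2 + 1)            # unit d-ball volume
    dists = np.linalg.norm(queries[:, None, :] - samples[None, :, :], axis=-1)
    r_k = np.sort(dists, axis=1)[:, k - 1]               # k-th neighbour radius
    return k / (n * v_unit * r_k ** d)
```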
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
- Zero-shot Adversarial Quantization [11.722728148523366]
We propose a zero-shot adversarial quantization (ZAQ) framework, facilitating effective discrepancy estimation and knowledge transfer.
This is achieved by a novel two-level discrepancy modeling to drive a generator to synthesize informative and diverse data examples.
We conduct extensive experiments on three fundamental vision tasks, demonstrating the superiority of ZAQ over the strong zero-shot baselines.
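A simplified sketch of one adversarial round in this spirit (our illustration; the paper's two-level discrepancy modeling is reduced here to an output-level L1 discrepancy, and an unconditional generator is assumed): the generator maximizes the disagreement between the full-precision and quantized models, and the quantized model is then trained to close it.

```python
# Simplified adversarial round in the spirit of ZAQ (illustrative only;
# the paper models discrepancy at two levels, not just the outputs).
import torch
import torch.nn.functional as F

def zaq_round(generator, fp_model, q_model, opt_g, opt_q,
              z_dim=100, batch=64):
    # (1) Generator step: maximize the output discrepancy between the
    # full-precision and quantized models (find where they disagree).
    fake = generator(torch.randn(batch, z_dim))
    d = F.l1_loss(q_model(fake), fp_model(fake).detach())
    opt_g.zero_grad()
    (-d).backward()                       # ascend on the discrepancy
    opt_g.step()

    # (2) Quantized-model step: minimize the same discrepancy on fresh
    # fakes, transferring knowledge from the full-precision model.
    with torch.no_grad():
        fake = generator(torch.randn(batch, z_dim))
        target = fp_model(fake)
    loss = F.l1_loss(q_model(fake), target)
    opt_q.zero_grad()
    loss.backward()
    opt_q.step()
```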
arXiv Detail & Related papers (2021-03-29T01:33:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.