GenQ: Quantization in Low Data Regimes with Generative Synthetic Data
- URL: http://arxiv.org/abs/2312.05272v2
- Date: Fri, 8 Mar 2024 22:15:22 GMT
- Title: GenQ: Quantization in Low Data Regimes with Generative Synthetic Data
- Authors: Yuhang Li, Youngeun Kim, Donghyun Lee, Souvik Kundu, Priyadarshini
Panda
- Abstract summary: GenQ is a novel approach employing an advanced Generative AI model to generate high-resolution synthetic data.
In cases of limited data availability, the actual data is used to guide the synthetic data generation process.
Through rigorous experimentation, GenQ establishes new benchmarks in data-free and data-scarce quantization.
- Score: 30.489005912126544
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the realm of deep neural network deployment, low-bit quantization presents
a promising avenue for enhancing computational efficiency. However, it often
hinges on the availability of training data to mitigate quantization errors, a
significant challenge when data availability is scarce or restricted due to
privacy or copyright concerns. Addressing this, we introduce GenQ, a novel
approach employing an advanced Generative AI model to generate photorealistic,
high-resolution synthetic data, overcoming the limitations of traditional
methods that struggle to accurately mimic complex objects in extensive datasets
like ImageNet. Our methodology is underscored by two robust filtering
mechanisms designed to ensure the synthetic data closely aligns with the
intrinsic characteristics of the actual training data. In cases of limited data
availability, the actual data is used to guide the synthetic data generation
process, enhancing fidelity through the inversion of learnable token
embeddings. Through rigorous experimentation, GenQ establishes new benchmarks
in data-free and data-scarce quantization, significantly outperforming existing
methods in accuracy and efficiency, thereby setting a new standard for
quantization in low data regimes.
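As an illustration of the kind of filtering the abstract describes, the sketch below keeps a synthetic image only if a pretrained classifier predicts its prompt label with high softmax confidence, and scores a batch by how closely its feature statistics match the model's stored BatchNorm statistics. The function names and the threshold are hypothetical; this is a minimal sketch of the general idea, not GenQ's actual mechanisms.

```python
import numpy as np

def confidence_filter(logits, labels, threshold=0.9):
    """Keep synthetic samples whose pretrained-classifier prediction
    matches the generation prompt's label with high confidence.
    (Illustrative filter; threshold is a hypothetical choice.)"""
    z = logits - logits.max(axis=1, keepdims=True)       # stabilize softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    pred = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    return (pred == labels) & (conf >= threshold)        # boolean keep-mask

def bn_stat_distance(features, bn_mean, bn_var):
    """Distance between a batch's feature statistics and the pretrained
    model's stored BatchNorm statistics; lower means the synthetic batch
    is better aligned with the real training distribution."""
    mu = features.mean(axis=0)
    var = features.var(axis=0)
    return np.abs(mu - bn_mean).mean() + np.abs(var - bn_var).mean()
```

A batch that passes the confidence mask but has a large BN-statistic distance would still be a poor calibration candidate, which is why two complementary filters are plausible here.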
Related papers
- Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A
Comprehensive Benchmark [56.8042116967334]
Synthetic data serves as an alternative for training machine learning models.
Ensuring that synthetic data mirrors the complex nuances of real-world data is a challenging task.
This paper explores the potential of integrating data-centric AI techniques to guide the synthetic data generation process.
arXiv Detail & Related papers (2023-10-25T20:32:02Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- Post-training Model Quantization Using GANs for Synthetic Data Generation [57.40733249681334]
We investigate the use of synthetic data as a substitute for the calibration with real data for the quantization method.
We compare the performance of models quantized using data generated by StyleGAN2-ADA and our pre-trained DiStyleGAN, with quantization using real data and an alternative data generation method based on fractal images.
arXiv Detail & Related papers (2023-05-10T11:10:09Z)
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
- ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization [111.12063632743013]
We propose a new and effective data-free quantization method termed ClusterQ.
To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics.
We also incorporate the intra-class variance to solve class-wise mode collapse.
arXiv Detail & Related papers (2022-04-30T06:58:56Z)
- Diverse Sample Generation: Pushing the Limit of Data-free Quantization [85.95032037447454]
This paper presents a generic Diverse Sample Generation scheme for the generative data-free post-training quantization and quantization-aware training.
For large-scale image classification tasks, our DSG can consistently outperform existing data-free quantization methods.
arXiv Detail & Related papers (2021-09-01T07:06:44Z)
- Towards Synthetic Multivariate Time Series Generation for Flare Forecasting [5.098461305284216]
One of the limiting factors in training data-driven, rare-event prediction algorithms is the scarcity of the events of interest.
In this study, we explore the usefulness of the conditional generative adversarial network (CGAN) as a means to perform data-informed oversampling.
arXiv Detail & Related papers (2021-05-16T22:23:23Z)
- Zero-shot Adversarial Quantization [11.722728148523366]
We propose a zero-shot adversarial quantization (ZAQ) framework, facilitating effective discrepancy estimation and knowledge transfer.
This is achieved by a novel two-level discrepancy modeling to drive a generator to synthesize informative and diverse data examples.
We conduct extensive experiments on three fundamental vision tasks, demonstrating the superiority of ZAQ over the strong zero-shot baselines.
arXiv Detail & Related papers (2021-03-29T01:33:34Z)
- Foundations of Bayesian Learning from Synthetic Data [1.6249267147413522]
We use a Bayesian paradigm to characterise the updating of model parameters when learning on synthetic data.
Recent results from general Bayesian updating support a novel and robust approach to synthetic-learning founded on decision theory.
arXiv Detail & Related papers (2020-11-16T21:49:17Z)
- Generative Low-bitwidth Data Free Quantization [44.613912463011545]
We propose Generative Low-bitwidth Data Free Quantization (GDFQ) to remove the data dependence burden.
With the help of generated data, we can quantize a model by learning knowledge from the pre-trained model.
Our method achieves much higher accuracy on 4-bit quantization than the existing data free quantization method.
arXiv Detail & Related papers (2020-03-07T16:38:34Z)
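Several of the quantization entries above (GDFQ, DSG, the GAN-based calibration paper) share the same basic step: generated samples stand in for real calibration data when choosing quantization parameters. Below is a minimal sketch of symmetric post-training fake-quantization calibrated from such samples, written under common conventions rather than any one paper's exact method.

```python
import numpy as np

def calibrate_scale(calib_batches, n_bits=8):
    # Symmetric per-tensor scale from the largest activation magnitude
    # seen in the calibration batches (real or synthetic samples).
    max_abs = max(float(np.abs(b).max()) for b in calib_batches)
    qmax = 2 ** (n_bits - 1) - 1  # e.g. 127 for 8 bits
    return max_abs / qmax

def fake_quantize(x, scale, n_bits=8):
    # Round to the integer grid, clip to the representable range,
    # then dequantize; this simulates low-bit inference in float.
    qmax = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale
```

With 8 bits and a calibration set whose largest magnitude is 1.27, the scale is 0.01, so all representable values lie on a 0.01 grid; values outside the calibrated range clip. Quantization error therefore grows when the calibration set misestimates the real activation range, which is exactly why the quality of the synthetic data matters in these methods.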
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.