In-Distribution Consistency Regularization Improves the Generalization of Quantization-Aware Training
- URL: http://arxiv.org/abs/2402.13497v2
- Date: Sun, 12 Jan 2025 14:15:26 GMT
- Title: In-Distribution Consistency Regularization Improves the Generalization of Quantization-Aware Training
- Authors: Junbiao Pang, Tianyang Cai, Baochang Zhang, Jiaqi Wu
- Abstract summary: We propose Consistency Regularization (CR) to improve the generalization ability of Quantization-Aware Training (QAT).
Our approach significantly outperforms current state-of-the-art QAT methods and even the full-precision (FP) counterparts.
- Score: 16.475151881506914
- License:
- Abstract: Although existing Quantization-Aware Training (QAT) methods rely heavily on knowledge distillation to guarantee performance, QAT still suffers from a severe performance drop. Experiments have shown that vanilla quantization is sensitive to perturbations of both the inputs and the weights. We therefore attribute the poor generalization of QAT to both intrinsic instability at training time and limited generalization ability at testing time. In this paper, we address both issues from a new perspective by leveraging Consistency Regularization (CR) to improve the generalization ability of QAT. Empirical results and theoretical analysis verify that CR brings good generalization ability to different network architectures and various QAT methods. Extensive experiments demonstrate that our approach significantly outperforms current state-of-the-art QAT methods and even the full-precision (FP) counterparts. On CIFAR-10, the proposed method improves over the baseline by 3.79% with ResNet18 and by 3.84% with the lightweight model MobileNet.
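The abstract does not spell out how the consistency term is implemented; purely as an illustration, a CR term can be attached to a standard QAT objective roughly as follows. The model `qat_model`, the augmentation `augment`, and the weight `lambda_cr` are assumptions for this sketch, not the authors' exact formulation.

```python
# Minimal sketch (assumption): consistency regularization added to a QAT loss.
# `qat_model` is any quantization-aware model; `augment` is a stochastic input
# perturbation; `lambda_cr` weights the consistency term.
import torch.nn.functional as F

def qat_cr_step(qat_model, x, y, augment, optimizer, lambda_cr=1.0):
    qat_model.train()
    x1, x2 = augment(x), augment(x)           # two stochastic views of the batch
    logits1, logits2 = qat_model(x1), qat_model(x2)

    task_loss = F.cross_entropy(logits1, y)   # standard supervised QAT loss

    # Consistency term: predictions on the two views should agree.
    p1 = F.log_softmax(logits1, dim=-1)
    p2 = F.softmax(logits2, dim=-1).detach()
    cr_loss = F.kl_div(p1, p2, reduction="batchmean")

    loss = task_loss + lambda_cr * cr_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Detaching the second view's predictions is one common design choice for consistency losses; whether the paper uses a symmetric or stop-gradient form is not stated in the abstract.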
Related papers
- From Reward Shaping to Q-Shaping: Achieving Unbiased Learning with LLM-Guided Knowledge [0.0]
Q-shaping is an alternative to reward shaping for incorporating domain knowledge to accelerate agent training.
We evaluated Q-shaping across 20 different environments using a large language model (LLM) as the provider.
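The summary above only says that domain knowledge is injected into Q-values rather than into rewards; a minimal tabular sketch of that idea, with an assumed LLM-filled prior table `q_prior` and blending weight `beta` (both illustrative, not from the paper), might be:

```python
# Minimal sketch (assumption): bias the bootstrap target with a prior Q estimate
# (e.g., provided by an LLM) instead of shaping the reward signal itself.
import numpy as np

def q_shaped_update(Q, q_prior, s, a, r, s_next, alpha=0.1, gamma=0.99, beta=0.5):
    # Blend the learned bootstrap value with the prior estimate; the reward r
    # is left untouched, unlike reward shaping.
    shaped_next = (1 - beta) * np.max(Q[s_next]) + beta * np.max(q_prior[s_next])
    Q[s, a] += alpha * (r + gamma * shaped_next - Q[s, a])
    return Q
```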
arXiv Detail & Related papers (2024-10-02T12:10:07Z)
- Criticality Leveraged Adversarial Training (CLAT) for Boosted Performance via Parameter Efficiency [15.211462468655329]
CLAT introduces parameter efficiency into the adversarial training process, improving both clean accuracy and adversarial robustness.
It can be applied on top of existing adversarial training methods, significantly reducing the number of trainable parameters by approximately 95%.
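The entry gives no implementation details; one plausible reading, freezing most parameters so that an existing adversarial-training recipe only fine-tunes a few "critical" layers, can be sketched as below (which layers count as critical is an assumption here, not CLAT's criterion):

```python
# Rough sketch (assumption): keep only the named "critical" layers trainable and
# freeze everything else; an existing adversarial-training recipe then updates
# only the small trainable remainder.
import torch.nn as nn

def freeze_except(model: nn.Module, critical_layers):
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(c) for c in critical_layers)

# Hypothetical usage: only the final Linear layer stays trainable.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
freeze_except(model, critical_layers=["2"])
```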
arXiv Detail & Related papers (2024-08-19T17:58:03Z)
- Augmenting Unsupervised Reinforcement Learning with Self-Reference [63.68018737038331]
Humans possess the ability to draw on past experiences explicitly when learning new tasks.
We propose the Self-Reference (SR) approach, an add-on module explicitly designed to leverage historical information.
Our approach achieves state-of-the-art results in terms of Interquartile Mean (IQM) performance and Optimality Gap reduction on the Unsupervised Reinforcement Learning Benchmark.
arXiv Detail & Related papers (2023-11-16T09:07:34Z)
- Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL [86.0987896274354]
We first identify a fundamental pattern, self-excitation, as the primary cause of Q-value estimation divergence in offline RL.
We then propose a novel Self-Excite Eigenvalue Measure (SEEM) metric to measure the evolving property of Q-network at training.
For the first time, our theory can reliably decide whether the training will diverge at an early stage.
arXiv Detail & Related papers (2023-10-06T17:57:44Z)
- Poster: Self-Supervised Quantization-Aware Knowledge Distillation [6.463799944811755]
Quantization-aware training (QAT) starts with a pre-trained full-precision model and performs quantization during retraining.
Existing QAT works require supervision from labels and suffer from accuracy loss due to reduced precision.
This paper proposes a novel Self-Supervised Quantization-Aware Knowledge Distillation framework (SQAKD).
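The entry describes distilling a quantized student from its full-precision counterpart without labels; a minimal sketch under that reading is shown below, where the temperature-scaled KL objective is an assumed choice rather than SQAKD's actual loss:

```python
# Minimal sketch (assumption): label-free distillation from a frozen full-precision
# teacher into a quantization-aware student.
import torch
import torch.nn.functional as F

def distill_step(fp_teacher, qat_student, x, optimizer, T=4.0):
    fp_teacher.eval()
    qat_student.train()
    with torch.no_grad():                      # frozen full-precision teacher
        t_logits = fp_teacher(x)
    s_logits = qat_student(x)                  # quantization-aware student
    # Label-free objective: match softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```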
arXiv Detail & Related papers (2023-09-22T23:52:58Z)
- Weight Re-Mapping for Variational Quantum Algorithms [54.854986762287126]
We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC.
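The specific re-mapping functions are not listed in the summary; as an assumed example, one such function can squash unbounded weights into [-π, π] before they serve as circuit rotation angles:

```python
# Assumed example of a weight re-mapping function for a VQC: map unconstrained
# trainable weights into a bounded interval before using them as rotation angles.
import numpy as np

def remap_tanh(weights):
    return np.pi * np.tanh(weights)   # lands in [-pi, pi]

raw_weights = np.random.randn(8)      # unconstrained parameters being optimized
angles = remap_tanh(raw_weights)      # bounded angles actually fed to the circuit
```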
arXiv Detail & Related papers (2023-06-09T09:42:21Z)
- Quantum circuit architecture search on a superconducting processor [56.04169357427682]
Variational quantum algorithms (VQAs) have shown strong evidence of provable computational advantages in diverse fields such as finance, machine learning, and chemistry.
However, the ansatz exploited in modern VQAs is incapable of balancing the tradeoff between expressivity and trainability.
We demonstrate the first proof-of-principle experiment of applying an efficient automatic ansatz design technique to enhance VQAs on an 8-qubit superconducting quantum processor.
arXiv Detail & Related papers (2022-01-04T01:53:42Z)
- QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization [116.56171113972944]
We show that carefully choosing the components of a QA-based metric is critical to performance.
Our solution improves upon the best-performing entailment-based metric and achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-12-16T00:38:35Z)
- Cross Learning in Deep Q-Networks [82.20059754270302]
We propose a novel cross Q-learning algorithm, aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods.
Our algorithm builds on double Q-learning by maintaining a set of parallel models and estimating the Q-value based on a randomly selected network.
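A rough sketch of the described target computation is given below; the ensemble interface (each element of `q_networks` maps a batch of states to per-action Q-values) and the exact target form are assumptions:

```python
# Rough sketch (assumption): bootstrap from a single, randomly selected member of
# a pool of Q-networks, in the spirit of double/cross Q-learning.
import random
import torch

def cross_q_target(q_networks, rewards, next_states, dones, gamma=0.99):
    q_sel = random.choice(q_networks)          # randomly selected network
    with torch.no_grad():
        next_q = q_sel(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones.float()) * next_q
```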
arXiv Detail & Related papers (2020-09-29T04:58:17Z)