Improving Convergence for Quantum Variational Classifiers using Weight
Re-Mapping
- URL: http://arxiv.org/abs/2212.14807v1
- Date: Thu, 22 Dec 2022 13:23:19 GMT
- Title: Improving Convergence for Quantum Variational Classifiers using Weight
Re-Mapping
- Authors: Michael Kölle, Alessandro Giovagnoli, Jonas Stein, Maximilian
Balthasar Mansky, Julian Hager and Claudia Linnhoff-Popien
- Abstract summary: In recent years, quantum machine learning has seen a substantial increase in the use of variational quantum circuits (VQCs).
We introduce weight re-mapping for VQCs to unambiguously map the weights to an interval of length $2\pi$.
We demonstrate that weight re-mapping increased test accuracy for the Wine dataset by $10\%$ over using unmodified weights.
- Score: 60.086820254217336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, quantum machine learning has seen a substantial increase in
the use of variational quantum circuits (VQCs). VQCs are inspired by artificial
neural networks, which achieve extraordinary performance in a wide range of AI
tasks as massively parameterized function approximators. VQCs have already
demonstrated promising results, for example in generalization and in requiring
fewer parameters to train, by utilizing the more robust algorithmic toolbox
available in quantum computing. A VQC's trainable parameters, or weights, are
usually used as angles in rotational gates, and current gradient-based training
methods do not account for that. We introduce weight re-mapping for VQCs to
unambiguously map the weights to an interval of length $2\pi$, drawing
inspiration from traditional ML, where data rescaling and normalization
techniques have demonstrated tremendous benefits in many circumstances. We
employ a set of five re-mapping functions and evaluate them on the Iris
and Wine datasets using variational classifiers as an example. Our experiments
show that weight re-mapping can improve convergence in all tested settings.
Additionally, we were able to demonstrate that weight re-mapping increased test
accuracy for the Wine dataset by $10\%$ over using unmodified weights.
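To make the re-mapping step concrete, below is a minimal sketch in Python/NumPy of how an unconstrained trainable weight can be mapped onto an interval of length $2\pi$ before it parameterizes a rotation gate. The tanh-, arctan-, and clipping-based variants are illustrative assumptions; the abstract does not enumerate the five functions actually used in the paper.

```python
import numpy as np

# Illustrative weight re-mapping functions: each maps an unconstrained trainable
# weight onto an interval of length 2*pi before it is used as a rotation angle
# in the variational circuit. These specific functions are assumptions for
# illustration, not necessarily the five evaluated in the paper.

def remap_tanh(w):
    return np.pi * np.tanh(w)          # R -> (-pi, pi)

def remap_arctan(w):
    return 2.0 * np.arctan(w)          # R -> (-pi, pi)

def remap_clip(w):
    return np.clip(w, -np.pi, np.pi)   # hard clipping to [-pi, pi]

def rotation_angles(weights, remap=remap_tanh):
    # The re-mapped values, not the raw weights, are fed to the rotation gates
    # (e.g. RY(theta)) of the variational classifier.
    return remap(np.asarray(weights, dtype=float))

raw_weights = np.array([0.3, -4.2, 7.9])
print(rotation_angles(raw_weights))    # every angle lies within (-pi, pi)
```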
Related papers
- Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQCs).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z)
- GWQ: Gradient-Aware Weight Quantization for Large Language Models [61.17678373122165]
Gradient-aware weight quantization (GWQ) is the first low-bit weight quantization approach that leverages gradients to localize outliers.
GWQ preferentially retains the weights corresponding to the top 1% of outliers at FP16 precision, while the remaining non-outlier weights are stored in a low-bit format.
On zero-shot tasks, GWQ-quantized models achieve higher accuracy than other quantization methods (a rough sketch of this outlier split appears after the related-papers list below).
arXiv Detail & Related papers (2024-10-30T11:16:04Z)
- Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting [13.270381125055275]
We propose a coarse & fine weight splitting (CFWS) method to reduce the quantization error of weights.
We develop an improved KL metric to determine optimal quantization scales for activations.
For example, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss.
arXiv Detail & Related papers (2023-12-17T02:31:20Z)
- Weight Re-Mapping for Variational Quantum Algorithms [54.854986762287126]
We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC.
arXiv Detail & Related papers (2023-06-09T09:42:21Z)
- RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers [2.114921680609289]
We propose RepQ-ViT, a novel PTQ framework for vision transformers (ViTs).
RepQ-ViT decouples the quantization and inference processes.
It can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level.
arXiv Detail & Related papers (2022-12-16T02:52:37Z)
- Tensor Ring Parametrized Variational Quantum Circuits for Large Scale Quantum Machine Learning [28.026962110693695]
We propose an algorithm that compresses the quantum state within the circuit using a tensor ring representation.
The storage and computational time increase linearly with the number of qubits and layers, compared to the exponential increase of exact simulation algorithms.
We achieve a test accuracy of 83.33% on the Iris dataset, and maxima of 99.30% and 76.31% on binary and ternary classification of the MNIST dataset, respectively.
arXiv Detail & Related papers (2022-01-21T19:54:57Z)
- Subtleties in the trainability of quantum machine learning models [0.0]
We show that gradient scaling results for Variational Quantum Algorithms can be applied to study the gradient scaling of Quantum Machine Learning models.
Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML.
arXiv Detail & Related papers (2021-10-27T20:28:53Z)
- Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks [73.29587731448345]
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations.
First, to obtain low bit-width weights, most existing methods obtain the quantized weights by performing quantization on the full-precision network weights.
Second, to obtain low bit-width activations, existing works consider all channels equally.
arXiv Detail & Related papers (2020-12-26T15:21:18Z) - Characterizing the loss landscape of variational quantum circuits [77.34726150561087]
We introduce a way to compute the Hessian of the loss function of VQCs.
We show how this information can be interpreted and compared to classical neural networks.
arXiv Detail & Related papers (2020-08-06T17:48:12Z)
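As referenced in the GWQ entry above, the following is a rough, hypothetical sketch of its outlier-splitting idea: weights whose gradient-based saliency falls in the top 1% are kept at FP16, while the rest are quantized to a low-bit grid. The function names and the uniform quantizer below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def split_and_quantize(weights, grads, bits=4, outlier_frac=0.01):
    # Gradient-aware saliency: large-gradient weights are treated as outliers.
    saliency = np.abs(grads)
    k = max(1, int(outlier_frac * weights.size))
    threshold = np.partition(saliency.ravel(), -k)[-k]
    outlier_mask = saliency >= threshold           # roughly the top 1% of weights

    # Simple uniform low-bit quantization for the non-outlier weights
    # (an assumed scheme; the actual GWQ quantizer may differ).
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    quantized = np.round((weights - w_min) / scale) * scale + w_min

    # Outliers stay at FP16 precision; everything else takes its low-bit value.
    mixed = np.where(outlier_mask, weights.astype(np.float16), quantized)
    return mixed, outlier_mask

w = np.random.randn(10_000).astype(np.float32)
g = np.random.randn(10_000).astype(np.float32)
mixed, mask = split_and_quantize(w, g)
print(int(mask.sum()), "weights kept at FP16 precision")
```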