Weight Re-Mapping for Variational Quantum Algorithms
- URL: http://arxiv.org/abs/2306.05776v1
- Date: Fri, 9 Jun 2023 09:42:21 GMT
- Title: Weight Re-Mapping for Variational Quantum Algorithms
- Authors: Michael Kölle, Alessandro Giovagnoli, Jonas Stein, Maximilian Balthasar Mansky, Julian Hager, Tobias Rohe, Robert Müller and Claudia Linnhoff-Popien
- Abstract summary: We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC.
- Score: 54.854986762287126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the remarkable success of artificial neural networks across a
broad spectrum of AI tasks, variational quantum circuits (VQCs) have recently
seen an upsurge in quantum machine learning applications. The promising
outcomes shown by VQCs, such as improved generalization and reduced parameter
training requirements, are attributed to the robust algorithmic capabilities of
quantum computing. However, the current gradient-based training approaches for
VQCs do not adequately accommodate the fact that trainable parameters (or
weights) are typically used as angles in rotational gates. To address this, we
extend the concept of weight re-mapping for VQCs, as introduced by Kölle et
al. (2023). This approach unambiguously maps the weights to an interval of
length $2\pi$, mirroring data rescaling techniques in conventional machine
learning that have proven to be highly beneficial in numerous scenarios. In our
study, we employ seven distinct weight re-mapping functions to assess their
impact on eight classification datasets, using variational classifiers as a
representative example. Our results indicate that weight re-mapping can enhance
the convergence speed of the VQC. We assess the efficacy of various re-mapping
functions across all datasets and measure their influence on the VQC's average
performance. Our findings indicate that weight re-mapping not only consistently
accelerates the convergence of VQCs, regardless of the specific re-mapping
function employed, but also significantly increases accuracy in certain cases.
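To make the core idea concrete, below is a minimal NumPy sketch of weight re-mapping: each function squashes an unbounded trainable weight into an interval of length $2\pi$ before it is used as a rotation angle. The specific mapping functions shown (tanh-, arctan-, and sigmoid-based) are plausible illustrations, not necessarily the seven functions evaluated in the paper.

```python
import numpy as np

# Each re-mapping squashes an unbounded trainable weight into an open
# interval of length 2*pi, so it can be used directly as a rotation angle.
def remap_tanh(w):
    return np.pi * np.tanh(w)                        # R -> (-pi, pi)

def remap_arctan(w):
    return 2.0 * np.arctan(w)                        # R -> (-pi, pi)

def remap_sigmoid(w):
    return 2.0 * np.pi / (1.0 + np.exp(-w)) - np.pi  # R -> (-pi, pi)

# Rotation gates are 2*pi-periodic up to a global phase, so confining
# weights to one period removes redundant optima from the landscape.
w = np.array([-5.0, -0.5, 0.0, 0.5, 5.0])
for f in (remap_tanh, remap_arctan, remap_sigmoid):
    print(f.__name__, np.round(f(w), 3))
```

Because rotation gates are $2\pi$-periodic up to a global phase, restricting weights to a single period loses no expressivity; this is one intuition for why re-mapping can speed up convergence.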
Related papers
- Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders [0.0]
We propose an alternative method that reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that follows it.
This method aspires to make quantum machine learning with VQCs more versatile and effective on high-dimensional datasets; a rough sketch of the idea follows this entry.
arXiv Detail & Related papers (2024-09-05T08:51:20Z)
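A hedged PyTorch sketch of the general hybrid pattern: a classical autoencoder compresses inputs to one bounded feature per qubit, so the latent vector can be rescaled into rotation angles for the quantum embedding that follows. The layer sizes, qubit count, and Tanh-bounded latent are illustrative assumptions; the paper's architecture and training objective may differ.

```python
import torch
import torch.nn as nn

N_QUBITS = 4  # assumed circuit width: one latent feature per qubit

class HybridAutoencoder(nn.Module):
    """Compress inputs to N_QUBITS features bounded in (-1, 1), ready
    to be rescaled to rotation angles for an angle embedding."""
    def __init__(self, in_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 16), nn.ReLU(),
            nn.Linear(16, N_QUBITS), nn.Tanh())
        self.decoder = nn.Sequential(
            nn.Linear(N_QUBITS, 16), nn.ReLU(),
            nn.Linear(16, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = HybridAutoencoder(in_dim=64)
x = torch.randn(8, 64)
recon, z = model(x)
angles = torch.pi * z  # latent code rescaled to embedding angles
```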
- SQUAT: Stateful Quantization-Aware Training in Recurrent Spiking Neural Networks [1.0923877073891446]
Spiking neural networks (SNNs) share the goal of enhancing efficiency, but adopt an 'event-driven' approach to reduce the power consumption of neural network inference.
This paper introduces two QAT schemes for stateful neurons: (i) a uniform quantization strategy, an established method for weight quantization, and (ii) threshold-centered quantization.
Our results show that increasing the density of quantization levels around the firing threshold improves accuracy across several benchmark datasets; a sketch of this idea follows the entry.
arXiv Detail & Related papers (2024-04-15T03:07:16Z)
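A minimal NumPy sketch contrasting the two schemes, assuming a membrane-potential range and firing threshold chosen purely for illustration; the odd-power warping used to densify levels near the threshold is an assumption, not the paper's exact construction.

```python
import numpy as np

def uniform_levels(lo, hi, n):
    """Baseline: n evenly spaced quantization levels."""
    return np.linspace(lo, hi, n)

def threshold_centered_levels(lo, hi, n, theta, power=3.0):
    """Concentrate quantization levels around the firing threshold theta.

    A uniform grid is warped through an odd power so that spacing
    shrinks near theta (illustrative choice, not the paper's scheme).
    """
    u = np.linspace(-1.0, 1.0, n)
    warped = np.sign(u) * np.abs(u) ** power  # power > 1: dense near 0
    half = max(theta - lo, hi - theta)
    return np.clip(theta + half * warped, lo, hi)

def quantize(x, levels):
    """Snap each value to the nearest available level."""
    idx = np.abs(x[:, None] - levels).argmin(axis=1)
    return levels[idx]

v = np.random.uniform(0.0, 2.0, size=8)   # mock membrane potentials
print(quantize(v, uniform_levels(0.0, 2.0, 16)))
print(quantize(v, threshold_centered_levels(0.0, 2.0, 16, theta=1.0)))
```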
- VQC-Based Reinforcement Learning with Data Re-uploading: Performance and Trainability [0.0]
Reinforcement Learning (RL) consists of designing agents that make intelligent decisions without human supervision.
Deep Q-Learning, an RL algorithm that uses deep neural networks, achieved super-human performance in some specific tasks.
It is also possible to use Variational Quantum Circuits (VQCs) as function approximators in RL algorithms; a sketch of such a circuit follows the entry.
arXiv Detail & Related papers (2024-01-21T18:00:15Z)
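A minimal PennyLane sketch of a data re-uploading VQC acting as a Q-function approximator, with one measured expectation per action. The two-qubit width, layer count, and entangling pattern are illustrative assumptions rather than the paper's ansatz.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 2, 3                 # assumed sizes
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_values(state, weights):
    """Data re-uploading VQC: the input state is re-encoded before each
    variational layer; one Z expectation per wire plays the role of
    Q(s, a) for one action."""
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.RY(state[w], wires=w)              # re-upload the input
        for w in range(n_qubits):
            qml.RZ(weights[layer, w, 0], wires=w)  # trainable rotations
            qml.RY(weights[layer, w, 1], wires=w)
        qml.CNOT(wires=[0, 1])                     # entangling gate
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(-np.pi, np.pi, (n_layers, n_qubits, 2))
print(q_values(np.array([0.1, -0.4]), weights))
```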
- Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting [13.270381125055275]
We propose a coarse & fine weight splitting (CFWS) method to reduce the quantization error of weights.
We develop an improved KL metric to determine optimal quantization scales for activations.
For example, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss; a hedged sketch of the splitting idea follows the entry.
arXiv Detail & Related papers (2023-12-17T02:31:20Z)
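The abstract does not spell out the splitting itself, so the NumPy sketch below is only one plausible reading: represent each weight tensor as a coarse quantized part plus a separately quantized fine residual, which shrinks the scale (and hence the error) of the second quantizer. Function names and bit widths are invented for illustration.

```python
import numpy as np

def uniform_quantize(x, bits):
    """Symmetric uniform quantizer with a max-abs scale."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1) + 1e-12
    q = np.round(x / scale).clip(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def coarse_fine_quantize(w, coarse_bits=4, fine_bits=4):
    """Represent w as a coarse quantized part plus a quantized residual.

    The residual is small, so its quantizer can use a much finer scale,
    cutting the overall error. Illustrative reading of 'coarse & fine
    weight splitting', not the paper's exact scheme.
    """
    coarse = uniform_quantize(w, coarse_bits)        # bulk of the weight
    fine = uniform_quantize(w - coarse, fine_bits)   # small residual
    return coarse + fine

w = np.random.randn(1024)
err_plain = np.abs(w - uniform_quantize(w, 4)).mean()
err_split = np.abs(w - coarse_fine_quantize(w, 4, 4)).mean()
print(f"4-bit uniform: {err_plain:.4f}, coarse+fine: {err_split:.4f}")
```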
- Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage; a minimal sketch of the pipeline follows the entry.
arXiv Detail & Related papers (2023-05-18T03:08:18Z)
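A minimal sketch of the classical-to-quantum pipeline using PyTorch and PennyLane's TorchLayer, assuming a frozen stand-in backbone in place of a real pre-trained generative model; the layer sizes and circuit template are illustrative.

```python
import pennylane as qml
import torch
import torch.nn as nn

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    """VQC head: embed classical features as angles, then apply a
    trainable entangling template."""
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# Stand-in for a pre-trained classical model; in practice this would be
# loaded with pre-trained weights (per the paper, a generative model)
# and frozen so that only the VQC's parameters are trained.
backbone = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                         nn.Linear(64, n_qubits), nn.Tanh())
for p in backbone.parameters():
    p.requires_grad = False

qlayer = qml.qnn.TorchLayer(circuit, {"weights": (2, n_qubits, 3)})
model = nn.Sequential(backbone, qlayer)  # classical-to-quantum pipeline
out = model(torch.randn(8, 784))         # only qlayer receives gradients
```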
- Improving Convergence for Quantum Variational Classifiers using Weight Re-Mapping [60.086820254217336]
In recent years, quantum machine learning has seen a substantial increase in the use of variational quantum circuits (VQCs).
We introduce weight re-mapping for VQCs to unambiguously map the weights to an interval of length $2\pi$.
We demonstrate that weight re-mapping increased test accuracy for the Wine dataset by $10\%$ over using unmodified weights.
arXiv Detail & Related papers (2022-12-22T13:23:19Z)
- BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation [116.26521375592759]
Quantization aims to transform high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation.
Extreme quantization (1-bit weights/1-bit activations) of compactly-designed backbone architectures results in severe performance degeneration.
This paper proposes a novel Quantization-Aware Training (QAT) method that can effectively alleviate this performance degeneration; a sketch of the basic 1-bit QAT mechanics follows the entry.
arXiv Detail & Related papers (2022-07-04T13:25:49Z)
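For context, a minimal PyTorch sketch of the standard mechanics underlying 1-bit QAT: binarize weights with sign() in the forward pass and pass gradients through with a straight-through estimator. This shows only the baseline trick; BiTAT's task-dependent aggregated transformation is not reproduced here.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """1-bit weight quantization with a straight-through estimator:
    forward uses sign(w); backward passes the gradient through
    unchanged wherever |w| <= 1, the standard trick that makes
    extreme quantization-aware training possible."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(4, 4, requires_grad=True)
loss = BinarizeSTE.apply(w).sum()
loss.backward()    # gradients flow despite the non-differentiable sign
print(w.grad)
```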
- Quantum circuit architecture search on a superconducting processor [56.04169357427682]
Variational quantum algorithms (VQAs) have shown strong evidence of achieving provable computational advantages in diverse fields such as finance, machine learning, and chemistry.
However, the ansatz exploited in modern VQAs is incapable of balancing the tradeoff between expressivity and trainability.
We demonstrate the first proof-of-principle experiment of applying an efficient automatic ansatz design technique to enhance VQAs on an 8-qubit superconducting quantum processor.
arXiv Detail & Related papers (2022-01-04T01:53:42Z)
- Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain an 81.29% top-1 accuracy using the DeiT-B model on the ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.