A Study on Optimization Techniques for Variational Quantum Circuits in Reinforcement Learning
- URL: http://arxiv.org/abs/2405.12354v1
- Date: Mon, 20 May 2024 20:06:42 GMT
- Title: A Study on Optimization Techniques for Variational Quantum Circuits in Reinforcement Learning
- Authors: Michael Kölle, Timo Witter, Tobias Rohe, Gerhard Stenzel, Philipp Altmann, Thomas Gabor,
- Abstract summary: Researchers are focusing on variational quantum circuits (VQCs).
VQCs are hybrid algorithms that merge a parameterized quantum circuit with classical optimization techniques.
Recent studies have presented new ways of applying VQCs to reinforcement learning.
- Score: 2.7504809152812695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum Computing aims to streamline machine learning, making it more effective with fewer trainable parameters. This reduction of parameters can speed up the learning process and reduce the use of computational resources. However, in the current phase of quantum computing development, known as the noisy intermediate-scale quantum era (NISQ), learning is difficult due to a limited number of qubits and widespread quantum noise. To overcome these challenges, researchers are focusing on variational quantum circuits (VQCs). VQCs are hybrid algorithms that merge a quantum circuit, which can be adjusted through parameters, with traditional classical optimization techniques. These circuits require only a few qubits for effective learning. Recent studies have presented new ways of applying VQCs to reinforcement learning, showing promising results that warrant further exploration. This study investigates the effects of various techniques -- data re-uploading, input scaling, output scaling -- and introduces exponential learning rate decay in the quantum proximal policy optimization algorithm's actor-VQC. We assess these methods in the popular Frozen Lake and Cart Pole environments. Our focus is on their ability to reduce the number of parameters in the VQC without losing effectiveness. Our findings indicate that data re-uploading and an exponential learning rate decay significantly enhance hyperparameter stability and overall performance. While input scaling does not improve parameter efficiency, output scaling effectively manages greediness, leading to increased learning speed and robustness.
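The techniques named in the abstract fit together roughly as follows; the sketch below is a minimal illustration, assuming PennyLane as the circuit framework, and every function, variable name, and hyperparameter value is hypothetical rather than taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): a small actor-VQC with
# data re-uploading, trainable input/output scaling, and exponential learning
# rate decay. Framework choice (PennyLane) and all names are assumptions.
import numpy as np
import pennylane as qml

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def actor_vqc(inputs, weights, input_scale):
    # Data re-uploading: the (scaled) input is re-encoded before every
    # variational layer instead of only once at the start of the circuit.
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.RY(input_scale[layer, w] * inputs[w], wires=w)  # input scaling
            qml.RZ(weights[layer, w, 0], wires=w)               # trainable rotation
            qml.RY(weights[layer, w, 1], wires=w)               # trainable rotation
        for w in range(n_qubits):                                # entangling ring
            qml.CNOT(wires=[w, (w + 1) % n_qubits])
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

def policy_logits(inputs, weights, input_scale, output_scale):
    # Output scaling: a trainable factor that sharpens or flattens the softmax
    # over actions, i.e. it manages how greedy the resulting policy is.
    return output_scale * np.array(actor_vqc(inputs, weights, input_scale))

def exp_decay_lr(lr0, decay_rate, step):
    # Exponential learning rate decay: lr_t = lr0 * decay_rate ** t.
    return lr0 * decay_rate ** step

# Example call with randomly initialized (hypothetical) parameters.
rng = np.random.default_rng(0)
state = rng.uniform(-1.0, 1.0, n_qubits)
weights = rng.uniform(-np.pi, np.pi, (n_layers, n_qubits, 2))
input_scale = np.ones((n_layers, n_qubits))
logits = policy_logits(state, weights, input_scale, output_scale=5.0)
```

In a quantum PPO setup of this kind, the scaled expectation values would serve as logits for a softmax policy, and the decayed learning rate would be passed to the classical optimizer at each update step.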
Related papers
- NN-AE-VQE: Neural network parameter prediction on autoencoded variational quantum eigensolvers [1.7400502482492273]
In recent years, the field of quantum computing has become significantly more mature.
We present an auto-encoded VQE with neural-network predictions: NN-AE-VQE.
We demonstrate these methods on an $H_2$ molecule, achieving chemical accuracy.
arXiv Detail & Related papers (2024-11-23T23:09:22Z) - Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQC).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z) - Bayesian Parameterized Quantum Circuit Optimization (BPQCO): A task and hardware-dependent approach [49.89480853499917]
Variational quantum algorithms (VQA) have emerged as a promising quantum alternative for solving optimization and machine learning problems.
In this paper, we experimentally demonstrate the influence of the circuit design on the performance obtained for two classification problems.
We also study the degradation of the obtained circuits in the presence of noise when simulating real quantum computers.
arXiv Detail & Related papers (2024-04-17T11:00:12Z) - Near-Term Distributed Quantum Computation using Mean-Field Corrections and Auxiliary Qubits [77.04894470683776]
We propose near-term distributed quantum computing that involves limited information transfer and conservative entanglement production.
We build upon these concepts to produce an approximate circuit-cutting technique for the fragmented pre-training of variational quantum algorithms.
arXiv Detail & Related papers (2023-09-11T18:00:00Z) - Learning To Optimize Quantum Neural Network Without Gradients [3.9848482919377006]
We introduce a novel meta-optimization algorithm that trains a meta-optimizer network to output parameters for the quantum circuit.
We show that we achieve better-quality minima in fewer circuit evaluations than existing gradient-based algorithms on different datasets.
arXiv Detail & Related papers (2023-04-15T01:09:12Z) - Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) in the hope of using quantum advantage to speed up imitation learning (IL).
We develop two QIL algorithms, quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experiment results demonstrate that both Q-BC and Q-GAIL can achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z) - Error mitigation in variational quantum eigensolvers using tailored probabilistic machine learning [5.630204194930539]
We present a novel method that employs parametric Gaussian process regression (GPR) within an active learning framework to mitigate noise in quantum computations.
We demonstrate the effectiveness of our method on a 2-site Anderson impurity model and an 8-site Heisenberg model, using the IBM open-source quantum computing framework, Qiskit.
arXiv Detail & Related papers (2021-11-16T22:29:43Z) - Variational Quantum Optimization with Multi-Basis Encodings [62.72309460291971]
We introduce a new variational quantum algorithm that benefits from two innovations: multi-basis graph encodings and nonlinear activation functions.
These innovations result in increased optimization performance, a twofold increase in effective landscapes, and a reduction in measurement complexity.
arXiv Detail & Related papers (2021-06-24T20:16:02Z) - Gradient-free quantum optimization on NISQ devices [0.0]
We consider recent advances in weight-agnostic learning and propose a strategy that addresses the trade-off between finding appropriate circuit architectures and parameter tuning.
We investigate the use of NEAT-inspired algorithms which evaluate circuits via genetic competition and thus circumvent issues due to excessive numbers of parameters.
arXiv Detail & Related papers (2020-12-23T10:24:54Z) - Quantum circuit architecture search for variational quantum algorithms [88.71725630554758]
We propose a resource- and runtime-efficient scheme termed quantum architecture search (QAS).
QAS automatically seeks a near-optimal ansatz to balance benefits and side-effects brought by adding more noisy quantum gates.
We implement QAS on both the numerical simulator and real quantum hardware, via the IBM cloud, to accomplish data classification and quantum chemistry tasks.
arXiv Detail & Related papers (2020-10-20T12:06:27Z) - Layerwise learning for quantum neural networks [7.2237324920669055]
We show a layerwise learning strategy for parametrized quantum circuits.
The circuit depth is incrementally grown during optimization, and only subsets of parameters are updated in each training step (a minimal sketch of this schedule follows the list).
We demonstrate our approach on an image-classification task on handwritten digits, and show that layerwise learning attains an 8% lower generalization error on average.
arXiv Detail & Related papers (2020-06-26T10:44:46Z)
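The layerwise strategy described in the last entry can be sketched in a few lines; the toy cost function, step counts, and all names below are illustrative assumptions, not that paper's implementation.

```python
# Sketch of layerwise learning: circuit depth grows one layer at a time and
# only the newest layer's parameters are optimized at each stage, while
# earlier layers stay frozen. The cost function here is a toy stand-in for a
# parametrized-circuit expectation value.
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS, TOTAL_LAYERS, STEPS, LR = 4, 6, 200, 0.1

def toy_cost(params):
    # Replace with an actual circuit evaluation in practice.
    return np.sum(np.cos(np.concatenate(params)))

def finite_diff_grad(params, layer, eps=1e-3):
    # Gradient of the cost with respect to one layer's parameters only.
    grad = np.zeros_like(params[layer])
    for i in range(params[layer].size):
        shift = np.zeros_like(params[layer])
        shift.flat[i] = eps
        plus = params[:layer] + [params[layer] + shift] + params[layer + 1:]
        minus = params[:layer] + [params[layer] - shift] + params[layer + 1:]
        grad.flat[i] = (toy_cost(plus) - toy_cost(minus)) / (2 * eps)
    return grad

params = []                                              # per-layer parameter vectors
for layer in range(TOTAL_LAYERS):
    params.append(rng.uniform(-np.pi, np.pi, N_QUBITS))  # grow by one layer
    for _ in range(STEPS):                               # train only the new layer
        params[layer] = params[layer] - LR * finite_diff_grad(params, layer)
print("final toy cost:", toy_cost(params))
```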