Learning to learn with an evolutionary strategy applied to variational
quantum algorithms
- URL: http://arxiv.org/abs/2310.17402v1
- Date: Thu, 26 Oct 2023 13:55:01 GMT
- Title: Learning to learn with an evolutionary strategy applied to variational
quantum algorithms
- Authors: Lucas Friedrich, Jonas Maziero
- Abstract summary: Variational Quantum Algorithms (VQAs) employ quantum circuits implementing a parameterized unitary $U$, with the parameters optimized by classical methods to minimize a cost function.
In this article, we introduce a novel optimization approach named ``Learning to Learn with an Evolutionary Strategy'' (LLES).
LLES treats optimization as a learning problem, utilizing recurrent neural networks to iteratively propose VQA parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational Quantum Algorithms (VQAs) employ quantum circuits implementing a parameterized unitary $U$, with the parameters optimized by classical methods to minimize a cost function. While
VQAs have found broad applications, certain challenges persist. Notably, a
significant computational burden arises during parameter optimization. The
prevailing ``parameter shift rule'' mandates a double evaluation of the cost
function for each parameter. In this article, we introduce a novel optimization
approach named ``Learning to Learn with an Evolutionary Strategy'' (LLES). LLES
unifies ``Learning to Learn'' and ``Evolutionary Strategy'' methods. ``Learning
to Learn'' treats optimization as a learning problem, utilizing recurrent
neural networks to iteratively propose VQA parameters. Conversely,
``Evolutionary Strategy'' employs random searches to estimate function
gradients. Our optimization method is applied to two distinct tasks:
determining the ground state of an Ising Hamiltonian and training a quantum
neural network. Results underscore the efficacy of this novel approach.
Additionally, we identify a key hyperparameter that significantly influences
gradient estimation using the ``Evolutionary Strategy'' method.
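For readers unfamiliar with evolutionary-strategy gradient estimation, the sketch below illustrates the general idea the abstract describes, using plain NumPy. It is a minimal illustration under stated assumptions, not the paper's LLES implementation: `toy_cost` is a hypothetical classical stand-in for a VQA cost function, and the perturbation scale `sigma` and sample count `n_samples` are simply this estimator's main knobs.

```python
import numpy as np

def toy_cost(theta):
    # Hypothetical classical stand-in for a VQA cost C(theta); in a real VQA
    # this value would come from measuring a parameterized quantum circuit.
    return np.sum(np.sin(theta) ** 2)

def es_gradient(cost, theta, sigma=0.1, n_samples=32, rng=None):
    """Antithetic evolutionary-strategy estimate of the gradient of `cost`.

    Each random direction eps costs two evaluations, C(theta + sigma*eps) and
    C(theta - sigma*eps), so the total is 2*n_samples evaluations regardless
    of the number of parameters, in contrast to the parameter-shift rule's
    two evaluations per parameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.standard_normal(theta.shape)
        diff = cost(theta + sigma * eps) - cost(theta - sigma * eps)
        grad += diff / (2.0 * sigma) * eps
    return grad / n_samples

# Plain gradient descent driven by ES gradient estimates. In LLES, as the
# abstract describes, a recurrent neural network would instead propose the
# next parameters; this loop only shows the gradient-estimation ingredient.
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, size=8)
for _ in range(200):
    theta -= 0.1 * es_gradient(toy_cost, theta, sigma=0.1, n_samples=32, rng=rng)
print("final toy cost:", toy_cost(theta))
```

With more samples the estimate approaches the true gradient but each step becomes more expensive, and the choice of perturbation scale strongly affects the quality of the estimated gradients; this is the kind of sensitivity the abstract's closing sentence points to.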
Related papers
- A Study on Optimization Techniques for Variational Quantum Circuits in Reinforcement Learning [2.7504809152812695]
Researchers are focusing on variational quantum circuits (VQCs).
VQCs are hybrid algorithms that combine a quantum circuit, which can be adjusted through parameters, with classical optimization.
Recent studies have presented new ways of applying VQCs to reinforcement learning.
arXiv Detail & Related papers (2024-05-20T20:06:42Z)
- Variational quantum algorithm for enhanced continuous variable optical phase sensing [0.0]
Variational quantum algorithms (VQAs) are hybrid quantum-classical approaches used for tackling a wide range of problems on noisy quantum devices.
We implement a variational algorithm designed for optimized parameter estimation on a continuous variable platform based on squeezed light.
arXiv Detail & Related papers (2023-12-21T14:11:05Z)
- Optimization strategies in WAHTOR algorithm for quantum computing empirical ansatz: a comparative study [0.0]
This work introduces a non-adiabatic version of the WAHTOR algorithm and compares its efficiency with three implementations.
Calculating first and second-order derivatives of the Hamiltonian at fixed VQE parameters does not introduce a prototypical QPU overload.
We find that, for Hubbard model systems, the trust-region non-adiabatic optimization is more efficient.
arXiv Detail & Related papers (2023-06-19T15:07:55Z)
- Variance-Reduced Gradient Estimation via Noise-Reuse in Online Evolution Strategies [50.10277748405355]
Noise-Reuse Evolution Strategies (NRES) are a general class of unbiased online evolution-strategy methods.
We show that NRES results in faster convergence than existing automatic differentiation (AD) and evolution strategies (ES) methods in terms of wall-clock time and number of steps across a variety of applications (see the perturbation-sampling sketch after this list).
arXiv Detail & Related papers (2023-04-21T17:53:05Z)
- Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement-learning-based zeroth-order (ZO) algorithm, ZO-RL, which learns the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that ZO-RL can effectively reduce the variance of the ZO gradient estimates by learning a sampling policy, and converges faster than existing ZO algorithms in different scenarios.
arXiv Detail & Related papers (2021-04-09T14:50:59Z)
- Gradient-free quantum optimization on NISQ devices [0.0]
We consider recent advances in weight-agnostic learning and propose a strategy that addresses the trade-off between finding appropriate circuit architectures and parameter tuning.
We investigate the use of NEAT-inspired algorithms, which evaluate circuits via genetic competition and thus circumvent issues due to excessive numbers of parameters.
arXiv Detail & Related papers (2020-12-23T10:24:54Z)
- Natural Evolutionary Strategies for Variational Quantum Computation [0.7874708385247353]
Natural evolutionary strategies (NES) are a family of gradient-free black-box optimization algorithms.
This study illustrates their use for the optimization of randomly-initialized parametrized quantum circuits (PQCs) in the region of vanishing gradients.
arXiv Detail & Related papers (2020-11-30T21:23:38Z)
- Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient [51.880464915253924]
Deep Q-learning algorithms often suffer from poor gradient estimations with an excessive variance.
This paper introduces a stochastic recursive gradient framework for updating the gradient estimates in deep Q-learning, yielding a novel algorithm called SRG-DQN.
arXiv Detail & Related papers (2020-07-25T00:54:20Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
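Several entries above (notably the noise-reuse and learned-sampling-policy papers) concern how perturbation directions are drawn for zeroth-order or evolution-strategy gradient estimates. The toy comparison below is illustrative only and does not implement any of the listed methods: it contrasts one-sided and antithetic Gaussian sampling on a made-up quadratic cost (`quad_cost`), whose exact gradient is known, to show the kind of variance differences that sampling choices produce.

```python
import numpy as np

def quad_cost(theta):
    # Made-up stand-in cost with a known exact gradient: 2 * theta.
    return np.sum(theta ** 2)

def zo_gradient(cost, theta, sigma, n_dirs, antithetic, rng):
    """Zeroth-order gradient estimate from random Gaussian directions."""
    grad = np.zeros_like(theta)
    base = cost(theta)  # reused by the one-sided (forward-difference) variant
    for _ in range(n_dirs):
        eps = rng.standard_normal(theta.shape)
        if antithetic:
            # Symmetric difference: even-order Taylor terms cancel.
            delta = (cost(theta + sigma * eps) - cost(theta - sigma * eps)) / (2.0 * sigma)
        else:
            # Forward difference: cheaper per direction, but noisier.
            delta = (cost(theta + sigma * eps) - base) / sigma
        grad += delta * eps
    return grad / n_dirs

rng = np.random.default_rng(7)
theta = rng.uniform(-1.0, 1.0, size=10)
exact = 2.0 * theta
for antithetic in (False, True):
    errors = [np.linalg.norm(zo_gradient(quad_cost, theta, 0.5, 16, antithetic, rng) - exact)
              for _ in range(500)]
    label = "antithetic" if antithetic else "one-sided"
    print(f"{label:10s} mean gradient-estimation error: {np.mean(errors):.3f}")
```

Note that the antithetic variant spends two cost evaluations per direction while the one-sided variant reuses the baseline evaluation, so a fair comparison should also account for the evaluation budget; the listed papers pursue more sophisticated routes (reusing noise across online steps, or learning the sampling distribution) to reduce this variance further.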