Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits
- URL: http://arxiv.org/abs/2311.00088v2
- Date: Fri, 28 Jun 2024 22:25:10 GMT
- Title: Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits
- Authors: Zhiyan Ding, Taehee Ko, Jiahao Yao, Lin Lin, Xiantao Li
- Abstract summary: This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm.
Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting amenable to analysis.
- Score: 4.112419132722306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational quantum algorithms rely on the optimization of parameterized quantum circuits in noisy settings. The commonly used back-propagation procedure in classical machine learning is not directly applicable in this setting due to the collapse of quantum states after measurements. Thus, gradient estimations constitute a significant overhead in gradient-based optimization of such quantum circuits. This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm. This algorithm only requires one partial derivative at each iteration. Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting that is amenable to analysis. Under this setting, the random coordinate descent algorithm exhibits the same level of stochastic stability as the full gradient approach, making it equally resilient to noise. The complexity of the random coordinate descent method is generally no worse than that of full gradient descent and can be much better for various quantum optimization problems with anisotropic Lipschitz constants. Theoretical analysis and extensive numerical experiments validate our findings.
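Below is a minimal, hedged sketch of the idea in Python. The toy cost function, noise model, step size, and the parameter-shift-style partial-derivative estimator are illustrative assumptions, not the paper's experimental setup; the point is that each iteration touches a single randomly chosen coordinate (two circuit evaluations) instead of all $d$ partial derivatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(theta, noise=0.01):
    # Toy stand-in for a noisy measured expectation value;
    # minimized at theta = 0 (mod 2*pi).
    return np.sum(1.0 - np.cos(theta)) + noise * rng.standard_normal()

def partial_derivative(theta, j):
    # One partial derivative via a parameter-shift-style rule:
    # two evaluations, versus 2*d for the full gradient.
    shift = np.zeros_like(theta)
    shift[j] = np.pi / 2
    return 0.5 * (cost(theta + shift) - cost(theta - shift))

def random_coordinate_descent(theta0, lr=0.2, iters=2000):
    theta = theta0.copy()
    for _ in range(iters):
        j = rng.integers(theta.size)  # pick one random coordinate
        theta[j] -= lr * partial_derivative(theta, j)
    return theta

theta = random_coordinate_descent(rng.uniform(-np.pi, np.pi, size=8))
print(cost(theta, noise=0.0))  # approaches 0 up to the noise floor
```

Sweeping over all coordinates at every iteration instead of sampling one recovers full gradient descent at roughly $d$ times the per-iteration measurement cost.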
Related papers
- Denoising Gradient Descent in Variational Quantum Algorithms [0.0]
We introduce an algorithm for mitigating the adverse effects of noise on gradient descent in variational quantum algorithms.
We empirically demonstrate the advantages offered by our algorithm on randomized parametrized quantum circuits.
arXiv Detail & Related papers (2024-03-06T16:15:25Z)
- Parsimonious Optimisation of Parameters in Variational Quantum Circuits [1.303764728768944]
We propose a novel Quantum-Gradient Sampling method that requires the execution of at most two circuits per iteration to update the optimisable parameters.
Our proposed method achieves convergence rates similar to classical gradient descent, and empirically outperforms gradient coordinate descent and SPSA.
arXiv Detail & Related papers (2023-06-20T18:50:18Z)
- Gradient-Free optimization algorithm for single-qubit quantum classifier [0.3314882635954752]
A gradient-free optimization algorithm is proposed to overcome the barren plateau effects encountered on quantum devices.
The proposed algorithm is demonstrated on a classification task and compared against gradient-based optimization with Adam.
The proposed gradient-free algorithm reaches high accuracy faster than Adam.
arXiv Detail & Related papers (2022-05-10T08:45:03Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Single-component gradient rules for variational quantum algorithms [1.3047205680129093]
A common bottleneck of any such algorithm is the optimization of the variational parameters.
A popular class of optimization methods relies on gradient estimates obtained by means of circuit evaluations.
This work provides a comprehensive picture of the family of gradient rules that vary parameters of quantum gates individually.
arXiv Detail & Related papers (2021-06-02T18:00:10Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Measuring Analytic Gradients of General Quantum Evolution with the Stochastic Parameter Shift Rule [0.0]
We study the problem of estimating the gradient of the function to be optimized directly from quantum measurements.
We derive a mathematically exact formula that provides an algorithm for estimating the gradient of any multi-qubit parametric quantum evolution.
Our algorithm continues to work, albeit with some approximations, even when all the available quantum gates are noisy (a sketch of the baseline parameter-shift rule it generalizes follows this entry).
arXiv Detail & Related papers (2020-05-20T18:24:11Z)
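For context, here is a minimal sketch of the standard parameter-shift rule that the stochastic rule above generalizes, for a single-qubit $R_x$ rotation measured in the $Z$ basis. The two-level simulation is an illustrative assumption, not the paper's algorithm; the rule is exact for gates generated by operators that square to the identity.

```python
import numpy as np

def rx(theta):
    # Single-qubit rotation exp(-i * theta/2 * X).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

Z = np.diag([1.0, -1.0])

def expectation(theta):
    # <0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta)
    psi = rx(theta) @ np.array([1.0, 0.0])
    return np.real(psi.conj() @ Z @ psi)

def shift_rule_grad(theta):
    # Exact derivative from two evaluations at theta +/- pi/2.
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
print(shift_rule_grad(theta), -np.sin(theta))  # both print -0.6442...
```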
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered one of the most promising approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape the energy landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance (a generic sketch of the Cross-Entropy method follows this entry).
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
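As background for the entry above, the following is a generic sketch of the Cross-Entropy method on a toy landscape; the objective, population size, and elite fraction are illustrative assumptions rather than the paper's QAOA setup. The method samples parameters from a Gaussian, keeps the best-scoring fraction, and refits the Gaussian to those elites.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(params):
    # Toy non-convex landscape standing in for a QAOA energy.
    return np.sum(np.sin(3 * params) + params ** 2)

def cross_entropy_opt(dim=4, pop=100, elite_frac=0.1, iters=30):
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.apply_along_axis(energy, 1, samples)
        elites = samples[np.argsort(scores)[:n_elite]]  # lowest energies
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

print(energy(cross_entropy_opt()))
```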
This list is automatically generated from the titles and abstracts of the papers on this site.