Latency considerations for stochastic optimizers in variational quantum
algorithms
- URL: http://arxiv.org/abs/2201.13438v3
- Date: Mon, 17 Oct 2022 16:21:42 GMT
- Title: Latency considerations for stochastic optimizers in variational quantum
algorithms
- Authors: Matt Menickelly, Yunsoo Ha, and Matthew Otten
- Abstract summary: Variational quantum algorithms, which have risen to prominence in the noisy intermediate-scale quantum setting, require the implementation of a stochastic optimizer on classical hardware.
To date, most research has employed algorithms based on the stochastic gradient iteration as the classical optimizer.
In this work we propose instead using stochastic optimization algorithms that yield stochastic processes emulating the dynamics of classical deterministic algorithms.
- Score: 0.02294014185517203
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational quantum algorithms, which have risen to prominence in the noisy
intermediate-scale quantum setting, require the implementation of a stochastic
optimizer on classical hardware. To date, most research has employed algorithms
based on the stochastic gradient iteration as the stochastic classical
optimizer. In this work we propose instead using stochastic optimization
algorithms that yield stochastic processes emulating the dynamics of classical
deterministic algorithms. This approach results in methods with theoretically
superior worst-case iteration complexities, at the expense of greater
per-iteration sample (shot) complexities. We investigate this trade-off both
theoretically and empirically and conclude that preferences for a choice of
stochastic optimizer should explicitly depend on a function of both latency and
shot execution times.
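
The trade-off described above, many cheap iterations versus fewer shot-hungry ones, can be made concrete with a back-of-the-envelope cost model. The sketch below is illustrative only: it assumes a per-iteration wall-clock cost of latency plus (shots × shot time), and all iteration counts, shot counts, and timings are hypothetical placeholders rather than figures from the paper.

```python
# Illustrative cost model (an assumption, not the paper's): each optimizer
# iteration pays a fixed latency plus shots * shot_time of quantum execution.

def wall_clock(iterations: int, shots_per_iteration: int,
               latency: float, shot_time: float) -> float:
    """Total wall-clock time for one optimization run under the simple model."""
    return iterations * (latency + shots_per_iteration * shot_time)

# Regime A: SGD-like optimizer -- many cheap iterations with few shots each.
# Regime B: optimizer emulating deterministic dynamics -- far fewer iterations,
#           each demanding many more shots.
for latency in (1e-3, 1.0):  # hypothetical per-iteration overheads, in seconds
    t_sgd = wall_clock(iterations=10_000, shots_per_iteration=100,
                       latency=latency, shot_time=1e-4)
    t_emu = wall_clock(iterations=500, shots_per_iteration=10_000,
                       latency=latency, shot_time=1e-4)
    print(f"latency={latency:g}s  SGD-like: {t_sgd:,.0f}s  emulating: {t_emu:,.0f}s")
```

Under these made-up numbers, the SGD-like regime wins when latency is negligible (110 s versus roughly 500 s), while the shot-hungry regime wins once each iteration carries a one-second overhead (about 1,000 s versus 10,100 s), which is the qualitative point of the abstract's conclusion.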
Related papers
- Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits [4.112419132722306]
This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm (a minimal illustrative sketch of this idea appears after this list).
Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting amenable to analysis.
arXiv Detail & Related papers (2023-10-31T18:55:45Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - On Adaptivity in Non-stationary Stochastic Optimization With Bandit
Feedback [11.208914594208654]
We show that, when the aggregate amount of function change is known a priori, a simple re-starting algorithm attains the optimal dynamic regret.
We also establish an additional result showing that any algorithm achieving good regret against stationary benchmarks with high probability could be automatically converted to an algorithm that achieves good regret against dynamic benchmarks.
arXiv Detail & Related papers (2022-10-11T16:16:34Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization analysis of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Surrogate-based optimization for variational quantum algorithms [0.0]
Variational quantum algorithms are a class of techniques intended to be used on near-term quantum computers.
We introduce the idea of learning surrogate models for variational circuits using few experimental measurements.
We then perform parameter optimization using these models as opposed to the original data (a toy sketch of this surrogate idea appears after this list).
arXiv Detail & Related papers (2022-04-12T00:15:17Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both the deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study stochastic optimization of areas under precision-recall curves (AUPRC), which are widely used for tasks with imbalanced data.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as Quantum Approximate Optimization Algorithm (QAOA) are considered as one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
- On Hyper-parameter Tuning for Stochastic Optimization Algorithms [28.88646928299302]
This paper proposes the first-ever algorithmic framework, based on reinforcement learning, for tuning the hyper-parameters of stochastic optimization algorithms.
The proposed framework can be used as a standard tool for hyper-parameter tuning in stochastic optimization algorithms.
arXiv Detail & Related papers (2020-03-04T12:29:12Z)
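
As referenced in the random coordinate descent entry above, here is a minimal, self-contained sketch in the spirit of that approach (not the cited paper's implementation). The objective is a stand-in for a shot-noisy circuit expectation value, and the shift, step size, shot count, and iteration budget are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(theta: np.ndarray, shots: int = 1000) -> float:
    """Stand-in for a circuit expectation value estimated from finite shots."""
    exact = np.sum(np.sin(theta))                          # toy 'energy'
    return exact + rng.normal(scale=1.0 / np.sqrt(shots))  # crude shot-noise model

def partial_derivative(theta: np.ndarray, j: int, shots: int) -> float:
    """Parameter-shift-style estimate of d f / d theta_j (two evaluations)."""
    shift = np.zeros_like(theta)
    shift[j] = np.pi / 2
    return 0.5 * (noisy_objective(theta + shift, shots)
                  - noisy_objective(theta - shift, shots))

def random_coordinate_descent(theta: np.ndarray, iters: int = 200,
                              lr: float = 0.1, shots: int = 1000) -> np.ndarray:
    """Update one randomly chosen coordinate per iteration: two circuit
    evaluations per step instead of 2 * len(theta) for a full gradient."""
    for _ in range(iters):
        j = rng.integers(len(theta))
        theta[j] -= lr * partial_derivative(theta, j, shots)
    return theta

theta0 = rng.uniform(-np.pi, np.pi, size=6)
print(random_coordinate_descent(theta0.copy()))
```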
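
Similarly, for the surrogate-based optimization entry, the toy sketch below fits a cheap classical model to a handful of noisy measurements and then optimizes the model instead of re-querying the device. The quadratic surrogate, sample counts, and noise scale are arbitrary assumptions for illustration, not the cited paper's method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def noisy_objective(theta: np.ndarray, shots: int = 500) -> float:
    """Stand-in for a shot-noisy circuit expectation value."""
    return np.sum(np.sin(theta)) + rng.normal(scale=1.0 / np.sqrt(shots))

# 1. Collect a small batch of noisy measurements around an initial point.
theta0 = rng.uniform(-np.pi, np.pi, size=4)
samples = theta0 + rng.normal(scale=0.3, size=(40, theta0.size))
values = np.array([noisy_objective(t) for t in samples])

# 2. Fit a quadratic surrogate g(theta) = sum_ij A_ij theta_i theta_j + b.theta + c
#    to the measured points by least squares.
def features(T: np.ndarray) -> np.ndarray:
    quad = np.einsum("ni,nj->nij", T, T).reshape(len(T), -1)
    return np.hstack([quad, T, np.ones((len(T), 1))])

coeffs, *_ = np.linalg.lstsq(features(samples), values, rcond=None)

def surrogate(theta: np.ndarray) -> float:
    return features(theta[None, :])[0] @ coeffs

# 3. Optimize the surrogate classically instead of running more circuits.
result = minimize(surrogate, theta0)
print("surrogate minimizer:", result.x)
```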