Parameter Setting in Quantum Approximate Optimization of Weighted
Problems
- URL: http://arxiv.org/abs/2305.15201v3
- Date: Thu, 11 Jan 2024 14:02:04 GMT
- Title: Parameter Setting in Quantum Approximate Optimization of Weighted
Problems
- Authors: Shree Hari Sureshbabu, Dylan Herman, Ruslan Shaydulin, Joao Basso,
Shouvanik Chakrabarti, Yue Sun, and Marco Pistoia
- Abstract summary: In this work, we develop parameter settings for QAOA applied to a general class of weighted problems.
First, we derive optimal parameters for QAOA with depth $p=1$ applied to the weighted MaxCut problem under different assumptions on the weights.
Second, for $p \geq 1$ we prove that the QAOA energy landscape for weighted MaxCut approaches that for the unweighted case under a simple rescaling of parameters.
Third, we propose a general rescaling scheme inspired by the analytical results for weighted MaxCut and demonstrate its effectiveness.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Approximate Optimization Algorithm (QAOA) is a leading candidate
algorithm for solving combinatorial optimization problems on quantum computers.
However, in many cases QAOA requires computationally intensive parameter
optimization. The challenge of parameter optimization is particularly acute in
the case of weighted problems, for which the eigenvalues of the phase operator
are non-integer and the QAOA energy landscape is not periodic. In this work, we
develop parameter setting heuristics for QAOA applied to a general class of
weighted problems. First, we derive optimal parameters for QAOA with depth
$p=1$ applied to the weighted MaxCut problem under different assumptions on the
weights. In particular, we rigorously prove the conventional wisdom that in the
average case the first local optimum near zero gives globally-optimal QAOA
parameters. Second, for $p\geq 1$ we prove that the QAOA energy landscape for
weighted MaxCut approaches that for the unweighted case under a simple
rescaling of parameters. Therefore, we can use parameters previously obtained
for unweighted MaxCut for weighted problems. Finally, we prove that for $p=1$
the QAOA objective sharply concentrates around its expectation, which means
that our parameter setting rules hold with high probability for a random
weighted instance. We numerically validate this approach on general weighted
graphs and show that on average the QAOA energy with the proposed fixed
parameters is only $1.1$ percentage points away from that with optimized
parameters. Third, we propose a general heuristic rescaling scheme inspired by
the analytical results for weighted MaxCut and demonstrate its effectiveness
using QAOA with the XY Hamming-weight-preserving mixer applied to the portfolio
optimization problem. Our heuristic improves the convergence of local
optimizers, reducing the number of iterations by 7.4x on average.
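The rescaling idea above can be illustrated with a minimal sketch: take $\gamma$ parameters pre-optimized for unweighted MaxCut and divide them by a scale statistic of the instance's edge weights. The root-mean-square weight used below is an assumption chosen for illustration; the paper derives the precise rescaling factor.

```python
import math

def rescale_gammas(unweighted_gammas, edge_weights):
    """Map gamma angles tuned for unweighted MaxCut onto a weighted
    instance by dividing by the root-mean-square edge weight.
    NOTE: the RMS statistic is an illustrative assumption, not the
    exact factor derived in the paper."""
    rms = math.sqrt(sum(w * w for w in edge_weights) / len(edge_weights))
    return [g / rms for g in unweighted_gammas]

# Hypothetical p=2 angles transferred to a weighted triangle
# with edge weights 1.0, 2.0, 2.0 (RMS weight = sqrt(3)).
gammas = [0.616, 0.786]   # illustrative unweighted-MaxCut values
weights = [1.0, 2.0, 2.0]
print(rescale_gammas(gammas, weights))
```

The $\beta$ (mixer) angles are left unchanged in this sketch, since only the phase operator's eigenvalues depend on the weights.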
Related papers
- Graph Representation Learning for Parameter Transferability in Quantum Approximate Optimization Algorithm [1.0971022294548696]
The quantum approximate optimization algorithm (QAOA) is one of the most promising candidates for achieving quantum advantage through quantum-enhanced optimization.
In this work, we apply five different graph embedding techniques to determine good donor candidates for parameter transferability.
Using this technique, we effectively reduce the number of iterations required for parameter optimization, obtaining an approximate solution to the target problem with an order of magnitude speedup.
arXiv Detail & Related papers (2024-01-12T16:01:53Z) - A Parameter Setting Heuristic for the Quantum Alternating Operator
Ansatz [0.0]
We introduce a strategy for parameter setting suitable for common cases in which the number of distinct cost values grows only polynomially with the problem size.
We define a Classical Homogeneous Proxy for QAOA in which Perfect Homogeneity holds exactly, and which yields information describing both states and expectation values.
For up to $3$ QAOA levels we are easily able to find parameters that match approximation ratios returned by previous globally optimized approaches.
arXiv Detail & Related papers (2022-11-17T00:18:06Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
Proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - Parameter Transfer for Quantum Approximate Optimization of Weighted
MaxCut [2.8087801395910525]
We show that for a given QAOA depth, a single "typical" vector of QAOA parameters can be successfully transferred to weighted MaxCut instances.
This transfer leads to a median decrease in the approximation ratio of only 2.0 percentage points.
arXiv Detail & Related papers (2022-01-27T19:54:00Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic nonconvex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - Parameters Fixing Strategy for Quantum Approximate Optimization
Algorithm [0.0]
We propose a strategy that achieves a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained at previous depths.
We test our strategy on the MaxCut problem for certain classes of graphs, such as 3-regular graphs and Erdős–Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z) - Empirical performance bounds for quantum approximate optimization [0.27998963147546135]
Quantifying performance bounds provides insight into when QAOA may be viable for solving real-world applications.
We find QAOA exceeds the Goemans-Williamson approximation ratio bound for most graphs.
The resulting data set is presented as a benchmark for establishing empirical bounds on QAOA performance.
arXiv Detail & Related papers (2021-02-12T23:12:09Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Cross Entropy Hyperparameter Optimization for Constrained Problem
Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for exploiting near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.