Optimizing embedding-related quantum annealing parameters for reducing
hardware bias
- URL: http://arxiv.org/abs/2011.00719v2
- Date: Tue, 1 Dec 2020 23:25:54 GMT
- Title: Optimizing embedding-related quantum annealing parameters for reducing
hardware bias
- Authors: Aaron Barbosa, Elijah Pelofske, Georg Hahn, Hristo N. Djidjev
- Abstract summary: We show that parameter optimization can be done for an entire class of problems, given each instance uses a previously chosen fixed embedding.
Specifically, in the training phase, we fix an embedding E of a complete graph onto the hardware of the annealer, and then run an optimization algorithm to tune the following set of parameter values.
In the testing phase, we estimate how well the parameters computed during the training phase work on a random selection of other graphs from that class.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum annealers have been designed to propose near-optimal solutions to
NP-hard optimization problems. However, the accuracy of current annealers such
as the ones of D-Wave Systems, Inc., is limited by environmental noise and
hardware biases. One way to deal with these imperfections and to improve the
quality of the annealing results is to apply a variety of pre-processing
techniques such as spin reversal (SR), anneal offsets (AO), or chain weights
(CW). Maximizing the effectiveness of these techniques involves performing
optimizations over a large number of parameters, which would be too costly if
needed to be done for each new problem instance. In this work, we show that the
aforementioned parameter optimization can be done for an entire class of
problems, given each instance uses a previously chosen fixed embedding.
Specifically, in the training phase, we fix an embedding E of a complete graph
onto the hardware of the annealer, and then run an optimization algorithm to
tune the following set of parameter values: the set of bits to be flipped for
SR, the specific qubit offsets for AO, and the distribution of chain weights,
optimized over a set of training graphs randomly chosen from that class, where
the graphs are embedded onto the hardware using E. In the testing phase, we
estimate how well the parameters computed during the training phase work on a
random selection of other graphs from that class. We investigate graph
instances of varying densities for the Maximum Clique, Maximum Cut, and Graph
Partitioning problems. Our results indicate that, compared to their default
behavior, substantial improvements of the annealing results can be achieved by
using the optimized parameters for SR, AO, and CW.
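To make the training/testing pipeline concrete, here is a minimal Python sketch of the scheme the abstract describes: a single set of embedding-related parameters (spin-reversal flip mask, per-qubit anneal offsets, chain weight) is tuned over training graphs that all share one fixed embedding, and then reused unchanged on test graphs. The `run_annealer` callable and the random-search loop are illustrative assumptions, not the authors' actual optimizer or the D-Wave API; a real submission would go through something like the Ocean SDK's FixedEmbeddingComposite.

```python
import random

def train_parameters(embedding, training_graphs, run_annealer, iters=100, seed=0):
    """Training phase (sketch): tune one shared parameter set for a fixed embedding.
    `run_annealer(graph, embedding, params)` is a hypothetical callable returning
    the best energy found for that graph; lower is better."""
    rng = random.Random(seed)
    qubits = sorted({q for chain in embedding.values() for q in chain})

    def random_params():
        return {
            "sr_mask": {q: rng.random() < 0.5 for q in qubits},               # qubits to flip for SR
            "anneal_offsets": {q: rng.uniform(-0.05, 0.05) for q in qubits},  # AO per qubit
            "chain_weight": rng.uniform(0.5, 2.0),                            # CW (uniform here)
        }

    def avg_energy(params):
        return sum(run_annealer(g, embedding, params) for g in training_graphs) / len(training_graphs)

    best = random_params()
    best_score = avg_energy(best)
    for _ in range(iters):                   # plain random search as a stand-in optimizer
        cand = random_params()
        score = avg_energy(cand)
        if score < best_score:
            best, best_score = cand, score
    return best

def test_parameters(embedding, test_graphs, params, run_annealer):
    """Testing phase (sketch): reuse the trained parameters, unchanged, on other
    graphs from the same class embedded with the same fixed embedding."""
    return [run_annealer(g, embedding, params) for g in test_graphs]
```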
Related papers
- Graph Representation Learning for Parameter Transferability in Quantum Approximate Optimization Algorithm [1.0971022294548696]
The quantum approximate optimization algorithm (QAOA) is one of the most promising candidates for achieving quantum advantage through quantum-enhanced optimization.
In this work, we apply five different graph embedding techniques to determine good donor candidates for parameter transferability.
Using this technique, we effectively reduce the number of iterations required for parameter optimization, obtaining an approximate solution to the target problem with an order of magnitude speedup.
arXiv Detail & Related papers (2024-01-12T16:01:53Z)
- Reducing measurement costs by recycling the Hessian in adaptive variational quantum algorithms [0.0]
We propose an improved quasi-Newton optimization protocol specifically tailored to adaptive VQAs.
We implement a quasi-Newton algorithm where an approximation to the inverse Hessian matrix is continuously built and grown across the iterations of an adaptive VQA.
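The recycling idea can be illustrated with the standard BFGS update of an inverse-Hessian approximation; carrying (and padding) that matrix across iterations of a growing ansatz is the part specific to the paper, while the update below is textbook quasi-Newton rather than the authors' exact implementation.

```python
import numpy as np

def bfgs_inverse_hessian_update(H, s, y):
    """Standard BFGS update of the inverse-Hessian approximation H, given the
    parameter step s = x_new - x_old and gradient change y = g_new - g_old."""
    rho = 1.0 / float(y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def grow_inverse_hessian(H_old):
    """Illustrative "recycling": when the adaptive ansatz gains one parameter,
    pad the previous inverse Hessian instead of restarting from the identity."""
    n = H_old.shape[0]
    H_new = np.eye(n + 1)
    H_new[:n, :n] = H_old   # keep curvature information gathered so far
    return H_new
```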
arXiv Detail & Related papers (2024-01-10T14:08:04Z)
- Proactively incremental-learning QAOA [9.677961025372115]
We propose an advanced Quantum Approximate Optimization Algorithm (QAOA) based on incremental learning.
Our method has superior performance on Approximation Ratio (AR) and training time compared to prevalent works of QAOA.
arXiv Detail & Related papers (2023-11-04T02:15:26Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer steps for sampling.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]
We propose a strategy to give high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths.
We test our strategy on the Max-cut problem of certain classes of graphs such as the 3-regular graphs and the Erdős-Rényi graphs.
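As described, the strategy is essentially a warm start across circuit depths: the optimal angles from depth $p$ seed the optimization at depth $p+1$. A minimal sketch follows; the zero initialization of the newly added layer and the `optimize_qaoa` routine are assumptions made here for illustration, not necessarily the authors' exact rule.

```python
def warm_start_qaoa_params(prev_gammas, prev_betas, new_gamma=0.0, new_beta=0.0):
    """Build initial depth-(p+1) QAOA angles from the optimal depth-p angles
    by appending a guess for the new layer."""
    return prev_gammas + [new_gamma], prev_betas + [new_beta]

# Hypothetical usage, assuming `optimize_qaoa` returns optimal angles at a given depth:
# gammas, betas = optimize_qaoa(graph, p=1)
# for p in range(2, max_depth + 1):
#     gammas, betas = warm_start_qaoa_params(gammas, betas)
#     gammas, betas = optimize_qaoa(graph, p=p, init=(gammas, betas))
```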
arXiv Detail & Related papers (2021-08-11T15:44:16Z)
- Transferability of optimal QAOA parameters between random graphs [3.321726991033431]
We show that convergence of the optimal QAOA parameters around specific values can be explained and predicted based on the local properties of the graphs.
We demonstrate how optimized parameters calculated for a 6-node random graph can be successfully used without modification as nearly optimal parameters for a 64-node random graph.
arXiv Detail & Related papers (2021-06-14T15:57:47Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity as well as function query cost.
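Zeroth-order methods replace the gradient with estimates built purely from function queries; the sketch below shows a coordinate-wise finite-difference estimator with non-uniform coordinate sampling, which conveys the general idea of coordinate importance sampling but is not the paper's specific hybrid scheme.

```python
import numpy as np

def zo_coordinate_gradient(f, x, probs, n_samples=10, mu=1e-4, rng=None):
    """Estimate the gradient of f at x from function queries only.
    Coordinates are drawn according to `probs` (importance sampling) and each
    drawn coordinate gets a central finite-difference estimate."""
    rng = rng or np.random.default_rng()
    d = len(x)
    grad = np.zeros(d)
    for _ in range(n_samples):
        i = rng.choice(d, p=probs)
        e = np.zeros(d)
        e[i] = 1.0
        fd = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)   # ~ df/dx_i
        grad[i] += fd / (n_samples * probs[i])            # reweight so the estimate is unbiased
    return grad
```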
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
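A minimal sketch of that sequential idea, under the assumption that an inner variational routine `optimize_subset` (hypothetical here) optimizes only the selected parameter indices while the rest stay fixed:

```python
def blockwise_circuit_training(params, blocks, optimize_subset):
    """Optimize a parameterized circuit block by block rather than all at once.
    `blocks` is a list of index subsets, e.g. [[0, 1], [2, 3], ...]."""
    for idx in blocks:
        params = optimize_subset(params, idx)   # inner variational optimization over idx only
    return params
```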
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper sub-problems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.