Graph neural network initialisation of quantum approximate optimisation
- URL: http://arxiv.org/abs/2111.03016v1
- Date: Thu, 4 Nov 2021 17:19:08 GMT
- Title: Graph neural network initialisation of quantum approximate optimisation
- Authors: Nishant Jain, Brian Coyle, Elham Kashefi, Niraj Kumar
- Abstract summary: We focus on the quantum approximate optimisation algorithm (QAOA) for solving the Max-Cut problem.
We address two problems in the QAOA: how to select initial parameters, and how to subsequently train the parameters to find an optimal solution.
- Score: 2.064612766965483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Approximate combinatorial optimisation has emerged as one of the most
promising application areas for quantum computers, particularly those in the
near term. In this work, we focus on the quantum approximate optimisation
algorithm (QAOA) for solving the Max-Cut problem. Specifically, we address two
problems in the QAOA: how to select initial parameters, and how to subsequently
train the parameters to find an optimal solution. For the former, we propose
graph neural networks (GNNs) as an initialisation routine for the QAOA
parameters, adding to the literature on warm-starting techniques. We show the
GNN approach generalises not only across graph instances, but also to
increasing graph sizes, a feature not available to other warm-starting
techniques. For training the QAOA, we test several optimisers for the Max-Cut
problem. These include quantum aware/agnostic optimisers proposed in the literature,
and we also incorporate machine learning techniques such as reinforcement learning and
meta-learning. With the incorporation of these initialisation and optimisation
toolkits, we demonstrate how the QAOA can be trained as an end-to-end
differentiable pipeline.
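To make the warm-starting idea concrete, below is a minimal sketch (plain PyTorch and networkx, not the authors' architecture or training setup) of a small message-passing GNN that outputs a per-node probability of sitting on one side of the cut for a random 3-regular Max-Cut instance, trained on a relaxed Max-Cut objective. Such per-node probabilities are the kind of graph-dependent output that warm-starting schemes can feed into the QAOA initial state or parameters; the network depth, hidden width, and hyperparameters here are illustrative assumptions.

```python
import networkx as nx
import torch
import torch.nn as nn

torch.manual_seed(0)

# A random 3-regular Max-Cut instance (size and seed are arbitrary choices).
G = nx.random_regular_graph(d=3, n=12, seed=0)
n = G.number_of_nodes()
A = torch.tensor(nx.to_numpy_array(G), dtype=torch.float32)  # adjacency matrix
deg = A.sum(dim=1, keepdim=True)                             # node degrees, used as input features

class SimpleGNN(nn.Module):
    """Two rounds of mean-neighbour message passing, then a node-wise readout."""
    def __init__(self, hidden=16):
        super().__init__()
        self.embed = nn.Linear(1, hidden)
        self.msg1 = nn.Linear(hidden, hidden)
        self.msg2 = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, A, deg):
        h = torch.relu(self.embed(deg))           # initial node embeddings from degree
        h = torch.relu(self.msg1((A @ h) / deg))  # message-passing round 1
        h = torch.relu(self.msg2((A @ h) / deg))  # message-passing round 2
        return torch.sigmoid(self.readout(h)).squeeze(-1)  # p_i = P(node i on side 1 of the cut)

def relaxed_cut(p, A):
    # Expected cut size if node i is placed on side 1 independently with probability p_i:
    # sum over edges (i, j) of p_i (1 - p_j) + p_j (1 - p_i).
    P = p.unsqueeze(1)
    return 0.5 * (A * (P * (1 - P.T) + P.T * (1 - P))).sum()

gnn = SimpleGNN()
opt = torch.optim.Adam(gnn.parameters(), lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = -relaxed_cut(gnn(A, deg), A)  # maximise the relaxed cut
    loss.backward()
    opt.step()

p = gnn(A, deg).detach()
print("per-node cut probabilities:", p.numpy().round(2))
print("relaxed cut value:", float(relaxed_cut(p, A)), "of", G.number_of_edges(), "edges")
```

Because both the GNN output and the QAOA parameters are differentiable quantities, a mapping of this kind can in principle be chained with the quantum circuit and trained end to end, which is the direction the abstract describes.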
Related papers
- Graph Representation Learning for Parameter Transferability in Quantum Approximate Optimization Algorithm [1.0971022294548696]
The quantum approximate optimization algorithm (QAOA) is one of the most promising candidates for achieving quantum advantage through quantum-enhanced optimization.
In this work, we apply five different graph embedding techniques to determine good donor candidates for parameter transferability.
Using this technique, we effectively reduce the number of iterations required for parameter optimization, obtaining an approximate solution to the target problem with an order of magnitude speedup.
arXiv Detail & Related papers (2024-01-12T16:01:53Z) - Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z) - Iterative-Free Quantum Approximate Optimization Algorithm Using Neural
Networks [20.051757447006043]
We propose a practical method that uses a simple, fully connected neural network to find better parameters tailored to a new given problem instance.
Our method is consistently the fastest to converge while also yielding the best final result.
arXiv Detail & Related papers (2022-08-21T14:05:11Z) - Meta-Learning Digitized-Counterdiabatic Quantum Optimization [3.0638256603183054]
We tackle the problem of finding suitable initial parameters for variational optimization by employing a meta-learning technique using recurrent neural networks.
We investigate this technique with the recently proposed digitized-counterdiabatic quantum approximate optimization algorithm (DC-QAOA).
The combination of meta-learning and DC-QAOA enables us to find optimal initial parameters for different models, such as the MaxCut problem and the Sherrington-Kirkpatrick model.
arXiv Detail & Related papers (2022-06-20T18:57:50Z) - Adiabatic Quantum Computing for Multi Object Tracking [170.8716555363907]
Multi-Object Tracking (MOT) is most often approached in the tracking-by-detection paradigm, where object detections are associated through time.
As these optimization problems are often NP-hard, they can only be solved exactly for small instances on current hardware.
We show that our approach is competitive compared with state-of-the-art optimization-based approaches, even when using off-the-shelf integer programming solvers.
arXiv Detail & Related papers (2022-02-17T18:59:20Z) - Scaling Quantum Approximate Optimization on Near-term Hardware [49.94954584453379]
We quantify scaling of the expected resource requirements by optimized circuits for hardware architectures with varying levels of connectivity.
We show that the number of measurements, and hence the total time to solution, grows exponentially in problem size and problem graph degree.
These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
arXiv Detail & Related papers (2022-01-06T21:02:30Z) - Parameters Fixing Strategy for Quantum Approximate Optimization
Algorithm [0.0]
We propose a strategy to give a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths (a toy version of this depth-by-depth warm start is sketched after this list).
We test our strategy on the Max-Cut problem for certain classes of graphs, such as 3-regular graphs and Erdős-Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z) - Improving the Quantum Approximate Optimization Algorithm with
postselection [0.0]
Combinatorial optimization is among the main applications envisioned for near-term and fault-tolerant quantum computers.
We consider a well-studied quantum algorithm for optimization: the Quantum Approximate Optimization Algorithm (QAOA) applied to the MaxCut problem on 3-regular graphs.
We derive theoretical upper and lower bounds showing that a constant (though small) increase of the fraction of satisfied edges is indeed achievable.
arXiv Detail & Related papers (2020-11-10T22:17:50Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for training the ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on a few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, we reduce the time to convergence by orders of magnitude with the proposed method.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
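As a concrete illustration of the "Parameters Fixing Strategy" entry above, the toy statevector simulation below optimises Max-Cut QAOA depth by depth, reusing the optimum found at the previous depth to initialise the next one while the new layer starts from a small arbitrary angle. Only numpy, networkx, and scipy are used; the graph, the depths swept, the 0.1 starting angle, and the COBYLA optimiser are illustrative assumptions rather than the authors' exact procedure.

```python
import itertools
import networkx as nx
import numpy as np
from scipy.optimize import minimize

# A small random 3-regular Max-Cut instance (size and seed are arbitrary).
G = nx.random_regular_graph(d=3, n=8, seed=1)
n = G.number_of_nodes()
edges = list(G.edges())

# Cut value C(z) for every computational-basis bitstring, as a length-2^n vector.
bits = np.array(list(itertools.product([0, 1], repeat=n)))
cut = np.zeros(2 ** n)
for i, j in edges:
    cut += bits[:, i] != bits[:, j]

def qaoa_expectation(params, depth):
    """Expected cut value of the depth-p QAOA state, params = (gammas, betas)."""
    gammas, betas = params[:depth], params[depth:]
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)    # |+>^n
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * cut) * state             # cost layer (diagonal)
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],    # mixer: exp(-i*beta*X) per qubit
                       [-1j * np.sin(beta), np.cos(beta)]])
        psi = state.reshape([2] * n)
        for q in range(n):
            psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
        state = psi.reshape(2 ** n)
    return float(np.real(np.vdot(state, cut * state)))

# Depth-by-depth warm start: reuse the previous optimum, give the new layer a small angle.
params = np.array([])
for depth in range(1, 4):
    prev_gammas, prev_betas = params[:depth - 1], params[depth - 1:]
    init = np.concatenate([prev_gammas, [0.1], prev_betas, [0.1]])
    res = minimize(lambda x: -qaoa_expectation(x, depth), init, method="COBYLA")
    params = res.x
    print(f"depth {depth}: expected cut {-res.fun:.3f} of {len(edges)} edges")
```

Note that padding the new layer with angle zero would reproduce the previous depth's circuit exactly, so a warm start of this form can never reduce the best value reachable at the larger depth; the small non-zero angle simply gives the optimiser something to move from.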
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.