Optimizing Ansatz Design in QAOA for Max-cut
- URL: http://arxiv.org/abs/2106.02812v4
- Date: Mon, 28 Jun 2021 11:59:59 GMT
- Title: Optimizing Ansatz Design in QAOA for Max-cut
- Authors: Ritajit Majumdar, Dhiraj Madan, Debasmita Bhoumik, Dhinakaran
Vinayagamurthy, Shesha Raghunathan, and Susmita Sur-Kolay
- Abstract summary: CNOT is one of the primary sources of error in modern quantum computers.
In this paper, we propose two hardware-independent methods to reduce the number of CNOT gates in the circuit.
- Score: 0.9126120966154119
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Approximate Optimization Algorithm (QAOA) is studied primarily to
find approximate solutions to combinatorial optimization problems. For a graph
with $n$ vertices and $m$ edges, a depth $p$ QAOA for the Max-cut problem
requires $2\cdot m \cdot p$ CNOT gates. CNOT is one of the primary sources of
error in modern quantum computers. In this paper, we propose two
hardware-independent methods to reduce the number of CNOT gates in the circuit.
First, we present a method based on Edge Coloring of the input graph that
minimizes the number of cycles (termed the depth of the circuit) and removes up
to $\lfloor \frac{n}{2} \rfloor$ CNOT gates. Next, we describe another method
based on Depth First Search (DFS) on the input graph that removes $n-1$ CNOT
gates but moderately increases the depth of the circuit. We analytically derive
the condition under which the reduction in CNOT gates outweighs the increase in
depth, so that the error probability of the circuit is still lowered. We show that
all IBM Quantum hardware satisfies this condition. We simulate these two
methods for graphs of various sparsity with the \textit{ibmq\_manhattan} noise
model, and show that the DFS-based method outperforms the edge-coloring-based
method, which, in turn, outperforms the traditional QAOA circuit in terms of
the reduction in the number of CNOT gates, and hence the error probability of
the circuit.
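
For illustration only, the sketch below builds the standard depth-$p$ QAOA Max-cut ansatz in Qiskit, making the $2 \cdot m \cdot p$ CNOT count explicit, and uses a greedy coloring of the line graph (via NetworkX) as one simple stand-in for the edge-coloring idea; this is not the authors' implementation, and all function and variable names are illustrative.

```python
# A minimal sketch (not the authors' code), assuming Qiskit and NetworkX.
# The cost layer compiles each ZZ term e^{-i*gamma*Z_u*Z_v} as CX, RZ, CX,
# so a depth-p circuit on a graph with m edges uses exactly 2*m*p CNOTs.
import networkx as nx
from qiskit import QuantumCircuit

def qaoa_maxcut_circuit(graph: nx.Graph, gammas, betas) -> QuantumCircuit:
    n = graph.number_of_nodes()
    qc = QuantumCircuit(n)
    for q in range(n):
        qc.h(q)                        # start in uniform superposition
    for gamma, beta in zip(gammas, betas):
        for u, v in graph.edges():     # cost layer: 2 CNOTs per edge
            qc.cx(u, v)
            qc.rz(2 * gamma, v)
            qc.cx(u, v)
        for q in range(n):             # mixer layer
            qc.rx(2 * beta, q)
    return qc

def edge_color_classes(graph: nx.Graph):
    # Greedy coloring of the line graph gives a proper edge coloring; edges in
    # one class share no vertex, so their ZZ blocks can run in the same cycle.
    # (An optimal edge coloring needs at most max_degree + 1 colors by Vizing's
    # theorem; the greedy one may use a few more.)
    coloring = nx.coloring.greedy_color(nx.line_graph(graph))
    classes = {}
    for edge, color in coloring.items():
        classes.setdefault(color, []).append(edge)
    return classes

G = nx.cycle_graph(4)                  # n = 4 vertices, m = 4 edges
qc = qaoa_maxcut_circuit(G, gammas=[0.8], betas=[0.4])
print(qc.count_ops().get("cx", 0))     # 2 * m * p = 2 * 4 * 1 = 8 CNOTs
print(edge_color_classes(G))           # e.g. two classes for an even cycle
```

Scheduling the cost layer one color class at a time keeps every two-qubit block within a class on disjoint qubits, so the number of classes bounds the number of cycles the cost layer needs.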
Related papers
- Reducing QAOA Circuit Depth by Factoring out Semi-Symmetries [4.958204128486634]
We show that our modified QUBO matrix $Q_{Hamilton}$ describes the same energy spectrum as the original $Q$.
Our algorithm achieved reductions in the number of couplings by up to $49\%$ and in circuit depth by up to $41\%$.
arXiv Detail & Related papers (2024-11-13T18:04:01Z) - Phantom Edges in the Problem Hamiltonian: A Method for Increasing Performance and Graph Visibility for QAOA [0.0]
We present a new QAOA ansatz that introduces only one additional parameter to the standard ansatz.
We derive a general formula for our new ansatz at $p=1$ and analytically show an improvement in the approximation ratio for cycle graphs.
arXiv Detail & Related papers (2024-11-07T22:20:01Z) - Promise of Graph Sparsification and Decomposition for Noise Reduction in QAOA: Analysis for Trapped-Ion Compilations [5.451583832235867]
We develop new approximate compilation schemes to solve the Max-Cut problem.
Results are based on principles of graph sparsification and decomposition.
We demonstrate that significant reductions in noise are obtained with our new compilation approaches.
arXiv Detail & Related papers (2024-06-20T14:00:09Z) - Matching the Statistical Query Lower Bound for $k$-Sparse Parity Problems with Sign Stochastic Gradient Descent [83.85536329832722]
We solve the $k$-sparse parity problem with sign stochastic gradient descent (SGD) on two-layer fully-connected neural networks.
We show that this approach can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube.
We then demonstrate how a trained neural network with sign SGD can effectively approximate this good network, solving the $k$-parity problem with small statistical errors.
arXiv Detail & Related papers (2024-04-18T17:57:53Z) - AltGraph: Redesigning Quantum Circuits Using Generative Graph Models for Efficient Optimization [2.089191490381739]
AltGraph is a novel search-based circuit transformation approach.
It generates equivalent quantum circuits using existing generative graph models.
It achieves on average a 37.55% reduction in the number of gates and a 37.75% reduction in the circuit depth post-transpiling.
arXiv Detail & Related papers (2024-02-23T19:01:47Z) - Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z) - Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs [111.75736569611159]
We propose the first computationally efficient horizon-free algorithm for linear mixture MDPs.
Our algorithm adapts a weighted least square estimator for the unknown transitional dynamic.
This also improves upon the best-known algorithms in this setting when $\sigma_k^2$'s are known.
arXiv Detail & Related papers (2022-05-23T17:59:18Z) - Depth Optimized Ansatz Circuit in QAOA for Max-Cut [0.9670182163018803]
We propose an $O(\Delta \cdot n^2)$ greedy algorithm, where $\Delta$ is the maximum degree of the graph, that finds a spanning tree of lower height.
We numerically show that this algorithm achieves nearly 10 times increase in the probability of success for each iteration of QAOA for Max-Cut.
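
As a rough illustration of the low-height spanning tree idea (not the paper's $O(\Delta \cdot n^2)$ greedy algorithm), a BFS tree rooted at a maximum-degree vertex already tends to have small height; the sketch below, with hypothetical names, shows this simpler heuristic.

```python
# Illustrative sketch only: a BFS spanning tree rooted at a maximum-degree
# vertex as a simple low-height heuristic; the paper's greedy algorithm is
# more involved and is not reproduced here.
from collections import deque

def bfs_spanning_tree(adj: dict[int, list[int]]) -> tuple[dict[int, int], int]:
    root = max(adj, key=lambda v: len(adj[v]))  # heuristic: highest-degree root
    parent, depth = {root: root}, {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:                 # first visit: add tree edge u-v
                parent[v], depth[v] = u, depth[u] + 1
                queue.append(v)
    return parent, max(depth.values())          # tree (as parent map) and height

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
tree, height = bfs_spanning_tree(adj)
print(height)                                   # 1: every vertex hangs off root 0
```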
arXiv Detail & Related papers (2021-10-09T19:45:12Z) - Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph networks.
We propose two restoration methods based on deep algorithm unrolling of the alternating direction method of multipliers (ADMM).
The parameters in the proposed restoration methods are trainable in an end-to-end manner.
arXiv Detail & Related papers (2021-06-30T08:57:01Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence [120.9336529957224]
In this paper, we study the gradient-free (zeroth-order) minimax optimization problem in the nonconvex-strongly-concave setting.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - Differentially Quantized Gradient Methods [53.3186247068836]
We show that Differentially Quantized Gradient Descent (DQ-GD) attains a linear contraction factor of $\max\{\sigma_{\mathrm{GD}}, \rho_n 2^{-R}\}$.
No algorithm within a certain class can converge faster than $\max\{\sigma_{\mathrm{GD}}, 2^{-R}\}$.
arXiv Detail & Related papers (2020-02-06T20:40:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.