Depth Optimized Ansatz Circuit in QAOA for Max-Cut
- URL: http://arxiv.org/abs/2110.04637v1
- Date: Sat, 9 Oct 2021 19:45:12 GMT
- Title: Depth Optimized Ansatz Circuit in QAOA for Max-Cut
- Authors: Ritajit Majumdar, Debasmita Bhoumik, Dhiraj Madan, Dhinakaran
Vinayagamurthy, Shesha Raghunathan, Susmita Sur-Kolay
- Abstract summary: We propose an $O(\Delta \cdot n^2)$ greedy algorithm, where $\Delta$ is the maximum degree of the graph, that finds a spanning tree of lower height.
We numerically show that this algorithm achieves a nearly 10-fold increase in the probability of success for each iteration of QAOA for Max-Cut.
- Score: 0.9670182163018803
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While a Quantum Approximate Optimization Algorithm (QAOA) is intended to
provide a quantum advantage in finding approximate solutions to combinatorial
optimization problems, noise in the system is a hurdle in exploiting its full
potential. Several error mitigation techniques have been studied to lessen the
effect of noise on this algorithm. Recently, Majumdar et al. proposed a Depth
First Search (DFS) based method that reduces the number of CNOT gates by $n-1$
in the ansatz design of QAOA for finding Max-Cut in a graph $G = (V, E)$,
$|V| = n$. However, this
method tends to increase the depth of the circuit, making it more prone to
relaxation error. The depth of the circuit is proportional to the height of the
DFS tree, which can be $n-1$ in the worst case. In this paper, we propose an
$O(\Delta \cdot n^2)$ greedy heuristic algorithm, where $\Delta$ is the maximum
degree of the graph, that finds a spanning tree of lower height, thus reducing
the overall depth of the circuit while still retaining the $n-1$ reduction in
the number of CNOT gates needed in the ansatz. We numerically show that this
algorithm achieves a nearly 10-fold increase in the probability of success for
each iteration of QAOA for Max-Cut. We further show that although the average
depth of the circuit produced by this heuristic algorithm still grows linearly
with $n$, our algorithm reduces the slope of the linear increase from 1 to 0.11.
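To make the depth/cost trade-off concrete, below is a minimal Python sketch of one plausible greedy strategy consistent with the stated $O(\Delta \cdot n^2)$ bound: run a breadth-first search (BFS) from every vertex and keep the root whose BFS spanning tree is shallowest. Each BFS costs $O(n + m) = O(\Delta \cdot n)$ on a connected graph, so trying all $n$ roots costs $O(\Delta \cdot n^2)$ in total. The function names and the exact greedy criterion are illustrative assumptions, not the paper's construction.

```python
from collections import deque

def bfs_tree_height(adj, root):
    """Height of the BFS spanning tree rooted at `root`.
    `adj` is an adjacency list: adj[u] lists the neighbours of vertex u."""
    dist = [-1] * len(adj)
    dist[root] = 0
    queue, height = deque([root]), 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == -1:            # v joins the tree one level below u
                dist[v] = dist[u] + 1
                height = max(height, dist[v])
                queue.append(v)
    return height

def shallowest_bfs_root(adj):
    """Hypothetical greedy criterion: try every vertex as a BFS root and keep
    the one whose spanning tree has minimum height.  The tree height governs
    the extra circuit depth, while any spanning tree retains the n-1 CNOT
    saving of the DFS-based ansatz construction."""
    n = len(adj)
    best_root, best_height = 0, n        # any tree height is at most n - 1
    for root in range(n):
        height = bfs_tree_height(adj, root)
        if height < best_height:
            best_root, best_height = root, height
    return best_root, best_height

# Example: the path graph 0-1-2-3-4.  A DFS tree is the path itself
# (height 4), whereas a BFS tree rooted at the middle vertex has height 2.
path = [[1], [0, 2], [1, 3], [2, 4], [3]]
print(shallowest_bfs_root(path))         # -> (2, 2)
```

Whatever tree the heuristic selects, the abstract's key point stands: the $n-1$ CNOT reduction depends only on having some spanning tree, while the tree's height determines how much relaxation error the added circuit depth incurs.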
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral descent, using a simple gradient step and outlier rejection.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\,\textsf{nnz}(A)\,\frac{R_1}{\ldots}\right)$
arXiv Detail & Related papers (2023-11-07T13:26:20Z) - Automatic Depth-Optimized Quantum Circuit Synthesis for Diagonal Unitary
Matrices with Asymptotically Optimal Gate Count [9.194399933498323]
It is of great importance to optimize the depth/gate-count when designing quantum circuits for specific tasks.
In this paper, we propose a depth-optimized synthesis algorithm that automatically produces a quantum circuit for any given diagonal unitary matrix.
arXiv Detail & Related papers (2022-12-02T06:58:26Z) - Even shorter quantum circuit for phase estimation on early
fault-tolerant quantum computers with applications to ground-state energy
estimation [5.746732081406236]
We develop a phase estimation method with a distinct feature.
The total cost of the algorithm satisfies the Heisenberg-limited scaling $\widetilde{\mathcal{O}}(\epsilon^{-1})$.
Our algorithm may significantly reduce the circuit depth for performing phase estimation tasks on early fault-tolerant quantum computers.
arXiv Detail & Related papers (2022-11-22T03:15:40Z) - An entanglement perspective on the quantum approximate optimization
algorithm [0.0]
We study the entanglement growth and spread resulting from randomized and optimized QAOA circuits.
We also investigate the entanglement spectrum in connection with random matrix theory.
arXiv Detail & Related papers (2022-06-14T17:37:44Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
Proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p = 2, \ldots, 6$, the level can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - Optimizing Ansatz Design in QAOA for Max-cut [0.9126120966154119]
The CNOT gate is one of the primary sources of error in modern quantum computers.
In this paper, we propose two hardware-independent methods to reduce the number of CNOT gates in the circuit.
arXiv Detail & Related papers (2021-06-05T06:43:48Z) - On the Almost Sure Convergence of Stochastic Gradient Descent in
Non-Convex Problems [75.58134963501094]
This paper analyzes the trajectories of stochastic gradient descent (SGD).
We show that SGD avoids strict saddle points/manifolds with probability $1$ for a broad range of step-size policies.
arXiv Detail & Related papers (2020-06-19T14:11:26Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Differentially Quantized Gradient Methods [53.3186247068836]
We show that Differentially Quantized Gradient Descent (DQ-GD) attains a linear contraction factor of $\max\{\sigma_{\mathrm{GD}}, \rho_n 2^{-R}\}$.
No algorithm within a certain class can converge faster than $\max\{\sigma_{\mathrm{GD}}, 2^{-R}\}$.
arXiv Detail & Related papers (2020-02-06T20:40:53Z)