The role of quantum and classical correlations in shrinking algorithms for optimization
- URL: http://arxiv.org/abs/2404.17242v1
- Date: Fri, 26 Apr 2024 08:29:04 GMT
- Title: The role of quantum and classical correlations in shrinking algorithms for optimization
- Authors: Victor Fischer, Maximilian Passek, Friedrich Wagner, Jernej Rudi Finžgar, Lilly Palackal, Christian B. Mendl
- Abstract summary: We study the performance of a shrinking algorithm for combinatorial optimization problems (COPs).
We compare the performance of the algorithm equipped with correlations from the quantum approximate optimization algorithm (QAOA) as well as the classical linear programming (LP) and semi-definite programming (SDP) relaxations.
Our results indicate that LP outperforms all other approaches for low-density instances, while SDP excels for high-density problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The benefit of quantum computing for solving combinatorial optimization problems (COPs) constitutes an open research question. In this work, we study the performance of a shrinking algorithm for COPs. The algorithm leverages correlations extracted from quantum or classical subroutines to recursively simplify the problem. We compare the performance of the algorithm equipped with correlations from the quantum approximate optimization algorithm (QAOA) as well as the classical linear programming (LP) and semi-definite programming (SDP) relaxations. This allows us to benchmark the utility of QAOA correlations against established classical relaxation algorithms. We apply the recursive algorithm to MaxCut problem instances with up to a hundred vertices at different graph densities. Our results indicate that LP outperforms all other approaches for low-density instances, while SDP excels for high-density problems. Moreover, the shrinking algorithm proves to be a viable alternative to established methods of rounding LP and SDP relaxations. In addition, the recursive shrinking algorithm outperforms its bare counterparts for all three types of correlations, i.e., LP with spanning tree rounding, the Goemans-Williamson algorithm, and conventional QAOA. While the lowest depth QAOA consistently yields worse results than the SDP, our tensor network experiments show that the performance increases significantly for deeper QAOA circuits.
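To make the shrinking idea concrete, here is a rough illustrative sketch (not the authors' implementation): for MaxCut, pairwise correlations stand in for the expectation values ⟨Z_i Z_j⟩, the most strongly correlated vertex pair is repeatedly contracted with the suggested relative orientation, and the small residual instance is solved by brute force. All names and the stopping rule are hypothetical.

```python
import itertools

def cut_value(weights, signs):
    """Total weight of edges whose endpoints land on different sides."""
    return sum(w for (i, j), w in weights.items() if signs[i] != signs[j])

def shrink_and_solve(weights, correlations, vertices, stop_size=3):
    """Hypothetical shrinking pass for MaxCut: fix the relative
    orientation of the most strongly correlated pair (positive
    correlation = same side, negative = opposite sides), contract it,
    and brute-force the residual instance once it is small enough."""
    # rep[v] = (root, sign), meaning: v takes the value sign * root
    rep = {v: (v, 1) for v in vertices}

    def find(v):
        root, s = rep[v]
        if root == v:
            return v, s
        r, s2 = find(root)
        rep[v] = (r, s * s2)  # path compression
        return r, s * s2

    for (i, j), c in sorted(correlations.items(), key=lambda kv: -abs(kv[1])):
        if len({find(v)[0] for v in rep}) <= stop_size:
            break
        ri, si = find(i)
        rj, sj = find(j)
        if ri == rj:
            continue  # relative orientation already fixed
        rel = 1 if c > 0 else -1
        rep[rj] = (ri, si * sj * rel)  # contract j's class into i's

    # brute force over the surviving representatives
    roots = sorted({find(v)[0] for v in rep})
    best_val, best_signs = -1, None
    for combo in itertools.product([1, -1], repeat=len(roots)):
        root_sign = dict(zip(roots, combo))
        signs = {}
        for v in rep:
            r, s = find(v)
            signs[v] = s * root_sign[r]
        val = cut_value(weights, signs)
        if val > best_val:
            best_val, best_signs = val, signs
    return best_val, best_signs
```

On a 4-cycle with anti-correlated edges, one contraction plus brute force over three representatives recovers the optimal cut of weight 4.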
Related papers
- Performance of Parity QAOA for the Signed Max-Cut Problem [0.0]
We investigate the performance of the Quantum Approximate Optimization Algorithm on the Parity architecture (Parity QAOA).
By comparing the algorithms at fixed circuit depth, we demonstrate that Parity QAOA outperforms conventional QAOA implementations based on SWAP networks.
arXiv Detail & Related papers (2024-09-23T08:00:03Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Extending relax-and-round combinatorial optimization solvers with quantum correlations [0.0]
We introduce a relax-and-round approach embedding the quantum approximate optimization algorithm (QAOA) with $p \geq 1$ layers.
We show for many problems, including Sherrington-Kirkpatrick spin glasses, that at $p=1$ it is as accurate as its classical counterpart.
We pave the way for an overarching quantum relax-and-round framework with performance on par with some of the best classical algorithms.
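The rounding step at the heart of such relax-and-round schemes can be sketched as follows (a generic illustration, not this paper's code): given one unit vector per vertex from an SDP relaxation, slice the vectors with random hyperplanes in the Goemans-Williamson style and keep the best cut. The function name and trial count are assumptions.

```python
import numpy as np

def hyperplane_round(vectors, weights, trials=50, seed=0):
    """Goemans-Williamson-style rounding sketch: assign each vertex a
    side according to the sign of its SDP vector's projection onto a
    random hyperplane normal; keep the best cut over several trials."""
    rng = np.random.default_rng(seed)
    best_val, best_side = -1, None
    for _ in range(trials):
        r = rng.normal(size=vectors.shape[1])  # random hyperplane normal
        side = np.sign(vectors @ r)
        side[side == 0] = 1  # break exact ties deterministically
        val = sum(w for (i, j), w in weights.items() if side[i] != side[j])
        if val > best_val:
            best_val, best_side = val, side
    return best_val, best_side
```

Quantum or classical correlations enter such a pipeline by replacing or post-processing the relaxation whose vectors are rounded here.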
arXiv Detail & Related papers (2023-07-11T22:02:01Z)
- COPS: Controlled Pruning Before Training Starts [68.8204255655161]
State-of-the-art deep neural network (DNN) pruning techniques, applied one-shot before training starts, evaluate sparse architectures with the help of a single criterion -- called pruning score.
In this work we do not concentrate on a single pruning criterion, but provide a framework for combining arbitrary GSSs to create more powerful pruning strategies.
arXiv Detail & Related papers (2021-07-27T08:48:01Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Benchmark Study of Quantum Algorithms for Combinatorial Optimization: Unitary versus Dissipative [0.0]
We study the performance scaling of three quantum algorithms for optimization: the MFB-CIM, discrete adiabatic quantum computation (DAQC), and the Dürr-Hoyer algorithm for quantum minimum finding (DH-QMF).
arXiv Detail & Related papers (2021-05-07T22:35:02Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- A Heuristic Based on Randomized Greedy Algorithms for the Clustered Shortest-Path Tree Problem [2.099922236065961]
This paper introduces a new algorithm that combines the major features of randomized greedy algorithms (RGAs) and the Shortest Path Tree Algorithm (SPTA) to deal with the Clustered Shortest-Path Tree Problem (CluSPT).
To evaluate the performance of the proposed algorithm, Euclidean benchmarks are selected.
arXiv Detail & Related papers (2020-05-05T08:34:58Z)
- Algorithms for Tensor Network Contraction Ordering [0.0]
Well-designed contraction sequences can dramatically reduce the contraction cost.
We benchmark two common discrete optimization techniques to this ordering problem.
We find that the algorithms we consider consistently outperform a greedy search given equal computational resources.
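For intuition, the greedy baseline mentioned above can be sketched as a toy (not the paper's code): each tensor is modeled as a set of index labels, the cheapest pairwise contraction is performed at every step, and any index shared by the two tensors is assumed to be summed out.

```python
from itertools import combinations

def pair_cost(a, b, dims):
    """Cost of contracting index sets a and b: the product of the
    dimensions of all indices involved (shared indices counted once)."""
    cost = 1
    for idx in a | b:
        cost *= dims[idx]
    return cost

def greedy_order(tensors, dims):
    """Greedy contraction ordering: always contract the cheapest pair.
    Assumes every shared index appears in exactly two tensors."""
    tensors = [set(t) for t in tensors]
    order, total = [], 0
    while len(tensors) > 1:
        c, i, j = min((pair_cost(tensors[a], tensors[b], dims), a, b)
                      for a, b in combinations(range(len(tensors)), 2))
        order.append((i, j))  # positions at this step, not original ids
        total += c
        merged = tensors[i] ^ tensors[j]  # shared indices are summed out
        tensors = [t for k, t in enumerate(tensors) if k not in (i, j)]
        tensors.append(merged)
    return order, total
```

On a matrix chain A[a,b] B[b,c] C[c,d] with a large bond b, greedy correctly contracts A with B first, eliminating the expensive index early.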
arXiv Detail & Related papers (2020-01-15T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.