Fixed-point Grover Adaptive Search for Quadratic Binary Optimization Problems
- URL: http://arxiv.org/abs/2311.05592v5
- Date: Sat, 19 Oct 2024 13:19:12 GMT
- Title: Fixed-point Grover Adaptive Search for Quadratic Binary Optimization Problems
- Authors: Ákos Nagy, Jaime Park, Cindy Zhang, Atithi Acharya, Alex Khan
- Abstract summary: We study a Grover-type method for Quadratic Unconstrained Binary Optimization (QUBO) problems.
We construct a marker oracle for such problems with a tuneable parameter, $\Lambda \in \left[ 1, m \right] \cap \mathbb{Z}$.
For all values of $\Lambda$, the non-Clifford gate count of these oracles is strictly lower (at least by a factor of $\sim 2$) than in previous constructions.
Some of our results are novel and useful for any method based on Fixed-point Grover Search.
- Score: 0.6524460254566903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a Grover-type method for Quadratic Unconstrained Binary Optimization (QUBO) problems. For an $n$-dimensional QUBO problem with $m$ nonzero terms, we construct a marker oracle for such problems with a tuneable parameter, $\Lambda \in \left[ 1, m \right] \cap \mathbb{Z}$. At $d \in \mathbb{Z}_+$ precision, the oracle uses $O (n + \Lambda d)$ qubits, has total depth of $O \left( \tfrac{m}{\Lambda} \log_2 (n) + \log_2 (d) \right)$, and non-Clifford depth of $O \left( \tfrac{m}{\Lambda} \right)$. Moreover, each qubit is required to be connected to at most $O \left( \log_2 (\Lambda + d) \right)$ other qubits. In the case of maximum graph cuts, as $d = 2 \left\lceil \log_2 (n) \right\rceil$ always suffices, the depth of the marker oracle can be made as shallow as $O (\log_2 (n))$. For all values of $\Lambda$, the non-Clifford gate count of these oracles is strictly lower (at least by a factor of $\sim 2$) than previous constructions. Furthermore, we introduce a novel \textit{Fixed-point Grover Adaptive Search for QUBO Problems}, using our oracle design and a hybrid Fixed-point Grover Search, motivated by the works of Boyer et al. and Li et al. This method has better performance guarantees than previous Grover Adaptive Search methods. Some of our results are novel and useful for any method based on Fixed-point Grover Search. Finally, we give a heuristic argument that, with high probability and in $O \left( \tfrac{\log_2 (n)}{\sqrt{\epsilon}} \right)$ time, this adaptive method finds a configuration that is among the best $\epsilon 2^n$ ones.
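To make the adaptive-threshold loop behind Grover Adaptive Search concrete, here is a minimal classical sketch in Python. The quantum marker oracle and fixed-point search are replaced by a hypothetical `sample_below_threshold` stand-in, and the QUBO instance, function names, and parameters are illustrative assumptions rather than the construction described in the paper.

```python
# Minimal classical sketch of the Grover Adaptive Search (GAS) loop for QUBO.
# The quantum fixed-point search against the marker oracle is replaced by a
# random-sampling stand-in; all names and parameters are illustrative.
import itertools
import random

def qubo_value(Q, x):
    """Evaluate the QUBO objective x^T Q x for a binary tuple x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def sample_below_threshold(Q, threshold, n, shots=64):
    """Stand-in for 'marker oracle + fixed-point Grover Search': return some
    configuration with objective value strictly below the threshold, or None."""
    for _ in range(shots):
        x = tuple(random.randint(0, 1) for _ in range(n))
        if qubo_value(Q, x) < threshold:
            return x
    return None

def grover_adaptive_search(Q, n, max_rounds=50):
    """Adaptive-threshold loop: keep the best configuration seen so far and
    repeatedly search for anything strictly better."""
    best_x = tuple(random.randint(0, 1) for _ in range(n))
    best_val = qubo_value(Q, best_x)
    for _ in range(max_rounds):
        candidate = sample_below_threshold(Q, best_val, n)
        if candidate is None:  # no improvement found in this round
            break
        best_x, best_val = candidate, qubo_value(Q, candidate)
    return best_x, best_val

if __name__ == "__main__":
    random.seed(0)
    n = 6
    # Random symmetric QUBO matrix with small integer weights (illustrative).
    Q = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(i):
            Q[i][j] = Q[j][i]
    x_best, val_best = grover_adaptive_search(Q, n)
    print("best configuration found:", x_best, "value:", val_best)
    # Brute-force optimum for this small instance, as a sanity check.
    opt = min(qubo_value(Q, y) for y in itertools.product((0, 1), repeat=n))
    print("brute-force optimum:", opt)
```

In the method described above, the sampling stand-in would instead be a hybrid fixed-point Grover Search run against the marker oracle, which flags exactly those configurations whose objective value lies below the current threshold.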
Related papers
- Novel oracle constructions for quantum random access memory [0.6445605125467572]
We present new designs for quantum random access memory.
We construct oracles, $\mathcal{O}_f$, with the property $\mathcal{O}_f \left| x \right\rangle_n \left| 0 \right\rangle_d = \left| x \right\rangle_n \left| f(x) \right\rangle_d$.
arXiv Detail & Related papers (2024-05-30T16:30:16Z) - Gradient Descent is Pareto-Optimal in the Oracle Complexity and Memory Tradeoff for Feasibility Problems [0.0]
We show that, to solve feasibility problems with accuracy $\epsilon$, any deterministic algorithm either uses $d^{1+\delta}$ bits of memory or must make at least $1/(d^{0.01\delta} \epsilon^{2\frac{1-\delta}{1+1.01\delta}-o(1)})$ oracle queries.
We also show that randomized algorithms either use $d^{1+\delta}$ memory or make at least $1/(d^{2\delta} \epsilon^{2(1-4\delta)-o(1)})$ queries for any admissible $\delta$.
arXiv Detail & Related papers (2024-04-10T04:15:50Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - The First Optimal Acceleration of High-Order Methods in Smooth Convex
Optimization [88.91190483500932]
We study the fundamental open question of finding the optimal high-order algorithm for solving smooth convex minimization problems.
The reason for this is that these algorithms require performing a complex binary search procedure, which makes them neither optimal nor practical.
We fix this fundamental issue by providing the first algorithm with $\mathcal{O}\left(\epsilon^{-2/(p+1)}\right)$ $p$th order oracle complexity.
arXiv Detail & Related papers (2022-05-19T16:04:40Z) - Distributed Grover's algorithm [1.9551668880584971]
We propose a distributed Grover's algorithm for computing $f$ with lower query times and a smaller number of input bits.
In particular, if $a=1$, then our distributed Grover's algorithm only needs $\lfloor \frac{\pi}{4}\sqrt{2^{n-k}} \rceil + 2t_a + 1$ queries for some $1 \leq b_i \leq a$ and $r_i \leq 2t_a + 1$.
arXiv Detail & Related papers (2022-04-22T04:05:34Z) - Finding Second-Order Stationary Point for Nonconvex-Strongly-Concave
Minimax Problem [16.689304539024036]
In this paper, we consider the non-asymptotic behavior of finding a second-order stationary point for the minimax problem.
For high-dimensional problems, we propose a method that avoids the expensive cost of a second-order oracle and solves the cubic sub-problem via gradient descent and Chebyshev expansion.
arXiv Detail & Related papers (2021-10-10T14:54:23Z) - Approximate Frank-Wolfe Algorithms over Graph-structured Support Sets [27.913569257545554]
Two popular approximation assumptions (\textit{additive} and \textit{multiplicative} gap errors) are not valid for our problem.
A new \textit{approximate dual maximization oracle} (DMO) is proposed, which approximates the inner product rather than the gap.
Our empirical results suggest that even these improved bounds are pessimistic, with significant improvement in real-world images with graph-structured sparsity.
arXiv Detail & Related papers (2021-06-29T19:39:43Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that, for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z) - Tight Quantum Lower Bound for Approximate Counting with Quantum States [49.6558487240078]
We prove tight lower bounds for the following variant of the counting problem considered by Aaronson, Kothari, Kretschmer, and Thaler (2020).
The task is to distinguish whether an input set $x \subseteq [n]$ has size either $k$ or $k' = (1+\varepsilon)k$.
arXiv Detail & Related papers (2020-02-17T10:53:50Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both \emph{compatible} with oracles, and achieves complexity bounds of $\tilde{O}\left((n^2 + n\sqrt{L/\mu}) \log(1/\epsilon)\right)$ and $O(n\sqrt{L/\epsilon})$, for $\mu > 0$ and $\mu = 0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.