Fast Convex Optimization with Quantum Gradient Methods
- URL: http://arxiv.org/abs/2503.17356v1
- Date: Fri, 21 Mar 2025 17:58:12 GMT
- Title: Fast Convex Optimization with Quantum Gradient Methods
- Authors: Brandon Augustino, Dylan Herman, Enrico Fontana, Junhyung Lyle Kim, Jacob Watkins, Shouvanik Chakrabarti, Marco Pistoia
- Abstract summary: We study quantum algorithms based on quantum (sub)gradient estimation using noisy evaluation oracles. We match the first-order query complexities of classical gradient descent, using only noisy evaluation oracles. We generalize these algorithms to work in non-Euclidean settings.
- Score: 2.5094874597551913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study quantum algorithms based on quantum (sub)gradient estimation using noisy evaluation oracles, and demonstrate the first dimension independent query complexities (up to poly-logarithmic factors) for zeroth-order convex optimization in both smooth and nonsmooth settings. We match the first-order query complexities of classical gradient descent, using only noisy evaluation oracles, thereby exhibiting exponential separation between quantum and classical zeroth-order optimization. Specifically, for the smooth case, zeroth-order quantum gradient descent achieves $\widetilde{\mathcal{O}}(LR^2/\varepsilon)$ and $\widetilde{\mathcal{O}}(\kappa \log(1/\varepsilon))$ query complexities, for the convex and strongly convex case respectively; for the nonsmooth case, the zeroth-order quantum subgradient method attains a query complexity of $\widetilde{\mathcal{O}}((GR/\varepsilon)^2)$. We then generalize these algorithms to work in non-Euclidean settings by using quantum (sub)gradient estimation to instantiate mirror descent, dual averaging and mirror prox. We demonstrate how our algorithm for nonsmooth optimization can be applied to solve an SDP involving $m$ constraints and $n \times n$ matrices to additive error $\varepsilon$ using $\widetilde{\mathcal{O}}((mn^2+n^{\omega})(Rr/\varepsilon)^2)$ gates, where $\omega \in [2,2.38)$ is the exponent of matrix-multiplication time and $R$ and $r$ are bounds on the primal and dual optimal solutions, respectively. Specializing to linear programs, we obtain a quantum LP solver with complexity $\widetilde{\mathcal{O}}((m+\sqrt{n})(Rr/\varepsilon)^2)$. For zero-sum games we achieve the best quantum runtimes for any $\varepsilon > 0$ when $m = \mathcal{O}(\sqrt{n})$. We obtain the best algorithm overall (quantum or classical) whenever we further impose $\varepsilon=\Omega((m+n)^{-1/4})$.
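At its core, the smooth-case algorithm is the classical gradient-descent template, with each gradient produced by a quantum gradient-estimation subroutine that consumes only noisy evaluation queries and whose cost is dimension independent up to poly-logarithmic factors. Below is a minimal classical sketch of that template, not the paper's implementation: the quantum estimator is stubbed out with a central-difference estimate plus bounded noise, and the function names, the $1/L$ step size, and the gradient-accuracy target are illustrative assumptions.

```python
import numpy as np

def estimate_gradient(f, x, eps_grad, rng):
    """Stand-in for the paper's quantum gradient estimator.

    In the paper, this step runs a quantum gradient-estimation subroutine
    against a noisy evaluation oracle for f and returns an approximate
    gradient at query cost independent of the dimension (up to polylog
    factors). Here we only emulate its *output*: a central-difference
    estimate (which classically costs 2d evaluations) plus bounded noise.
    """
    d, h = x.size, 1e-5
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g + rng.uniform(-eps_grad, eps_grad, size=d)

def zeroth_order_gd(f, x0, L, R, eps, rng=None):
    """Gradient-descent template for an L-smooth convex f whose minimizer
    lies within distance R of x0. Running T = O(L R^2 / eps) steps with
    step size 1/L drives suboptimality below eps, provided each gradient
    estimate is accurate enough (eps / (4 R) here is an illustrative
    tolerance, not the paper's exact choice).
    """
    rng = rng or np.random.default_rng(0)
    T = int(np.ceil(L * R**2 / eps))
    x = np.array(x0, dtype=float)
    for _ in range(T):
        g = estimate_gradient(f, x, eps_grad=eps / (4 * R), rng=rng)
        x -= g / L  # standard 1/L step for L-smooth objectives
    return x

# Toy usage: minimize a smooth convex quadratic.
f = lambda x: 0.5 * float(np.dot(x, x))
x_min = zeroth_order_gd(f, x0=np.ones(4), L=1.0, R=2.0, eps=1e-2)
```

The point of the quantum subroutine is precisely to replace the $\Theta(d)$ evaluation queries per step that the finite-difference stub above incurs with a dimension-independent number of noisy evaluation queries.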
Related papers
- Quantum Algorithms for Non-smooth Non-convex Optimization [30.576546266390714]
This paper considers the problem of finding a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz continuous objective.
We construct a zeroth-order quantum estimator for the surrogate oracle function.
arXiv Detail & Related papers (2024-10-21T16:52:26Z) - Quantum spectral method for gradient and Hessian estimation [4.193480001271463]
Gradient descent is one of the most basic algorithms for solving continuous optimization problems.
We propose a quantum algorithm that returns an $\varepsilon$-approximation of its gradient with query complexity $\widetilde{O}(1/\varepsilon)$.
We also propose two quantum algorithms for Hessian estimation, aiming to improve quantum analogs of Newton's method.
arXiv Detail & Related papers (2024-07-04T11:03:48Z) - Near-Optimal Quantum Algorithm for Minimizing the Maximal Loss [16.91814406135565]
We conduct a systematic study of quantum algorithms and lower bounds for minimizing the maximum of $N$ convex, Lipschitz functions.
We prove that quantum algorithms must take $\tilde{\Omega}(\sqrt{N}\epsilon^{-2/3})$ queries to a first-order quantum oracle.
arXiv Detail & Related papers (2024-02-20T06:23:36Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale, and demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs (a toy mirror-descent sketch appears after this list).
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of a general nonconvex conic optimization problem that minimizes a twice differentiable function subject to conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - Robustness of Quantum Algorithms for Nonconvex Optimization [7.191453718557392]
We characterize the domains where quantum algorithms can find an $\epsilon$-SOSP with a poly-logarithmic, polynomial, or exponential number of queries in $d$.
arXiv Detail & Related papers (2022-12-05T19:10:32Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that this runtime guarantee holds for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - No quantum speedup over gradient descent for non-smooth convex optimization [22.16973542453584]
We have black-box access to a (not necessarily smooth) function $f:\mathbb{R}^n \to \mathbb{R}$ and its (sub)gradient.
Our goal is to find an $\epsilon$-approximate minimum of $f$ starting from a point that is distance at most $R$ from the true minimum.
We show that although the function family used in the lower bound is hard for randomized algorithms, it can be solved using $O(GR/\epsilon)$ quantum queries.
arXiv Detail & Related papers (2020-10-05T06:32:47Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
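Several of the entries above, like the main abstract's non-Euclidean extension, instantiate mirror descent. The following is a minimal illustrative sketch, not taken from any of these papers, of deterministic mirror descent over the probability simplex with the negative-entropy mirror map (exponentiated gradient); the function names, the fixed step-size rule, and the toy objective are assumptions.

```python
import numpy as np

def entropic_mirror_descent(subgrad, d, T, G):
    """Mirror descent on the probability simplex with the negative-entropy
    mirror map. For G-bounded subgradients this attains O(G sqrt(log d / T))
    average suboptimality, up to constants; swapping the exact subgradient
    for a quantum (sub)gradient estimate is the kind of instantiation the
    papers above study. Illustrative sketch only.
    """
    x = np.full(d, 1.0 / d)  # uniform start: the entropy map's minimizer
    eta = np.sqrt(np.log(d)) / (G * np.sqrt(T))  # standard fixed step size
    avg = np.zeros(d)
    for _ in range(T):
        g = subgrad(x)
        w = x * np.exp(-eta * g)  # multiplicative-weights mirror step
        x = w / w.sum()           # Bregman projection back onto the simplex
        avg += x
    return avg / T                # averaged iterate carries the guarantee

# Toy usage: minimize <c, x> over the simplex; mass concentrates on argmin c.
c = np.array([0.9, 0.1, 0.5, 0.7])
x_bar = entropic_mirror_descent(lambda x: c, d=4, T=2000, G=1.0)
```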
This list is automatically generated from the titles and abstracts of the papers on this site.