Exponentially Better Bounds for Quantum Optimization via Dynamical Simulation
- URL: http://arxiv.org/abs/2502.04285v1
- Date: Thu, 06 Feb 2025 18:32:26 GMT
- Title: Exponentially Better Bounds for Quantum Optimization via Dynamical Simulation
- Authors: Ahmet Burak Catli, Sophia Simon, Nathan Wiebe
- Abstract summary: We provide several quantum algorithms for continuous optimization that do not require any gradient estimation.
We encode the optimization problem into the dynamics of a physical system and coherently simulate the time evolution.
- Score: 0.5097809301149342
- License:
- Abstract: We provide several quantum algorithms for continuous optimization that do not require any gradient estimation. Instead, we encode the optimization problem into the dynamics of a physical system and coherently simulate the time evolution. This allows us, in certain cases, to obtain exponentially better query upper bounds relative to the best known upper bounds for gradient-based optimization schemes which utilize quantum computers only for the evaluation of gradients. Our first two algorithms can find local optima of a differentiable function $f: \mathbb{R}^N \rightarrow \mathbb{R}$ by simulating either classical or quantum dynamics with friction via a time-dependent Hamiltonian. We show that these methods require $O(N\kappa^2/h_x^2\epsilon)$ queries to a phase oracle to find an $\epsilon$-approximate local optimum of a locally quadratic objective function, where $\kappa$ is the condition number of the Hessian matrix and $h_x$ is the discretization spacing. In contrast, we show that gradient-based methods require $O(N(1/\epsilon)^{\kappa \log(3)/4})$ queries. Our third algorithm can find the global optimum of $f$ by preparing a classical low-temperature thermal state via simulation of the classical Liouvillian operator associated with the Nos\'e Hamiltonian. We use results from the quantum thermodynamics literature to bound the thermalization time for the discrete system. Additionally, we analyze barren plateau effects that commonly plague quantum optimization algorithms and observe that our approach is vastly less sensitive to this problem than standard gradient-based optimization. Our results suggest that these dynamical optimization approaches may be far more scalable for future quantum machine learning, optimization and variational experiments than was widely believed.
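To make the dissipative-dynamics idea above more concrete, the following is a minimal classical sketch, not the paper's quantum algorithm or its phase-oracle query model: it integrates damped Newtonian dynamics $\ddot{x} = -\nabla f(x) - \gamma \dot{x}$ on a locally quadratic objective, building the force from function queries alone with spacing $h_x$, which loosely mirrors the gradient-estimation-free, query-based viewpoint of the abstract. The integrator, friction coefficient, step size, and step count are illustrative assumptions.

```python
import numpy as np

def dissipative_descent(f, x0, h_x=1e-3, dt=1e-2, gamma=2.0, steps=5000):
    """Classical toy analogue of optimization via dissipative dynamics.

    Integrates x'' = -grad f(x) - gamma * x' with a central-difference
    force built only from function queries (spacing h_x). Parameter
    names and values are illustrative, not taken from the paper.
    """
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    n_queries = 0
    for _ in range(steps):
        # Central-difference force: 2N function queries per step.
        force = np.empty_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h_x
            force[i] = -(f(x + e) - f(x - e)) / (2.0 * h_x)
            n_queries += 2
        # Semi-implicit Euler step for the damped dynamics.
        v += dt * (force - gamma * v)
        x += dt * v
    return x, n_queries

# Example: locally quadratic objective with condition number kappa = 10.
if __name__ == "__main__":
    H = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ H @ x
    x_min, queries = dissipative_descent(f, x0=[3.0, -2.0])
    print(x_min, queries)  # x_min should be close to the optimum at the origin
```

The friction term is what lets the trajectory settle into a local optimum instead of oscillating indefinitely; the quantum algorithms described in the abstract achieve the analogous effect coherently via a time-dependent Hamiltonian, with $f$ accessed only through a phase oracle.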
Related papers
- Near-Optimal Parameter Tuning of Level-1 QAOA for Ising Models [3.390330377512402]
We show how to reduce the two-dimensional $(\gamma, \beta)$ search to a one-dimensional search over $\gamma$, with $\beta^*$ computed analytically.
This approach is validated using Recursive QAOA (RQAOA), where it consistently outperforms both coarsely optimised RQAOA and semidefinite programs.
arXiv Detail & Related papers (2025-01-27T19:00:00Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Quantum Lower Bounds for Finding Stationary Points of Nonconvex
Functions [8.495749905129278]
Despite progress in classical lower bounds for nonconvex optimization, quantum lower bounds are still widely open.
$\Omega\big(\epsilon^{-\frac{1+p}{p}}\big)$ regarding the first setting.
$\Omega(\epsilon^{-4})$ regarding the second setting.
$\Omega(\epsilon^{-3})$ if the stochastic gradient function is mean-squared smooth.
arXiv Detail & Related papers (2022-12-07T19:02:36Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - The Quantum Approximate Optimization Algorithm performance with low
entanglement and high circuit depth [0.0]
Variational quantum algorithms constitute one of the most widespread methods for using current noisy quantum computers.
We investigate entanglement's role in these methods for solving optimization problems.
We conclude that entanglement plays a minor role in the MaxCut and Exact Cover 3 problems studied here.
arXiv Detail & Related papers (2022-07-07T16:21:36Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with
Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms to solve nonconvex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - Progress towards analytically optimal angles in quantum approximate
optimisation [0.0]
The Quantum Approximate Optimisation Algorithm is a $p$-layer, time-variable split-operator method executed on a quantum processor.
We prove that optimal parameters for the $p=1$ layer reduce to one free variable, and in the thermodynamic limit we recover optimal angles.
We moreover demonstrate that conditions for vanishing gradients of the overlap function share a similar form, which leads to a linear relation between circuit parameters, independent of the number of qubits.
arXiv Detail & Related papers (2021-09-23T18:00:13Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies the iteration complexity of zeroth-order (ZO) optimization, which does not require first-order information.
We show that, with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Approximate optimization of MAXCUT with a local spin algorithm [0.0]
Local tensor methods are a class of optimization algorithms introduced in [Hastings, arXiv:1905.07047v2].
We investigate performance in practice through benchmarking experiments on instances of the MAXCUT problem.
We find that the time to solution achieved by the local tensor method is highly uncorrelated with that achieved by a widely used commercial optimization package.
arXiv Detail & Related papers (2020-08-13T18:00:00Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we consider the nonconvex-strongly-concave setting of a gradient-free minimax optimization problem.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.