Improving quantum linear system solvers via a gradient descent
perspective
- URL: http://arxiv.org/abs/2109.04248v1
- Date: Thu, 9 Sep 2021 13:16:28 GMT
- Title: Improving quantum linear system solvers via a gradient descent
perspective
- Authors: Sander Gribling, Iordanis Kerenidis, Dániel Szilágyi
- Abstract summary: We revisit quantum linear system solvers from the perspective of convex optimization.
This leads to a considerable constant-factor improvement in the runtime.
We show how the optimal quantum linear system solver of Childs, Kothari, and Somma is related to the gradient descent algorithm.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving systems of linear equations is one of the most important primitives
in quantum computing that has the potential to provide a practical quantum
advantage in many different areas, including in optimization, simulation, and
machine learning. In this work, we revisit quantum linear system solvers from
the perspective of convex optimization, and in particular gradient descent-type
algorithms. This leads to a considerable constant-factor improvement in the
runtime (or, conversely, an error several orders of magnitude smaller with the
same runtime/circuit depth).
More precisely, we first show how the asymptotically optimal quantum linear
system solver of Childs, Kothari, and Somma is related to the gradient descent
algorithm on the convex function $\|A\vec x - \vec b\|_2^2$: their linear
system solver is based on a truncation in the Chebyshev basis of the
degree-$(t-1)$ polynomial (in $A$) that maps the initial solution $\vec{x}_1 :=
\vec{b}$ to the $t$-th iterate $\vec{x}_t$ in the basic gradient descent
algorithm. Then, instead of starting from the basic gradient descent algorithm,
we use the optimal Chebyshev iteration method (which can be viewed as an
accelerated gradient descent algorithm) and show that this leads to
considerable improvements in the quantum solver.
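To make the connection concrete, here is a minimal classical sketch (plain numpy, synthetic data) contrasting basic gradient descent on $\|A\vec x - \vec b\|_2^2$ with the classical Chebyshev iteration, for a symmetric positive definite $A$ with spectrum in $[1/\kappa, 1]$. The matrix, step size, and iteration count are illustrative assumptions; the quantum implementation via Chebyshev-basis polynomials is of course not modeled.

```python
import numpy as np

# Contrast the two iterations the abstract relates: basic gradient descent
# on f(x) = ||A x - b||_2^2 (whose t-th iterate is a degree-(t-1) polynomial
# in A applied to b) and the classical Chebyshev iteration on A x = b.
# A is a synthetic SPD matrix with spectrum in [1/kappa, 1]; all sizes and
# step sizes are illustrative.

rng = np.random.default_rng(0)
n, kappa = 64, 50.0
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(1.0 / kappa, 1.0, n)
A = Q @ np.diag(eigs) @ Q.T
x_star = rng.standard_normal(n)
b = A @ x_star

def gradient_descent(A, b, steps, eta):
    """x_{t+1} = x_t - eta * grad f(x_t)  with  f(x) = ||A x - b||_2^2."""
    x = b.copy()                                  # x_1 := b, as in the paper
    for _ in range(steps):
        x = x - eta * 2.0 * A.T @ (A @ x - b)
    return x

def chebyshev(A, b, steps, lmin, lmax):
    """Standard Chebyshev iteration for A x = b, spectrum in [lmin, lmax]."""
    d, c = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
    x = np.zeros_like(b)
    r = b - A @ x
    alpha, p = 0.0, np.zeros_like(b)
    for i in range(1, steps + 1):
        if i == 1:
            alpha, p = 1.0 / d, r.copy()
        else:
            beta = 0.5 * (c * alpha) ** 2 if i == 2 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = b - A @ x
    return x

t = 200
eta = 1.0 / (2.0 * eigs.max() ** 2)               # safe step size for f
for name, x in [("gradient descent", gradient_descent(A, b, t, eta)),
                ("chebyshev", chebyshev(A, b, t, eigs.min(), eigs.max()))]:
    print(f"{name:16s} error after {t} iterations: "
          f"{np.linalg.norm(x - x_star):.2e}")
```

The slow modes of gradient descent contract roughly like $1-\lambda^2$ per step, while the Chebyshev error decays like $\big((\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)\big)^t$, which is the acceleration the paper exploits.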
Related papers
- A Catalyst Framework for the Quantum Linear System Problem via the Proximal Point Algorithm [9.804179673817574]
We propose a new quantum algorithm for the quantum linear system problem (QLSP) inspired by the classical proximal point algorithm (PPA).
Our proposed method can be viewed as a meta-algorithm that allows inverting a modified matrix via an existing $\texttt{QLSP\_solver}$.
By carefully choosing the step size $\eta$, the proposed algorithm can effectively precondition the linear system to mitigate the dependence on condition numbers that hindered the applicability of previous approaches.
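A minimal classical sketch of the proximal-point idea, assuming an SPD system and an illustrative step size $\eta$; the inner solve of the modified matrix $\eta A + I$, which the paper would delegate to a QLSP solver, is done here with numpy.

```python
import numpy as np

# Classical sketch of the proximal point iteration for A x = b (A SPD here):
#     x_{k+1} = (eta * A + I)^{-1} (eta * b + x_k)
# Each outer step inverts the *modified* matrix eta*A + I, which is much
# better conditioned than A for moderate eta.  All numbers are illustrative.

rng = np.random.default_rng(1)
n, kappa = 32, 1000.0
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0 / kappa, 1.0, n)) @ Q.T
x_star = rng.standard_normal(n)
b = A @ x_star

eta = 50.0                                   # step size: conditioning vs. speed
M = eta * A + np.eye(n)                      # modified, better-conditioned matrix
print(f"cond(A)         : {np.linalg.cond(A):9.1f}")
print(f"cond(eta*A + I) : {np.linalg.cond(M):9.1f}")

x = np.zeros(n)
for _ in range(100):                         # outer PPA iterations
    x = np.linalg.solve(M, eta * b + x)      # inner solve: QLSP solver's job
print(f"PPA error after 100 iterations: {np.linalg.norm(x - x_star):.2e}")
```

Smaller $\eta$ makes each inner system easier but slows the outer loop (the per-mode contraction factor is $1/(1+\eta\lambda)$), which is the trade-off the step size choice navigates.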
arXiv Detail & Related papers (2024-06-19T23:15:35Z) - A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\;\textsf{nnz}(A)\,\frac{R_1}{\dots}\right)$ ...
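For intuition, here is a small classical sketch of the central path itself (not the paper's quantum-mechanical simulation of it): for an LP $\min\, c^\top x$ s.t. $Gx \le h$, the path is $x^*(t) = \arg\min\, t\,c^\top x - \sum_i \log(h_i - g_i^\top x)$, followed below with Newton steps while $t$ grows geometrically. The tiny instance and the schedule are illustrative assumptions.

```python
import numpy as np

# Follow the central path of  min c^T x  s.t.  G x <= h  using the
# log barrier.  Illustrative LP: minimize x1 + 2*x2 over
# {x >= 0, x1 + x2 <= 1}, whose optimum is (0, 0).

c = np.array([1.0, 2.0])
G = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
h = np.array([0.0, 0.0, 1.0])

x = np.array([0.25, 0.25])                  # strictly feasible start
t = 1.0
for _ in range(40):                         # outer loop: increase t
    for _ in range(20):                     # inner loop: Newton on the barrier
        s = h - G @ x                       # slacks, must stay positive
        grad = t * c + G.T @ (1.0 / s)
        hess = G.T @ np.diag(1.0 / s**2) @ G
        dx = np.linalg.solve(hess, -grad)
        step = 1.0                          # backtrack to stay strictly feasible
        while np.any(h - G @ (x + step * dx) <= 0):
            step *= 0.5
        x = x + step * dx
    t *= 1.5
print("central-path solution:", x)          # approaches the optimum (0, 0)
```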
arXiv Detail & Related papers (2023-11-07T13:26:20Z) - GRAPE optimization for open quantum systems with time-dependent
decoherence rates driven by coherent and incoherent controls [77.34726150561087]
The GRadient Ascent Pulse Engineering (GRAPE) method is widely used for optimization in quantum control.
We adapt the GRAPE method to optimize objective functionals for open quantum systems driven by both coherent and incoherent controls.
The efficiency of the algorithm is demonstrated through numerical simulations for the state-to-state transition problem.
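A toy GRAPE-style sketch for a closed one-qubit system (the open-system, incoherent-control machinery of the paper is not modeled): piecewise-constant pulses are optimized by gradient ascent on a state-transfer fidelity, with finite differences standing in for the analytic GRAPE gradient. All Hamiltonians and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# GRAPE-style pulse optimization: H(t) = H0 + u_k * Hc on time slice k,
# maximize |<target|psi(T)>|^2 over the piecewise-constant amplitudes u_k.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, Hc = 0.5 * sz, sx                       # drift and control Hamiltonians
psi0 = np.array([1, 0], dtype=complex)      # start in |0>
target = np.array([0, 1], dtype=complex)    # aim for |1>
K, dt = 20, 0.1                             # number of slices, slice duration

def fidelity(u):
    psi = psi0
    for uk in u:                            # piecewise-constant evolution
        psi = expm(-1j * (H0 + uk * Hc) * dt) @ psi
    return abs(np.vdot(target, psi)) ** 2

u = 0.1 * np.ones(K)                        # initial guess for the pulse
lr, eps = 2.0, 1e-6
for _ in range(300):                        # gradient ascent on fidelity
    f0 = fidelity(u)
    grad = np.array([(fidelity(u + eps * np.eye(K)[k]) - f0) / eps
                     for k in range(K)])
    u += lr * grad

print(f"final state-transfer fidelity: {fidelity(u):.4f}")  # should approach 1
```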
arXiv Detail & Related papers (2023-07-17T13:37:18Z) - Efficient quantum linear solver algorithm with detailed running costs [0.0]
We introduce a quantum linear solver algorithm combining ideas from adiabatic quantum computing with filtering techniques based on quantum signal processing.
Our protocol reduces the cost of quantum linear solvers over the state of the art by close to an order of magnitude for early implementations.
arXiv Detail & Related papers (2023-05-19T00:07:32Z) - Pure Quantum Gradient Descent Algorithm and Full Quantum Variational
Eigensolver [0.7149735232319818]
Gradient descent is a widely adopted optimization method.
We propose a novel quantum-based gradient calculation method that requires only a single oracle calculation.
We successfully implemented the quantum gradient descent algorithm and applied it to the Variational Quantum Eigensolver (VQE).
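For context, a classically simulated toy: gradient descent on a one-qubit VQE using the standard parameter-shift rule. The Hamiltonian, ansatz, and learning rate are illustrative assumptions, and the single-oracle quantum gradient construction of the paper is not modeled.

```python
import numpy as np

# One-qubit VQE by gradient descent.  Ansatz |psi(theta)> = Ry(theta)|0>,
# Hamiltonian H = sigma_x + sigma_z, ground energy -sqrt(2).  Since Ry is
# generated by sigma_y/2, the parameter-shift rule gives the exact gradient:
#     dE/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = sx + sz

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return np.real(np.vdot(psi, H @ psi))

theta, lr = 0.3, 0.4
for _ in range(100):                         # gradient descent on E(theta)
    grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
    theta -= lr * grad

print(f"VQE energy    : {energy(theta):.6f}")
print(f"exact minimum : {np.linalg.eigvalsh(H).min():.6f}")   # -sqrt(2)
```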
arXiv Detail & Related papers (2023-05-07T05:52:41Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Quantum Algorithm for Solving a Quadratic Nonlinear System of Equations [0.22940141855172036]
The complexity of our algorithm is $O(\mathrm{polylog}(n/\epsilon))$, which provides an exponential improvement over the optimal classical algorithm in dimension $n$.
Our algorithm exponentially accelerates the solution of QNSE and has wide applications in all kinds of nonlinear problems.
arXiv Detail & Related papers (2021-12-03T00:27:16Z) - Quadratic Unconstrained Binary Optimisation via Quantum-Inspired
Annealing [58.720142291102135]
We present a classical algorithm to find approximate solutions to instances of quadratic unconstrained binary optimisation.
We benchmark our approach for large scale problem instances with tuneable hardness and planted solutions.
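To fix the problem class, here is a generic single-bit-flip simulated-annealing sketch for QUBO ($\min_{x\in\{0,1\}^n} x^\top Q x$). The instance and cooling schedule are arbitrary assumptions, and this is plain simulated annealing, not the paper's quantum-inspired dynamics.

```python
import numpy as np

# Metropolis simulated annealing for QUBO: minimize x^T Q x over x in {0,1}^n
# by proposing single bit flips under a linearly decreasing temperature.

rng = np.random.default_rng(2)
n = 40
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                       # symmetric QUBO matrix

x = rng.integers(0, 2, n).astype(float)
E = x @ Q @ x
best = E
steps = 20000
for t in range(steps):
    T = 2.0 * (1.0 - t / steps) + 1e-3  # linear cooling schedule
    i = rng.integers(n)
    x[i] = 1.0 - x[i]                   # propose one bit flip
    E_new = x @ Q @ x
    if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
        E = E_new                       # accept the move
        best = min(best, E)
    else:
        x[i] = 1.0 - x[i]               # reject: undo the flip

print(f"best QUBO energy found: {best:.3f}")
```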
arXiv Detail & Related papers (2021-08-18T09:26:17Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we study gradient-free (zeroth-order) minimax optimization in a non-strongly-convex setting.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
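As background, the two-point zeroth-order gradient estimator that such methods build on, shown on a toy quadratic; the paper's variance reduction and minimax structure are not reproduced here, and all parameters are illustrative assumptions.

```python
import numpy as np

# Two-point zeroth-order gradient estimator:
#     g = d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u,
# with u a random unit direction, so that E[g] approximates grad f(x).
# Only function evaluations (queries) are used, never gradients.

rng = np.random.default_rng(3)
d = 20
A = np.diag(np.linspace(1.0, 10.0, d))
f = lambda x: 0.5 * x @ A @ x               # toy smooth objective

x = rng.standard_normal(d)
mu, lr = 1e-4, 0.005
for _ in range(5000):                       # two function queries per step
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                  # random unit direction
    g = d * (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    x -= lr * g                             # zeroth-order "gradient" step

print(f"f(x) after 5000 query-only steps: {f(x):.2e}")
```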
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper, we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconvex-nonconcave min-max problems.
Our experiments compare the empirical behavior of adaptive and non-adaptive gradient algorithms in GAN training.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)