Quantum speedups for linear programming via interior point methods
- URL: http://arxiv.org/abs/2311.03215v2
- Date: Thu, 11 Apr 2024 14:00:04 GMT
- Title: Quantum speedups for linear programming via interior point methods
- Authors: Simon Apers, Sander Gribling
- Abstract summary: We describe a quantum algorithm for solving a linear program with $n$ inequality constraints on $d$ variables.
Our algorithm speeds up the Newton step in the state-of-the-art interior point method of Lee and Sidford.
- Score: 1.8434042562191815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a quantum algorithm based on an interior point method for solving a linear program with $n$ inequality constraints on $d$ variables. The algorithm explicitly returns a feasible solution that is $\varepsilon$-close to optimal, and runs in time $\sqrt{n} \cdot \mathrm{poly}(d,\log(n),\log(1/\varepsilon))$, which is sublinear for tall linear programs (i.e., $n \gg d$). Our algorithm speeds up the Newton step in the state-of-the-art interior point method of Lee and Sidford [FOCS '14]. This requires us to efficiently approximate the Hessian and gradient of the barrier function, and these are our main contributions. To approximate the Hessian, we describe a quantum algorithm for the \emph{spectral approximation} of $A^T A$ for a tall matrix $A \in \mathbb R^{n \times d}$. The algorithm uses leverage score sampling in combination with Grover search, and returns a $\delta$-approximation by making $O(\sqrt{nd}/\delta)$ row queries to $A$. This generalizes an earlier quantum speedup for graph sparsification by Apers and de Wolf [FOCS '20]. To approximate the gradient, we use a recent quantum algorithm for multivariate mean estimation by Cornelissen, Hamoudi and Jerbi [STOC '22]. While a naive implementation introduces a dependence on the condition number of the Hessian, we avoid this by pre-conditioning our random variable using our quantum algorithm for spectral approximation.
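The classical core of the Hessian step is leverage-score row sampling: keep row $i$ with probability proportional to its leverage score and rescale, so that the sampled matrix spectrally approximates $A^T A$. The NumPy sketch below is an illustrative classical rendering of that sampling step (function name and constants are my own, not the paper's; the quantum algorithm additionally uses Grover search to locate the sampled rows with only $O(\sqrt{nd}/\delta)$ row queries).

```python
import numpy as np

def leverage_score_spectral_approx(A, delta, rng=None):
    """Sample rows of A by their (exact) leverage scores so that the
    sampled, rescaled matrix B satisfies
        (1 - delta) A^T A  <=  B^T B  <=  (1 + delta) A^T A
    with high probability.  Classical illustration only: the quantum
    algorithm locates the sampled rows with Grover search instead of a
    linear scan over all n rows."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    # Leverage score of row i is ell_i = a_i^T (A^T A)^{-1} a_i,
    # computed here via a thin QR factorization: ell_i = ||Q_i||^2.
    Q, _ = np.linalg.qr(A)
    ell = np.einsum('ij,ij->i', Q, Q)
    # Oversampled keep-probabilities; the constant 8 is illustrative.
    p = np.minimum(1.0, 8.0 * ell * np.log(d + 1) / delta**2)
    keep = rng.random(n) < p
    # Rescale kept rows by 1/sqrt(p_i) so that E[B^T B] = A^T A.
    return A[keep] / np.sqrt(p[keep])[:, None]

# Usage: compare the spectra of A^T A and B^T B for a tall matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((20000, 10))
B = leverage_score_spectral_approx(A, delta=0.25, rng=rng)
H, Htil = A.T @ A, B.T @ B
print(B.shape[0], "rows kept; relative spectral error:",
      np.linalg.norm(Htil - H, 2) / np.linalg.norm(H, 2))
```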
Related papers
- Quantum spectral method for gradient and Hessian estimation [4.193480001271463]
Gradient descent is one of the most basic algorithms for solving continuous optimization problems.
We propose a quantum algorithm that returns an $\varepsilon$-approximation of its gradient with query complexity $\widetilde{O}(1/\varepsilon)$.
We also propose two quantum algorithms for Hessian estimation, aiming to improve quantum analogs of Newton's method.
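For contrast with the $\widetilde{O}(1/\varepsilon)$ quantum query complexity quoted above, the minimal classical baseline below estimates a gradient with $2d$ central-difference queries to $f$ (an illustrative sketch, not code from the paper):

```python
import numpy as np

def central_difference_gradient(f, x, h=1e-5):
    """Classical baseline: estimate grad f(x) with 2d queries to f.
    The quantum spectral method's improvement is replacing this
    d-dependence with roughly O(1/eps) queries."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

f = lambda v: np.sin(v[0]) + v[0] * v[1] ** 2   # toy smooth function
x0 = np.array([0.3, -1.2])
print(central_difference_gradient(f, x0))       # ~[cos(0.3)+1.44, -0.72]
```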
arXiv Detail & Related papers (2024-07-04T11:03:48Z)
- Faster Linear Systems and Matrix Norm Approximation via Multi-level Sketched Preconditioning [10.690769339903941]
We present a new class of preconditioned iterative methods for solving linear systems of the form $Ax = b$.
Our methods are based on constructing a low-rank Nyström approximation to $A$ using sparse random sketching.
We prove that the convergence of our methods depends on a natural average condition number of $A$, which improves as the rank of the Nyström approximation increases.
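A minimal sketch of the Nyström-preconditioning idea, assuming a Gaussian sketch in place of the paper's sparse random sketching and the standard randomized-Nyström preconditioner construction (all names and constants here are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def nystrom_preconditioner(A, k, mu, rng):
    """Rank-k Nystrom approximation of a PSD matrix A from a Gaussian
    sketch (the paper uses sparse sketches), turned into the standard
    randomized-Nystrom preconditioner for the shifted system A + mu*I."""
    n = A.shape[0]
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega                                  # sketch of the range of A
    nu = 1e-8 * np.linalg.norm(Y)                  # small stability shift
    C = np.linalg.cholesky(Omega.T @ Y + nu * np.eye(k))
    B = np.linalg.solve(C, Y.T).T                  # A_nys = B @ B.T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(s**2 - nu, 0.0)               # Nystrom eigenvalues
    def apply_inv(v):                              # apply P^{-1} to v
        t = U.T @ v
        return v - U @ t + U @ ((lam[-1] + mu) / (lam + mu) * t)
    return LinearOperator((n, n), matvec=apply_inv)

rng = np.random.default_rng(1)
n, mu = 500, 1e-3
G = rng.standard_normal((n, n))
A = G @ np.diag(np.linspace(1.0, 1e4, n) ** -1) @ G.T  # ill-conditioned PSD
b = rng.standard_normal(n)
M = nystrom_preconditioner(A, k=50, mu=mu, rng=rng)
x, info = cg(A + mu * np.eye(n), b, M=M)
print("CG converged" if info == 0 else f"CG info = {info}")
```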
arXiv Detail & Related papers (2024-05-09T15:53:43Z)
- Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^{2}}{\varepsilon^{2}}(\sqrt{k}d + \log(Nd))\big)$ time.
arXiv Detail & Related papers (2023-08-18T17:52:12Z)
- A Quantum Approximation Scheme for k-Means [0.16317061277457]
We give a quantum approximation scheme for the classical $k$-means clustering problem in the QRAM model.
Our quantum algorithm runs in time $\tilde{O}\big(2^{\tilde{O}(k/\varepsilon)}\, \eta^{2} d\big)$.
Unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines.
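Both $k$-means entries above speed up (variants of) the classical Lloyd iteration, whose assignment and recentering steps dominate the cost; a minimal baseline for reference (illustrative only, not from either paper):

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, rng=None):
    """Plain Lloyd iteration: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster."""
    rng = np.random.default_rng() if rng is None else rng
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared distance from every point to every centroid
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0, 3, 6)])
centers, _ = lloyd_kmeans(X, k=3, rng=rng)
print(np.round(np.sort(centers[:, 0]), 1))   # roughly [0. 3. 6.]
```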
arXiv Detail & Related papers (2023-08-16T06:46:37Z)
- Mind the gap: Achieving a super-Grover quantum speedup by jumping to the end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that this guarantee holds for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula.
arXiv Detail & Related papers (2022-12-03T02:45:23Z)
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, expressed as the solution of an intricate non-convex optimization program, can be used as the starting point to devise sample-efficient algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
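One standard way to realize sketching-based ridge solvers is sketch-and-precondition: sketch the stacked matrix $[A; \sqrt{\lambda} I]$, take a QR factorization of the sketch, and use $R$ as a preconditioner for an iterative solver. The code below is a hedged illustration of that pattern (a Gaussian sketch and plain preconditioned Richardson iteration; the paper's algorithm and constants may differ):

```python
import numpy as np

def sketched_ridge(A, b, lam, iters=50, rng=None):
    """Sketch-and-precondition for min ||Ax - b||^2 + lam ||x||^2.
    A Gaussian sketch of the stacked matrix [A; sqrt(lam) I] yields,
    via QR, a preconditioner with R^T R ~ A^T A + lam I; preconditioned
    Richardson iteration then contracts the error geometrically."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    m = 20 * d                                     # oversampled sketch size
    Astack = np.vstack([A, np.sqrt(lam) * np.eye(d)])
    S = rng.standard_normal((m, n + d)) / np.sqrt(m)
    _, R = np.linalg.qr(S @ Astack)
    x = np.zeros(d)
    for _ in range(iters):
        g = A.T @ (b - A @ x) - lam * x            # normal-equation residual
        x += np.linalg.solve(R, np.linalg.solve(R.T, g))
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((5000, 40))
b = rng.standard_normal(5000)
x = sketched_ridge(A, b, lam=1.0, rng=rng)
exact = np.linalg.solve(A.T @ A + np.eye(40), A.T @ b)
print(np.linalg.norm(x - exact) / np.linalg.norm(exact))  # tiny
```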
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
- Quantum speedups for convex dynamic programming [6.643082745560234]
We present a quantum algorithm to solve dynamic programming problems with convex value functions.
The proposed algorithm outputs a quantum-mechanical representation of the value function in time $O(T \gamma^{dT} \mathrm{polylog}(N, (T/\varepsilon)^{d}))$.
arXiv Detail & Related papers (2020-11-23T19:00:11Z)
- Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
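A common classical baseline for spectral sums $\mathrm{tr}(f(A)) = \sum_i f(\lambda_i)$ combines Hutchinson's stochastic trace estimator with a polynomial approximation of $f(A)v$. The sketch below estimates $\mathrm{tr}(\exp(A))$ this way (illustrative only; it assumes a symmetric $A$ with spectrum in $[-1,1]$ so a truncated Taylor series suffices):

```python
import numpy as np

def hutchinson_trace_exp(A, probes=200, order=20, rng=None):
    """Estimate the spectral sum tr(exp(A)) = sum_i exp(lambda_i) for a
    symmetric A: E[z^T exp(A) z] = tr(exp(A)) for Rademacher probes z,
    and exp(A) z is applied with a truncated Taylor series, so only
    matrix-vector products with A are needed."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    total = 0.0
    for _ in range(probes):
        z = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe
        v, w = z.copy(), z.copy()
        for j in range(1, order + 1):              # w ~= exp(A) z
            v = A @ v / j
            w = w + v
        total += z @ w
    return total / probes

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = Q @ np.diag(rng.uniform(-1, 1, 200)) @ Q.T   # symmetric, spectrum in [-1,1]
print(hutchinson_trace_exp(A, rng=rng),          # stochastic estimate
      np.exp(np.linalg.eigvalsh(A)).sum())       # exact spectral sum
```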
arXiv Detail & Related papers (2020-11-12T16:29:45Z)
- An improved quantum-inspired algorithm for linear regression [15.090593955414137]
We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm.
We show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting.
arXiv Detail & Related papers (2020-09-15T17:58:25Z)
- Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of a matroid $M$ such that $\det(\sum_{i \in S} v_i v_i^{\top})$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
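For intuition, the simplest matroid constraint is a cardinality bound, where a classical greedy heuristic based on the matrix determinant lemma already works reasonably well; the paper's algorithms handle general matroid constraints with provable guarantees. A hedged sketch (function name and the tiny ridge term are my own):

```python
import numpy as np

def greedy_detmax(V, r, eps=1e-6):
    """Greedily pick r rows of V to maximize det(sum_{i in S} v_i v_i^T)
    under a cardinality constraint (the simplest matroid).  By the matrix
    determinant lemma, det(M + v v^T) = det(M) (1 + v^T M^{-1} v), so each
    step adds the vector with the largest Mahalanobis norm v^T M^{-1} v."""
    n, d = V.shape
    M = eps * np.eye(d)              # tiny ridge so det is nonzero early on
    chosen = []
    for _ in range(r):
        Minv = np.linalg.inv(M)
        gains = np.einsum('ij,jk,ik->i', V, Minv, V)   # v_i^T M^{-1} v_i
        gains[chosen] = -np.inf                        # no repeats
        i = int(np.argmax(gains))
        chosen.append(i)
        M += np.outer(V[i], V[i])
    return chosen, np.linalg.det(M - eps * np.eye(d))

rng = np.random.default_rng(5)
V = rng.standard_normal((100, 5))
S, det_val = greedy_detmax(V, r=8)
print(sorted(S), det_val)
```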
arXiv Detail & Related papers (2020-04-16T19:16:38Z)