A quantum central path algorithm for linear optimization
- URL: http://arxiv.org/abs/2311.03977v1
- Date: Tue, 7 Nov 2023 13:26:20 GMT
- Title: A quantum central path algorithm for linear optimization
- Authors: Brandon Augustino, Jiaqi Leng, Giacomo Nannicini, Tamás Terlaky and Xiaodi Wu
- Abstract summary: We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
We obtain an exact solution to a linear optimization problem involving $m$ constraints and $n$ variables using at most $\mathcal{O} \left( (m + n) \text{nnz} (A) \kappa (\mathcal{M}) L \cdot \text{polylog} \left( m, n, \kappa (\mathcal{M}) \right) \right)$ elementary gates and $\mathcal{O} \left( \text{nnz} (A) L \right)$ classical arithmetic operations.
- Score: 5.774924046750588
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel quantum algorithm for solving linear optimization problems
by quantum-mechanical simulation of the central path. While interior point
methods follow the central path with an iterative algorithm that works with
successive linearizations of the perturbed KKT conditions, we perform a single
simulation working directly with the nonlinear complementarity equations.
Combining our approach with iterative refinement techniques, we obtain an exact
solution to a linear optimization problem involving $m$ constraints and $n$
variables using at most $\mathcal{O} \left( (m + n) \text{nnz} (A) \kappa
(\mathcal{M}) L \cdot \text{polylog} \left(m, n, \kappa (\mathcal{M}) \right)
\right)$ elementary gates and $\mathcal{O} \left( \text{nnz} (A) L \right)$
classical arithmetic operations, where $ \text{nnz} (A)$ is the total number of
non-zero elements found in the constraint matrix, $L$ denotes binary input
length of the problem data, and $\kappa (\mathcal{M})$ is a condition number
that depends only on the problem data.
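While the paper performs a single quantum simulation of the nonlinear complementarity equations, the classical baseline it contrasts with is an interior-point method that follows the central path via Newton steps on the perturbed KKT conditions $Ax = b$, $A^\top y + s = c$, $x_i s_i = \mu$. A minimal classical sketch of that baseline (the toy LP, starting point, and step parameters are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def ipm_lp(A, b, c, iters=80, sigma=0.3):
    """Primal-dual path following for min c.x s.t. Ax = b, x >= 0:
    each iteration takes one Newton step on the perturbed KKT system
    A x = b,  A^T y + s = c,  x_i s_i = mu,  driving mu -> 0."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)
    for _ in range(iters):
        mu = sigma * (x @ s) / n          # target duality measure on the path
        r_p = b - A @ x                   # primal residual
        r_d = c - A.T @ y - s             # dual residual
        r_c = mu - x * s                  # perturbed complementarity residual
        d = x / s                         # diagonal of X S^{-1}
        # normal equations: (A D A^T) dy = r_p + A D (r_d - r_c / x)
        M = A @ (d[:, None] * A.T)
        dy = np.linalg.solve(M, r_p + A @ (d * (r_d - r_c / x)))
        dx = d * (A.T @ dy - r_d + r_c / x)
        ds = (r_c - s * dx) / x
        # damped step length keeping (x, s) strictly positive
        a = 0.9995 * min(1.0, *(-x[dx < 0] / dx[dx < 0]),
                              *(-s[ds < 0] / ds[ds < 0]))
        x, y, s = x + a * dx, y + a * dy, s + a * ds
    return x, y, s

# toy LP: min x1 + 2 x2  s.t.  x1 + x2 + x3 = 1, x >= 0  (optimum x = (0, 0, 1))
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 0.0])
x, y, s = ipm_lp(A, b, c)
```

The quantum algorithm avoids exactly this iterative linearization: it simulates the central path dynamics directly rather than solving a Newton system per step.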
Related papers
- Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems [9.30306458153248]
We consider the spectral tail condition number, $\kappa_\ell$, defined as the ratio between the $\ell$-th largest and the smallest singular value of the matrix representing the system.
Some of the implications of our result, and of the use of $\kappa_\ell$, include a direct improvement over a fine-grained analysis of the Conjugate Gradient method.
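The spectral tail condition number is straightforward to compute for a dense matrix; the example matrix below is an illustrative assumption:

```python
import numpy as np

def kappa_ell(A, ell):
    """Spectral tail condition number: ratio of the ell-th largest
    singular value to the smallest singular value of A."""
    sv = np.linalg.svd(A, compute_uv=False)   # sorted in descending order
    return sv[ell - 1] / sv[-1]

# dropping the dominant singular value yields a much milder condition number
A = np.diag([100.0, 10.0, 2.0, 1.0])
k1 = kappa_ell(A, 1)   # the usual condition number
k2 = kappa_ell(A, 2)   # the tail condition number after the top value
```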
arXiv Detail & Related papers (2024-05-09T14:56:49Z)
- Structured Semidefinite Programming for Recovering Structured Preconditioners [41.28701750733703]
We give an algorithm which, given positive definite $\mathbf{K} \in \mathbb{R}^{d \times d}$ with $\mathrm{nnz}(\mathbf{K})$ nonzero entries, computes an $\epsilon$-optimal diagonal preconditioner in time.
We attain our results via new algorithms for a class of semidefinite programs we call matrix-dictionary approximation SDPs.
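For contrast with the SDP-based construction above, the classic heuristic diagonal preconditioner is Jacobi scaling; this sketch (with an assumed toy matrix) only illustrates what a diagonal preconditioner does, not the paper's $\epsilon$-optimal method:

```python
import numpy as np

def jacobi_precondition(K):
    """Symmetric Jacobi scaling P = D^{-1/2} K D^{-1/2} with D = diag(K):
    a cheap heuristic diagonal preconditioner, not the SDP-based
    epsilon-optimal preconditioner constructed in the paper."""
    d = 1.0 / np.sqrt(np.diag(K))
    return d[:, None] * K * d[None, :]

K = np.array([[100.0, 1.0],
              [1.0, 0.02]])          # badly scaled positive definite matrix
P = jacobi_precondition(K)
before, after = np.linalg.cond(K), np.linalg.cond(P)
```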
arXiv Detail & Related papers (2023-10-27T16:54:29Z)
- An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
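A standard (non-oblivious) mirror descent over the probability simplex illustrates the family of methods involved; unlike the paper's algorithms, this sketch assumes a hand-picked step size, and the objective is an illustrative assumption:

```python
import numpy as np

def entropic_mirror_descent(grad, x0, steps=200, eta=0.5):
    """Mirror descent over the probability simplex with the entropy
    mirror map, i.e. the multiplicative-weights update
    x <- x * exp(-eta * grad(x)), renormalized to sum to one."""
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))
        x /= x.sum()
    return x

# minimize the linear function <c, x> over the simplex;
# the minimizer puts all mass on the smallest coordinate of c
c = np.array([0.3, 0.1, 0.7])
x = entropic_mirror_descent(lambda x: c, np.ones(3) / 3)
```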
arXiv Detail & Related papers (2023-06-30T08:34:29Z)
- Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
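The BMF objective can be made concrete with a brute-force rank-1 solver; this exponential-time sketch (on an assumed tiny matrix) only illustrates the objective the paper approximates efficiently:

```python
import itertools
import numpy as np

def bmf_rank1(A):
    """Exhaustive rank-1 binary matrix factorization: search all binary
    vectors u, v minimizing ||A - u v^T||_F^2. Exponential time, so only
    viable for tiny matrices."""
    m, n = A.shape
    best = None
    for u in itertools.product([0, 1], repeat=m):
        for v in itertools.product([0, 1], repeat=n):
            err = int(((A - np.outer(u, v)) ** 2).sum())
            if best is None or err < best[0]:
                best = (err, np.array(u), np.array(v))
    return best

A = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 0]])
err, u, v = bmf_rank1(A)    # A itself has binary rank 1, so err should be 0
```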
arXiv Detail & Related papers (2023-06-02T18:55:27Z)
- Efficient quantum algorithms for solving quantum linear system problems [0.0]
We present two quantum algorithms for solving the problem of finding the right singular vector with singular value zero of an augmented matrix $C$.
Both algorithms meet the optimal query complexity in $\kappa$, and are simpler than previous algorithms.
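The classical analogue of this reduction is easy to state: a solution of $Ax = b$ corresponds to a right singular vector of the augmented matrix $C = [A \mid -b]$ with singular value zero. A sketch with an assumed toy system:

```python
import numpy as np

def solve_via_nullspace(A, b):
    """Solve A x = b by finding the right singular vector of the
    augmented matrix C = [A | -b] with singular value zero, then
    rescaling so its last coordinate equals 1."""
    C = np.hstack([A, -b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                  # singular vector of the smallest singular value
    return v[:-1] / v[-1]       # normalize the homogeneous coordinate

A = np.array([[2.0, 0.0],
              [0.0, 4.0]])
b = np.array([2.0, 8.0])
x = solve_via_nullspace(A, b)   # expect x = (1, 2)
```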
arXiv Detail & Related papers (2022-08-14T02:49:26Z)
- Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity [15.18055488087588]
We give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}\left( \sum_{i=1}^{n} d_i \log (1/\epsilon) \right)$ gradient computations.
Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
arXiv Detail & Related papers (2022-08-07T20:53:42Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
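The basic sketch-and-solve idea behind such algorithms can be illustrated with a Gaussian sketch; the problem sizes and sketch dimension below are illustrative assumptions, and the paper's iterative algorithm is more refined than this one-shot version:

```python
import numpy as np

def sketched_ridge(A, b, lam, sketch_rows, rng):
    """Sketch-and-solve ridge regression: replace (A, b) by (S A, S b)
    for a Gaussian sketch S, then solve the much smaller ridge problem
    min ||S A x - S b||^2 + lam ||x||^2 exactly."""
    m = sketch_rows
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    SA, Sb = S @ A, S @ b
    return np.linalg.solve(SA.T @ SA + lam * np.eye(A.shape[1]), SA.T @ Sb)

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true + 0.01 * rng.standard_normal(2000)
x = sketched_ridge(A, b, lam=1e-3, sketch_rows=200, rng=rng)
```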
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
- Learning Stochastic Shortest Path with Linear Function Approximation [74.08819218747341]
We study the stochastic shortest path (SSP) problem in reinforcement learning with linear function approximation, where the transition kernel is represented as a linear mixture of unknown models.
We propose a novel algorithm for learning the linear mixture SSP, which can attain a $\tilde{O}(d B_{\star}^{1.5} \sqrt{K} / c_{\min})$ regret.
arXiv Detail & Related papers (2021-10-25T08:34:00Z)
- Global Convergence of Gradient Descent for Asymmetric Low-Rank Matrix Factorization [49.090785356633695]
We study the asymmetric low-rank factorization problem $\min_{\mathbf{U} \in \mathbb{R}^{m \times d},\, \mathbf{V} \in \mathbb{R}^{n \times d}} \frac{1}{2} \| \mathbf{U} \mathbf{V}^\top - \mathbf{\Sigma} \|_F^2$ over the factors $\mathbf{U}$ and $\mathbf{V}$.
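Plain gradient descent on this objective is short to write down; the target matrix, initialization scale, and step size below are illustrative assumptions rather than the paper's analyzed setting:

```python
import numpy as np

def asymmetric_lowrank_gd(Sigma, d, steps=1000, eta=0.05, seed=0):
    """Gradient descent on f(U, V) = 0.5 ||U V^T - Sigma||_F^2
    from a small random initialization of both factors."""
    rng = np.random.default_rng(seed)
    m, n = Sigma.shape
    U = 0.1 * rng.standard_normal((m, d))
    V = 0.1 * rng.standard_normal((n, d))
    for _ in range(steps):
        R = U @ V.T - Sigma                         # residual
        U, V = U - eta * R @ V, V - eta * R.T @ U   # simultaneous update
    return U, V

Sigma = np.outer([1.0, 2.0], [3.0, 1.0, 2.0])       # rank-1 target
U, V = asymmetric_lowrank_gd(Sigma, d=1)
```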
arXiv Detail & Related papers (2021-06-27T17:25:24Z)
- Solving the Robust Matrix Completion Problem via a System of Nonlinear Equations [28.83358353043287]
We consider the problem of robust matrix completion, which aims to recover a low-rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of their sum $M = L_* + S_* \in \mathbb{R}^{m \times n}$.
The algorithm is highly parallelizable and suitable for large scale problems.
Numerical simulations show that the simple method works as expected and is comparable with state-of-the-art methods.
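A simple alternating scheme conveys the low-rank-plus-sparse decomposition in the fully observed case; this is a generic alternating-projection sketch with an assumed synthetic instance and threshold, not the paper's nonlinear-equations method:

```python
import numpy as np

def alt_lowrank_sparse(M, r, tau, iters=25):
    """Alternate between a rank-r truncated SVD of M - S (low-rank update)
    and hard-thresholding of the residual M - L (sparse update), for a
    fully observed M = L + S."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * sv[:r]) @ Vt[:r]                # rank-r update
        S = np.where(np.abs(M - L) > tau, M - L, 0.0)   # sparse update
    return L, S

# synthetic instance: rank-1 L_true plus two large sparse corruptions
u, v = np.linspace(1, 2, 20), np.linspace(1, 3, 20)
L_true = np.outer(u, v)
S_true = np.zeros((20, 20))
S_true[3, 7], S_true[12, 2] = 10.0, -8.0
L, S = alt_lowrank_sparse(L_true + S_true, r=1, tau=3.0)
```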
arXiv Detail & Related papers (2020-03-24T17:28:15Z)
- Second-order Conditional Gradient Sliding [79.66739383117232]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
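The oracle model is easiest to see in the vanilla conditional gradient (Frank-Wolfe) method that SOCGS accelerates; the quadratic objective and simplex oracle below are illustrative assumptions:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=2000):
    """Vanilla conditional gradient: move toward the vertex returned by
    the linear minimization oracle (LMO) with the classic 2/(t+2) step
    size; the feasible region is touched only through the oracle."""
    x = x0.copy()
    for t in range(steps):
        v = lmo(grad(x))                   # argmin_{v in C} <g, v>
        x = x + (2.0 / (t + 2)) * (v - x)  # convex combination stays feasible
    return x

# minimize ||x - p||^2 over the probability simplex
p = np.array([0.2, 0.5, 0.3])
lmo = lambda g: np.eye(3)[np.argmin(g)]    # simplex LP oracle: best vertex
x = frank_wolfe(lambda x: 2.0 * (x - p), lmo, np.eye(3)[0])
```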
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.