Quantum Kaczmarz Algorithm for Solving Linear Algebraic Equations
- URL: http://arxiv.org/abs/2601.01342v1
- Date: Sun, 04 Jan 2026 03:13:36 GMT
- Title: Quantum Kaczmarz Algorithm for Solving Linear Algebraic Equations
- Authors: Nhat A. Nghiem, Tuan K. Do, Trung V. Phan
- Abstract summary: We introduce a quantum linear system solving algorithm based on the Kaczmarz method. Its simplicity and low memory cost make it a practical choice across data regression, tomographic reconstruction, and optimization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a quantum linear system solving algorithm based on the Kaczmarz method, a widely used workhorse for large linear systems and least-squares problems that updates the solution by enforcing one equation at a time. Its simplicity and low memory cost make it a practical choice across data regression, tomographic reconstruction, and optimization. In contrast to many existing quantum linear solvers, our method does not rely on oracle access to query entries, relaxing a key practicality bottleneck. In particular, when the rank of the system of interest is sufficiently small and the rows of the matrix of interest admit an appropriate structure, we achieve circuit complexity $\mathcal{O}\left(\frac{1}{\varepsilon}\log m\right)$, where $m$ is the number of variables and $\varepsilon$ is the target precision, without dependence on the sparsity $s$, and possibly without explicit dependence on the condition number $\kappa$. This is a significant improvement over previous quantum linear solvers, whose dependence on $\kappa$ and $s$ is at least linear. At the same time, when the rows have an arbitrary structure and at most $s$ nonzero entries, we obtain circuit depth $\mathcal{O}\left(\frac{1}{\varepsilon}\log s\right)$ using $\mathcal{O}(s)$ extra ancilla qubits, so the depth grows only logarithmically with the sparsity $s$. When the sparsity $s$ grows as $\mathcal{O}(\log m)$, our method can achieve an exponential improvement in circuit depth over existing quantum algorithms, while using (asymptotically) the same number of qubits.
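The classical Kaczmarz update the abstract describes (project the current iterate onto the hyperplane of one equation at a time) can be sketched as follows. This is a minimal NumPy illustration of the classical randomized iteration only, not of the paper's quantum circuit; the function and variable names are our own:

```python
import numpy as np

def kaczmarz(A, b, iters=500, seed=0):
    """Randomized Kaczmarz: repeatedly pick one row a_i and project the
    current iterate x onto the hyperplane {x : a_i . x = b_i}."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum("ij,ij->i", A, A)   # squared row norms ||a_i||^2
    probs = row_norms / row_norms.sum()       # sample rows prop. to ||a_i||^2
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        # enforce equation i exactly: orthogonal projection onto its hyperplane
        x += (b[i] - a_i @ x) / row_norms[i] * a_i
    return x

# toy consistent system with known solution
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x = kaczmarz(A, b)
```

Each step touches a single row of $A$, which is the low-memory property the abstract highlights; for a consistent system the iterates converge linearly to the solution.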
Related papers
- An Efficient Decomposition of the Carleman Linearized Burgers' Equation [0.0]
We use the Carleman linearization method to map the nonlinear Burgers' equation into an infinite linear system of equations. This new finite linear system is embedded into a larger system of equations with the key property that its matrix can be decomposed into a linear combination. This is the first efficient data-loading method for a Carleman linearized system.
arXiv Detail & Related papers (2025-05-01T04:18:28Z) - Quantum linear system algorithm with optimal queries to initial state preparation [0.0]
Quantum algorithms for linear systems produce the solution state $A^{-1}|b\rangle$ by querying two oracles.
We present a quantum linear system algorithm making $\mathbf{\Theta}\left(1/\sqrt{p}\right)$ queries to $O_b$, which is optimal in the success probability.
In various applications such as solving differential equations, we can further improve the dependence on $p$ using a block preconditioning scheme.
arXiv Detail & Related papers (2024-10-23T18:00:01Z) - Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems [9.30306458153248]
Despite being a key bottleneck in many machine learning tasks, the cost of solving large linear systems has proven challenging to quantify. We consider a fine-grained notion of complexity for solving linear systems, motivated by applications where the data exhibits low-dimensional structure. We give an algorithm based on the Sketch-and-Project paradigm that solves the linear system $Ax = b$, that is, finds $\bar{x}$ such that $\|A\bar{x} - b\| \le \epsilon \|b\|$, in time
arXiv Detail & Related papers (2024-05-09T14:56:49Z) - A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\;\textsf{nnz}(A)\;\frac{R_1}{\ldots}\right)$
arXiv Detail & Related papers (2023-11-07T13:26:20Z) - Randomized adiabatic quantum linear solver algorithm with optimal complexity scaling and detailed running costs [0.0]
We develop a quantum linear solver algorithm based on adiabatic quantum computing. The algorithm is improved to the optimal scaling $O(\kappa\log(1/\epsilon))$, an exponential improvement in $\epsilon$. We introduce a cheaper randomized walk operator method replacing Hamiltonian simulation.
arXiv Detail & Related papers (2023-05-19T00:07:32Z) - Spacetime-Efficient Low-Depth Quantum State Preparation with Applications [93.56766264306764]
We show that a novel deterministic method for preparing arbitrary quantum states requires fewer quantum resources than previous methods.
We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations.
arXiv Detail & Related papers (2023-03-03T18:23:20Z) - Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N\times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Quantum-classical algorithms for skewed linear systems with optimized Hadamard test [10.386115383285288]
We discuss hybrid quantum-classical algorithms for skewed linear systems for over-determined and under-determined cases.
Our input model is such that the columns or rows of the matrix defining the linear system are given via quantum circuits of poly-logarithmic depth.
We present an algorithm for the special case of a factorized linear system with run time poly-logarithmic in the respective dimensions.
arXiv Detail & Related papers (2020-09-28T12:59:27Z) - Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We show that HSDMPG can attain an $\mathcal{O}\big(1/\sqrt{tn}\big)$ rate, which is at the order of the excess error of a learning model. For generic loss functions, we prove that HSDMPG attains a bound of the same order.
arXiv Detail & Related papers (2020-09-18T02:18:44Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Learning nonlinear dynamical systems from a single trajectory [102.60042167341956]
We introduce algorithms for learning nonlinear dynamical systems of the form $x_{t+1}=\sigma(\Theta^{\star}x_t)+\varepsilon_t$.
We give an algorithm that recovers the weight matrix $\Theta^{\star}$ from a single trajectory with optimal sample complexity and linear running time.
arXiv Detail & Related papers (2020-04-30T10:42:48Z) - Second-order Conditional Gradient Sliding [70.88478428882871]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm. The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations. It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.