Revisiting Quantum Algorithms for Linear Regressions: Quadratic Speedups
without Data-Dependent Parameters
- URL: http://arxiv.org/abs/2311.14823v1
- Date: Fri, 24 Nov 2023 19:41:28 GMT
- Title: Revisiting Quantum Algorithms for Linear Regressions: Quadratic Speedups
without Data-Dependent Parameters
- Authors: Zhao Song, Junze Yin, Ruizhe Zhang
- Abstract summary: We develop a quantum algorithm that runs in $\widetilde{O}(\epsilon^{-1}\sqrt{n}d^{1.5}) + \mathrm{poly}(d/\epsilon)$ time.
It provides a quadratic quantum speedup in $n$ over the classical lower bound without any dependence on data-dependent parameters.
- Score: 10.602399256297032
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Linear regression is one of the most fundamental linear algebra problems.
Given a dense matrix $A \in \mathbb{R}^{n \times d}$ and a vector $b$, the goal
is to find $x'$ such that
$ \| Ax' - b \|_2^2 \leq (1+\epsilon) \min_{x} \| A x - b \|_2^2 $. The best
classical algorithm takes $O(nd) + \mathrm{poly}(d/\epsilon)$ time [Clarkson
and Woodruff STOC 2013, Nelson and Nguyen FOCS 2013]. On the other hand,
quantum linear regression algorithms can achieve exponential quantum speedups,
as shown in [Wang Phys. Rev. A 96, 012335, Kerenidis and Prakash ITCS 2017,
Chakraborty, Gily{\'e}n and Jeffery ICALP 2019]. However, the running times of
these algorithms depend on some quantum linear algebra-related parameters, such
as $\kappa(A)$, the condition number of $A$. In this work, we develop a quantum
algorithm that runs in $\widetilde{O}(\epsilon^{-1}\sqrt{n}d^{1.5}) +
\mathrm{poly}(d/\epsilon)$ time. It provides a quadratic quantum speedup in $n$
over the classical lower bound without any dependence on data-dependent
parameters. In addition, we show that our result can be generalized to multiple
regression and ridge linear regression.
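
For context, the classical $O(nd) + \mathrm{poly}(d/\epsilon)$ baseline [Clarkson and Woodruff STOC 2013] follows the sketch-and-solve paradigm: compress the $n \times d$ problem to $\mathrm{poly}(d/\epsilon)$ rows with a subspace embedding, then solve the small problem exactly. A minimal sketch of that classical baseline (not the paper's quantum algorithm) is below; the Gaussian sketch and the constant in the sketch size are simplifying assumptions, whereas Clarkson and Woodruff use a sparse CountSketch to reach $O(\mathrm{nnz}(A))$ sketching time.

```python
import numpy as np

def sketch_and_solve(A, b, eps=0.1, seed=0):
    """Approximate least squares via sketch-and-solve.

    Compresses the n x d problem to m x d with m = O(d / eps) rows,
    then solves the compressed problem exactly. The Gaussian sketch
    and the constant 4 below are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = int(np.ceil(4 * d / eps))                  # sketch size (illustrative constant)
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian subspace embedding
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: compare the sketched residual with the optimal one.
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 20))
b = rng.standard_normal(5000)
x_sk = sketch_and_solve(A, b, eps=0.1)
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_opt - b))  # ~1 + O(eps)
```

Note that forming a dense Gaussian sketch costs $O(mnd)$; the $O(nd)$ classical bound quoted above relies on the sparse embeddings of Clarkson-Woodruff and Nelson-Nguyen.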
Related papers
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p}n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Quantum speedups for linear programming via interior point methods [1.8434042562191815]
We describe a quantum algorithm for solving a linear program with $n$ inequality constraints on $d$ variables.
Our algorithm speeds up the Newton step in the state-of-the-art interior point method of Lee and Sidford.
arXiv Detail & Related papers (2023-11-06T16:00:07Z) - Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^{2}}{\varepsilon^{2}}(\sqrt{k}d + \log(Nd))\big)$ time.
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - Fast and Practical Quantum-Inspired Classical Algorithms for Solving
Linear Systems [11.929584800629673]
We propose fast and practical quantum-inspired classical algorithms for solving linear systems.
Our main contribution is the application of the heavy ball momentum method to these quantum-inspired algorithms; a minimal classical momentum sketch appears after this list.
arXiv Detail & Related papers (2023-07-13T08:46:19Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Active Sampling for Linear Regression Beyond the $\ell_2$ Norm [70.49273459706546]
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound of $O(d^{\max\{1,p/2\}}\log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z) - Quantum Algorithms and Lower Bounds for Linear Regression with Norm
Constraints [0.0]
We show that for Lasso we can get a quadratic quantum speedup in terms of $d$ by speeding up the cost-per-iteration of the Frank-Wolfe algorithm; a classical Frank-Wolfe sketch appears after this list.
For Ridge, the best quantum algorithms are linear in $d$, as are the best classical algorithms.
arXiv Detail & Related papers (2021-10-25T16:26:37Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Sublinear classical and quantum algorithms for general matrix games [11.339580074756189]
We investigate sublinear classical and quantum algorithms for matrix games.
For any fixed $q \in (1,2)$, we solve the matrix game where $\mathcal{X}$ is an $\ell_q$-norm unit ball within additive error.
We also provide a corresponding sublinear quantum algorithm that solves the same task.
arXiv Detail & Related papers (2020-12-11T17:36:33Z) - An improved quantum-inspired algorithm for linear regression [15.090593955414137]
We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm.
We show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting.
arXiv Detail & Related papers (2020-09-15T17:58:25Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
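
For the Lasso entry above: classically, each Frank-Wolfe iteration over an $\ell_1$-ball costs one gradient evaluation plus an argmax over $d$ coordinates, and it is this per-iteration search that a quantum subroutine can speed up in $d$. A minimal classical sketch follows; the radius, step schedule, and iteration count are generic textbook choices, not that paper's quantum algorithm.

```python
import numpy as np

def frank_wolfe_lasso(A, b, R=1.0, iters=200):
    """Minimize ||A x - b||_2^2 over the l1-ball {x : ||x||_1 <= R}.

    Per iteration: (1) gradient A^T (A x - b); (2) the linear
    minimization oracle, an argmax over |gradient| coordinates that
    picks a vertex +-R e_i of the l1-ball (the step a quantum search
    can accelerate in d); (3) a convex-combination update.
    """
    d = A.shape[1]
    x = np.zeros(d)
    for t in range(iters):
        grad = A.T @ (A @ x - b)
        i = int(np.argmax(np.abs(grad)))   # linear minimization oracle
        s = np.zeros(d)
        s[i] = -R * np.sign(grad[i])       # best vertex of the l1-ball
        gamma = 2.0 / (t + 2.0)            # standard Frank-Wolfe step schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Usage: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[0] = 0.5
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(frank_wolfe_lasso(A, b, R=0.5)[:5], 3))
```

Because each iterate is a convex combination of at most $t$ vertices, iterates stay sparse, keeping the per-iteration cost dominated by the gradient and the coordinate search.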