Quantum Algorithms and Lower Bounds for Linear Regression with Norm
Constraints
- URL: http://arxiv.org/abs/2110.13086v1
- Date: Mon, 25 Oct 2021 16:26:37 GMT
- Title: Quantum Algorithms and Lower Bounds for Linear Regression with Norm
Constraints
- Authors: Yanlin Chen (QuSoft, CWI) and Ronald de Wolf (QuSoft, CWI and
Amsterdam)
- Abstract summary: We show that for Lasso we can get a quadratic quantum speedup in terms of $d$ by speeding up the cost-per-iteration of the Frank-Wolfe algorithm.
For Ridge the best quantum algorithms are linear in $d$, as are the best classical algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lasso and Ridge are important minimization problems in machine learning and
statistics. They are versions of linear regression with squared loss where the
vector $\theta\in\mathbb{R}^d$ of coefficients is constrained in either
$\ell_1$-norm (for Lasso) or in $\ell_2$-norm (for Ridge). We study the
complexity of quantum algorithms for finding $\varepsilon$-minimizers for these
minimization problems. We show that for Lasso we can get a quadratic quantum
speedup in terms of $d$ by speeding up the cost-per-iteration of the
Frank-Wolfe algorithm, while for Ridge the best quantum algorithms are linear
in $d$, as are the best classical algorithms.
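To make the accelerated iteration concrete, here is a minimal classical sketch of Frank-Wolfe for Lasso; the function name, step-size rule, and unit-radius default are illustrative assumptions. The point to notice is that each iteration only needs the largest-magnitude gradient coordinate, the search over $d$ entries that the paper's quantum algorithm performs quadratically faster.

```python
import numpy as np

def frank_wolfe_lasso(X, y, radius=1.0, n_iters=500):
    """Frank-Wolfe for min ||X @ theta - y||^2  s.t.  ||theta||_1 <= radius.

    Classical sketch: the linear minimization oracle over the l1-ball only
    needs the largest-magnitude gradient entry, which is the step sped up
    quadratically in d on a quantum computer.
    """
    n, d = X.shape
    theta = np.zeros(d)
    for t in range(n_iters):
        grad = 2.0 * X.T @ (X @ theta - y)         # gradient of the squared loss
        i = np.argmax(np.abs(grad))                # LMO: best vertex of the l1-ball
        s = np.zeros(d)
        s[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (t + 2.0)                    # standard Frank-Wolfe step size
        theta = (1.0 - gamma) * theta + gamma * s  # convex combination stays feasible
    return theta
```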
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z)
- Quantum Algorithms for the Pathwise Lasso [1.8058773918890538]
We present a novel quantum high-dimensional linear regression algorithm based on the classical LARS (Least Angle Regression) pathwise algorithm.
Our quantum algorithm provides the full regularisation path as the penalty term varies, and is quadratically faster per iteration under specific conditions.
We prove, via an approximate version of the KKT conditions and a duality gap, that the LARS algorithm is robust to errors.
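As a classical point of comparison, the regularisation path this quantum algorithm traces can be computed with scikit-learn's LARS implementation; the synthetic data below are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import lars_path

# Classical baseline for the object computed above: the full Lasso
# regularisation path (coefficients at every kink as the penalty varies).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(100)

alphas, active, coefs = lars_path(X, y, method="lasso")
print(coefs.shape)  # (n_features, n_knots): one column per kink of the path
```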
arXiv Detail & Related papers (2023-12-21T18:57:54Z)
- Revisiting Quantum Algorithms for Linear Regressions: Quadratic Speedups
without Data-Dependent Parameters [10.602399256297032]
We develop a quantum algorithm that runs in $\widetilde{O}(\epsilon^{-1}\sqrt{n}d^{1.5}) + \mathrm{poly}(d/\epsilon)$ time.
It provides a quadratic quantum speedup in $n$ over the classical lower bound without any dependence on data-dependent parameters.
arXiv Detail & Related papers (2023-11-24T19:41:28Z)
- A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\;\textsf{nnz}(A)\;\frac{R_1}{\varepsilon}\right)$ elementary gates.
arXiv Detail & Related papers (2023-11-07T13:26:20Z)
- Quantum speedups for linear programming via interior point methods [1.8434042562191815]
We describe a quantum algorithm for solving a linear program with $n$ inequality constraints on $d$ variables.
Our algorithm speeds up the Newton step in the state-of-the-art interior point method of Lee and Sidford.
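For intuition, the Newton step in question is the linear-system solve at the heart of a barrier interior point method. The following is a generic classical sketch, not Lee-Sidford's method (whose weighting is more sophisticated); the function name and the plain log-barrier are illustrative assumptions.

```python
import numpy as np

def newton_step_barrier(A, b, c, x, mu):
    """One Newton step on the log-barrier f(x) = c.x - mu * sum(log(b - A x)).

    Generic classical sketch; in practice the step must be damped to keep
    the slacks positive. Quantum IPMs target the linear solve below.
    """
    s = b - A @ x                              # slack vector, assumed > 0
    grad = c + mu * A.T @ (1.0 / s)            # gradient of the barrier objective
    hess = mu * A.T @ np.diag(1.0 / s**2) @ A  # Hessian of the barrier objective
    return x + np.linalg.solve(hess, -grad)    # the dominant per-iteration cost
```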
arXiv Detail & Related papers (2023-11-06T16:00:07Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Efficient quantum linear solver algorithm with detailed running costs [0.0]
We introduce a quantum linear solver algorithm combining ideas from adiabatic quantum computing with filtering techniques based on quantum signal processing.
Our protocol reduces the cost of quantum linear solvers over the state of the art by close to an order of magnitude for early implementations.
arXiv Detail & Related papers (2023-05-19T00:07:32Z)
- Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $1+\varepsilon$ approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
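A one-shot sketch-and-solve variant conveys the idea; note this is a simplification of the paper's iterative algorithm, and the plain Gaussian sketch and default sizes below are illustrative assumptions.

```python
import numpy as np

def sketched_ridge(X, y, lam, sketch_rows=200, seed=0):
    """Sketch-and-solve ridge: min ||S X w - S y||^2 + lam ||w||^2.

    One-shot simplification of an iterative sketching-based solver; a plain
    Gaussian sketch stands in for more refined constructions.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)  # sketch matrix
    SX, Sy = S @ X, S @ y                                             # compressed data
    return np.linalg.solve(SX.T @ SX + lam * np.eye(d), SX.T @ Sy)    # normal equations
```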
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
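As a baseline for what is being optimized, here is naive uniform scalar quantization of a learned coefficient vector; this sketch is not the paper's minimax-optimal scheme, and the function name and bit width are illustrative assumptions.

```python
import numpy as np

def quantize_uniform(theta, bits=4):
    """Round coefficients to a uniform grid with 2**bits levels.

    Naive baseline only; minimax-optimal quantization schemes differ.
    """
    lo, hi = float(theta.min()), float(theta.max())
    if hi == lo:                      # constant vector: nothing to quantize
        return theta.copy()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + step * np.round((theta - lo) / step)
```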
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Learning nonlinear dynamical systems from a single trajectory [102.60042167341956]
We introduce algorithms for learning nonlinear dynamical systems of the form $x_{t+1}=\sigma(\Theta^{\star}x_t)+\varepsilon_t$.
We give an algorithm that recovers the weight matrix $\Theta^{\star}$ from a single trajectory with optimal sample complexity and linear running time.
arXiv Detail & Related papers (2020-04-30T10:42:48Z)
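To illustrate the setup, the sketch below generates one trajectory of such a system with $\sigma=\mathrm{ReLU}$ and recovers $\Theta^{\star}$ by per-row least squares on the time steps where the ReLU was active. This is a crude baseline under assumed constants, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 5, 5000
theta_star = rng.standard_normal((d, d)) / (3 * np.sqrt(d))  # keeps dynamics stable

# One trajectory of x_{t+1} = relu(Theta* x_t) + noise
xs = np.zeros((T + 1, d))
for t in range(T):
    xs[t + 1] = np.maximum(theta_star @ xs[t], 0.0) + 0.05 * rng.standard_normal(d)

# Where coordinate i of the output is strictly positive, the ReLU was
# (most likely) active, so x_{t+1,i} ~ <theta*_i, x_t> + noise: plain
# least squares on those rows estimates row i of Theta*.
theta_hat = np.zeros((d, d))
for i in range(d):
    active = xs[1:, i] > 0
    theta_hat[i] = np.linalg.lstsq(xs[:-1][active], xs[1:][active, i], rcond=None)[0]

print("max entrywise error:", np.max(np.abs(theta_hat - theta_star)))
```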
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.