Linear Regression by Quantum Amplitude Estimation and its Extension to
Convex Optimization
- URL: http://arxiv.org/abs/2105.13511v2
- Date: Thu, 26 Aug 2021 01:58:15 GMT
- Title: Linear Regression by Quantum Amplitude Estimation and its Extension to
Convex Optimization
- Authors: Kazuya Kaneko, Koichi Miyamoto, Naoyuki Takeda, Kazuyoshi Yoshino
- Abstract summary: We propose a new quantum algorithm for linear regression, which has complexity $O(\epsilon^{-1})$ and keeps the logarithmic dependence on the number of data points $N_D$.
In this method, we overcome the bottleneck parts of the calculation, which take the form of sums over the data points and therefore have complexity proportional to $N_D$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear regression is a basic and widely-used methodology in data analysis. It
is known that some quantum algorithms efficiently perform least squares linear
regression of an exponentially large data set. However, if we obtain values of
the regression coefficients as classical data, the complexity of the existing
quantum algorithms can be larger than the classical method. This is because it
depends strongly on the error tolerance $\epsilon$: the best among the
existing proposals is $O(\epsilon^{-2})$. In this paper, we propose a new
quantum algorithm for linear regression, which has complexity
$O(\epsilon^{-1})$ and keeps the logarithmic dependence on the number of data
points $N_D$. In this method, we overcome the bottleneck parts of the
calculation, which take the form of sums over the data points and therefore
have complexity proportional to $N_D$, by using quantum amplitude estimation,
and perform the other parts classically. Additionally, we generalize our
method to a class of convex optimization problems.
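A minimal classical sketch of where quantum amplitude estimation would enter, assuming the standard least-squares normal equations (the variable names and sizes below are illustrative, not the paper's notation): the only $N_D$-dependent work is a pair of sums over the data points, which the paper proposes to estimate by amplitude estimation with $O(\epsilon^{-1})$ query cost, while the small $d \times d$ linear solve stays classical.

```python
# Minimal classical sketch (not the paper's quantum circuit): ordinary least
# squares arranged so that the only N_D-dependent work is a pair of sums over
# the data points. The paper proposes estimating exactly such sums by quantum
# amplitude estimation, with O(1/epsilon) oracle calls for additive error
# epsilon, while the small d x d linear solve remains classical.
import numpy as np

def least_squares_via_sums(X, y):
    """X: (N_D, d) design matrix, y: (N_D,) targets."""
    # Bottleneck 1: Gram matrix A = sum_k x_k x_k^T   (cost ~ N_D classically)
    A = X.T @ X
    # Bottleneck 2: b = sum_k y_k x_k                 (cost ~ N_D classically)
    b = X.T @ y
    # Classical part: solve the d x d normal equations A w = b
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=10_000)
print(least_squares_via_sums(X, y))   # approximately w_true
```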
Related papers
- Data subsampling for Poisson regression with pth-root-link [53.63838219437508]
We develop and analyze data subsampling techniques for Poisson regression.
In particular, we consider the Poisson generalized linear model with identity- and square-root-link functions.
arXiv Detail & Related papers (2024-10-30T10:09:05Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem in which the task is to recover an unknown signal from measurements corrupted by outliers.
Our proposed approach avoids the need for computationally expensive spectral methods, using a simple gradient step while handling outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Adaptive Learning for Quantum Linear Regression [10.445957451908695]
In a recent work, linear regression was formulated as a quadratic binary optimization problem.
This approach promises a computational time advantage for large datasets.
However, the quality of the solution is limited by the necessary use of a precision vector.
In this work, we focus on the practical challenge of improving the precision vector encoding.
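As a point of reference for the precision-vector encoding mentioned above, here is a toy sketch with a fixed (non-adaptive) precision vector, which is an assumption for illustration rather than the adaptive scheme studied in the paper:

```python
# Toy sketch of the precision-vector encoding (fixed, not the adaptive version
# studied in the paper): each regression weight w_j is expanded as
# w_j = sum_k p_k q_{jk} with binary variables q_{jk}, turning least squares
# ||X w - y||^2 into a QUBO in q.
import itertools
import numpy as np

def regression_qubo(X, y, p):
    d, m = X.shape[1], len(p)
    # K maps the stacked binary vector q (length d*m) to the weight vector w.
    K = np.kron(np.eye(d), p.reshape(1, -1))        # shape (d, d*m)
    G = K.T @ X.T @ X @ K                           # quadratic part
    c = -2.0 * K.T @ X.T @ y                        # linear part
    return G + np.diag(c), K                        # q_i^2 = q_i folds c onto the diagonal

def brute_force_min(Q):
    n = Q.shape[0]
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda q: np.array(q) @ Q @ np.array(q))
    return np.array(best)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = X @ np.array([0.75, -0.5])
p = np.array([1.0, 0.5, 0.25, -2.0])                # precision vector incl. a sign bit
Q, K = regression_qubo(X, y, p)
q = brute_force_min(Q)
print("recovered weights:", K @ q)                  # close to [0.75, -0.5]
```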
arXiv Detail & Related papers (2024-08-05T21:09:01Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Efficient and robust high-dimensional sparse logistic regression via
nonlinear primal-dual hybrid gradient algorithms [0.0]
We propose an iterative algorithm that provably computes a solution to a logistic regression problem regularized by an elastic net penalty.
This result improves on the known complexity bound of $O(\min(m^2 n, m n^2)\log(1/\epsilon))$ for first-order optimization methods.
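For reference, the optimization problem targeted above is elastic-net-regularized logistic regression; a small sketch using scikit-learn's SAGA solver (not the paper's nonlinear primal-dual hybrid gradient algorithm) shows the problem being solved:

```python
# Elastic-net-regularized logistic regression, solved here with scikit-learn's
# SAGA solver purely for illustration; the paper's nonlinear primal-dual hybrid
# gradient algorithm is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, m = 500, 50
X = rng.normal(size=(n, m))
w_true = np.zeros(m)
w_true[:5] = [2.0, -1.5, 1.0, -0.5, 0.5]           # sparse ground truth
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ w_true))).astype(int)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(X, y)
print("nonzero coefficients:", np.sum(np.abs(clf.coef_) > 1e-3))
```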
arXiv Detail & Related papers (2021-11-30T14:16:48Z) - PROMPT: Parallel Iterative Algorithm for $\ell_{p}$ norm linear
regression via Majorization Minimization with an application to
semi-supervised graph learning [0.0]
We consider the problem of $\ell_p$ norm linear regression, which has several applications such as in sparse recovery, data clustering, and semi-supervised learning.
We propose an iterative algorithm: Parallel IteRative AlgOrithM for $\ell_P$ norm regression via MajorizaTion Minimization (PROMPT).
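A generic majorization-minimization (iteratively reweighted least squares) iteration for $\ell_p$ regression with $1 < p \le 2$, sketched for intuition; this is the textbook quadratic majorizer, not necessarily the parallel PROMPT update itself:

```python
# Generic majorization-minimization (IRLS-style) iteration for l_p regression,
#   min_w sum_i |x_i^T w - y_i|^p,  1 < p <= 2,
# using the textbook quadratic majorizer of |r|^p. Sketched for intuition only;
# it is not claimed to be the parallel PROMPT update.
import numpy as np

def lp_regression_mm(X, y, p=1.3, iters=100, eps=1e-8):
    w = np.linalg.lstsq(X, y, rcond=None)[0]         # l_2 warm start
    for _ in range(iters):
        r = X @ w - y
        # Weights |r|^(p-2) turn the majorized objective into weighted least squares.
        s = (np.abs(r) + eps) ** (p - 2)
        Xw = X * s[:, None]                          # row-weighted design matrix
        w = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 0.0, -1.0]) + rng.laplace(scale=0.1, size=200)
print(lp_regression_mm(X, y, p=1.3))                 # close to [1, 0, -1]
```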
arXiv Detail & Related papers (2021-10-23T10:19:11Z) - Adiabatic Quantum Feature Selection for Sparse Linear Regression [0.17499351967216337]
We formulate feature selection for sparse linear regression as a QUBO problem and compare the quality of the QUBO solutions on synthetic and real-world datasets.
The results demonstrate the effectiveness of the proposed adiabatic quantum computing approach in finding the optimal solution.
arXiv Detail & Related papers (2021-06-04T09:14:01Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
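To make the generative model above concrete, here is a toy draw from $Y = \langle X, w^* \rangle + \epsilon$ with heavy-tailed noise, fitted with an ordinary Huber M-estimator as a baseline; this is illustrative only, not the nearly-linear-time estimators proposed in the paper:

```python
# Toy data from Y = <X, w*> + eps with heavy-tailed noise, fitted with a plain
# Huber M-estimator as a baseline; the paper's nearly-linear-time robust
# estimators are not reproduced here.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(5)
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
eps = rng.standard_t(df=2.1, size=n)                # heavy-tailed noise, finite variance
y = X @ w_star + eps

ols = LinearRegression().fit(X, y)
hub = HuberRegressor(max_iter=500).fit(X, y)
print("OLS error:  ", np.linalg.norm(ols.coef_ - w_star))
print("Huber error:", np.linalg.norm(hub.coef_ - w_star))
```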
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Learning nonlinear dynamical systems from a single trajectory [102.60042167341956]
We introduce algorithms for learning nonlinear dynamical systems of the form $x_{t+1}=\sigma(\Theta^{*} x_t)+\varepsilon_t$.
We give an algorithm that recovers the weight matrix $\Theta^{*}$ from a single trajectory with optimal sample complexity and linear running time.
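For intuition about the model class $x_{t+1}=\sigma(\Theta^{*} x_t)+\varepsilon_t$, here is a toy simulation with $\sigma=\tanh$ (an assumption made purely for illustration) and a plain gradient-descent fit of $\Theta$ on the one-step squared loss; this baseline is not the sample-optimal, linear-time algorithm of the paper:

```python
# Toy illustration of the model x_{t+1} = sigma(Theta* x_t) + eps_t, with
# sigma = tanh assumed purely for illustration: simulate one trajectory and
# recover Theta by gradient descent on the one-step squared loss. This is a
# plain baseline, not the paper's sample-optimal, linear-time algorithm.
import numpy as np

rng = np.random.default_rng(3)
d, T = 3, 2000
sigma = np.tanh
Theta_star = 0.6 * rng.normal(size=(d, d)) / np.sqrt(d)

# Simulate a single trajectory driven by Gaussian process noise.
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = sigma(Theta_star @ X[t]) + 0.2 * rng.normal(size=d)

# Gradient descent on  L(Theta) = (1/T) * sum_t ||sigma(Theta x_t) - x_{t+1}||^2.
Theta = np.zeros((d, d))
for _ in range(1000):
    Z = X[:-1] @ Theta.T                     # pre-activations, shape (T, d)
    err = sigma(Z) - X[1:]                   # one-step prediction residuals
    grad = (err * (1.0 - sigma(Z) ** 2)).T @ X[:-1] / T
    Theta -= 0.5 * grad

print("max entrywise error:", np.abs(Theta - Theta_star).max())
```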
arXiv Detail & Related papers (2020-04-30T10:42:48Z) - Efficient algorithms for multivariate shape-constrained convex
regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(n+1)d$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by a semismooth Newton method.
arXiv Detail & Related papers (2020-02-26T11:18:43Z)
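The QP referenced in the previous entry is the standard least-squares convex regression problem with first-order convexity constraints; a toy-scale formulation with cvxpy (illustrative only, and nowhere near the problem sizes the sGS-ADMM and pALM solvers are built for) looks like this:

```python
# The least-squares convex regression QP that these solvers target, written
# with cvxpy at toy scale purely for illustration; the paper's sGS-ADMM and
# pALM methods are designed for the large-n regime (~n^2 constraints) that
# this naive formulation cannot reach.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
n, d = 40, 2
X = rng.uniform(-1, 1, size=(n, d))
y = np.sum(X ** 2, axis=1) + 0.05 * rng.normal(size=n)   # noisy convex target

f = cp.Variable(n)         # fitted function values at the data points
g = cp.Variable((n, d))    # subgradients at the data points
# Convexity constraints: f(x_j) >= f(x_i) + <g_i, x_j - x_i> for all i != j.
constraints = [
    f[j] >= f[i] + cp.sum(cp.multiply(g[i], X[j] - X[i]))
    for i in range(n) for j in range(n) if i != j
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(f - y)), constraints)
prob.solve()
print("training RMSE:", np.sqrt(prob.value / n))
```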