Improved quantum algorithm for A-optimal projection
- URL: http://arxiv.org/abs/2006.05745v2
- Date: Thu, 5 Nov 2020 08:31:15 GMT
- Title: Improved quantum algorithm for A-optimal projection
- Authors: Shi-Jie Pan, Lin-Chun Wan, Hai-Ling Liu, Qing-Le Wang, Su-Juan Qin,
Qiao-Yan Wen and Fei Gao
- Abstract summary: This paper corrects the time complexity of Duan \emph{et al}.'s algorithm to $O(\frac{\kappa^{4s}\sqrt{k^s}}{\epsilon^{s}}\mathrm{polylog}^s(\frac{mn}{\epsilon}))$.
Our algorithm achieves at least a polynomial speedup compared to Duan \emph{et al}.'s algorithm.
- Score: 4.248054546466641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dimensionality reduction (DR) algorithms, which reduce the dimensionality of
a given data set while preserving the information of the original data set as
well as possible, play an important role in machine learning and data mining.
Duan \emph{et al}. proposed a quantum version of the A-optimal projection
algorithm (AOP) for dimensionality reduction [Phys. Rev. A 99, 032311 (2019)]
and claimed that the algorithm has exponential speedups on the dimensionality
of the original feature space $n$ and the dimensionality of the reduced feature
space $k$ over the classical algorithm. In this paper, we correct the time
complexity of Duan \emph{et al}.'s algorithm to $O(\frac{\kappa^{4s}\sqrt{k^s}}
{\epsilon^{s}}\mathrm{polylog}^s (\frac{mn}{\epsilon}))$, where $\kappa$ is the
condition number of a matrix related to the original data set, $s$ is the
number of iterations, $m$ is the number of data points and $\epsilon$ is the
desired precision of the output state. Since the time complexity has an
exponential dependence on $s$, the quantum algorithm can only be beneficial for
high dimensional problems with a small number of iterations $s$. To get a
further speedup, we propose an improved quantum AOP algorithm with time
complexity $O(\frac{s \kappa^6
\sqrt{k}}{\epsilon}\mathrm{polylog}(\frac{nm}{\epsilon}) + \frac{s^2
\kappa^4}{\epsilon}\mathrm{polylog}(\frac{\kappa k}{\epsilon}))$ and space
complexity $O(\log_2(nk/\epsilon)+s)$. Although its space complexity is
slightly worse, our algorithm achieves at least a polynomial speedup compared
to Duan \emph{et al}.'s algorithm. Moreover, our algorithm shows exponential
speedups in $n$ and $m$ over the classical algorithm when $\kappa$, $k$, and
$1/\epsilon$ are all $O(\mathrm{polylog}(nm))$.
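The contrast between the exponential dependence on $s$ in the corrected bound and the polynomial dependence in the improved bound can be made concrete by evaluating both formulas, up to constant factors, for hypothetical parameter values. A minimal sketch, treating each $\mathrm{polylog}$ factor as a single $\log_2$ term (an assumption, not the paper's exact cost model):

```python
import math

def duan_complexity(kappa, k, s, m, n, eps):
    """Corrected runtime of Duan et al.'s quantum AOP (up to constants):
    O(kappa^{4s} * sqrt(k^s) / eps^s * polylog^s(mn/eps))."""
    polylog = math.log2(m * n / eps)
    return (kappa ** (4 * s)) * math.sqrt(k ** s) / (eps ** s) * polylog ** s

def improved_complexity(kappa, k, s, m, n, eps):
    """Runtime of the improved quantum AOP (up to constants):
    O(s*kappa^6*sqrt(k)/eps * polylog(nm/eps)
      + s^2*kappa^4/eps * polylog(kappa*k/eps))."""
    return (s * kappa**6 * math.sqrt(k) / eps * math.log2(n * m / eps)
            + s**2 * kappa**4 / eps * math.log2(kappa * k / eps))

# Illustrative (hypothetical) parameters: modest kappa and k, growing s.
for s in (1, 2, 4, 8):
    d = duan_complexity(kappa=2, k=4, s=s, m=10**6, n=10**6, eps=0.01)
    i = improved_complexity(kappa=2, k=4, s=s, m=10**6, n=10**6, eps=0.01)
    print(f"s={s}: Duan et al. ~ {d:.2e}, improved ~ {i:.2e}")
```

Even with a small condition number $\kappa=2$, the first bound multiplies by a large factor each iteration, while the improved bound grows only quadratically in $s$, which is why the improved algorithm remains useful when many iterations are needed.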
Related papers
- Quantum spectral method for gradient and Hessian estimation [4.193480001271463]
Gradient descent is one of the most basic algorithms for solving continuous optimization problems.
We propose a quantum algorithm that returns an $\varepsilon$-approximation of its gradient with query complexity $\widetilde{O}(1/\varepsilon)$.
We also propose two quantum algorithms for Hessian estimation, aiming to improve quantum analogs of Newton's method.
arXiv Detail & Related papers (2024-07-04T11:03:48Z) - Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^{2}}{\varepsilon^{2}}(\sqrt{k}d + \log(Nd))\big)$.
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Fast and Practical Quantum-Inspired Classical Algorithms for Solving
Linear Systems [11.929584800629673]
We propose fast and practical quantum-inspired classical algorithms for solving linear systems.
Our main contribution is the application of the heavy ball momentum method to quantum-inspired classical algorithms for solving linear systems.
arXiv Detail & Related papers (2023-07-13T08:46:19Z) - Fast and Efficient Matching Algorithm with Deadline Instances [7.613259578185218]
We first introduce a market model with $\mathrm{deadline}$.
We present two optimized algorithms (\textsc{FastGreedy} and \textsc{FastPostponedGreedy}).
At the same time, our algorithms run faster than the original two algorithms.
arXiv Detail & Related papers (2023-05-15T05:21:20Z) - Private estimation algorithms for stochastic block models and mixture
models [63.07482515700984]
General tools for designing efficient private estimation algorithms.
First efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Quantum speedups for dynamic programming on $n$-dimensional lattice
graphs [0.11470070927586015]
We show a quantum algorithm with complexity $\widetilde{O}(T_D^n)$, where $T_D < D+1$.
While the best known classical algorithm runs in time $\mathrm{poly}(m,n)\,D^n$, the time complexity of our quantum algorithm is $\mathrm{poly}(m,n)\log n \cdot T_D^n$.
arXiv Detail & Related papers (2021-04-29T14:50:03Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - An Improved Cutting Plane Method for Convex Optimization, Convex-Concave
Games and its Applications [28.54236118020831]
We propose a new cutting plane algorithm that uses an optimal $O(n \log(\kappa))$ evaluations of the oracle.
We also provide evidence that the $n^2$ time per evaluation cannot be improved and thus our running time is optimal.
arXiv Detail & Related papers (2020-04-08T20:56:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all information) and is not responsible for any consequences of its use.