Quantum Boosting
- URL: http://arxiv.org/abs/2002.05056v2
- Date: Fri, 14 Aug 2020 23:06:52 GMT
- Title: Quantum Boosting
- Authors: Srinivasan Arunachalam, Reevu Maity
- Abstract summary: Boosting is a technique that converts a weak and inaccurate machine learning algorithm into a strong accurate learning algorithm.
We show how quantum techniques can improve the time complexity of classical AdaBoost.
- Score: 8.93449755281201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Suppose we have a weak learning algorithm $\mathcal{A}$ for a Boolean-valued
problem: $\mathcal{A}$ produces hypotheses whose bias $\gamma$ is small, only
slightly better than random guessing (this could, for instance, be due to
implementing $\mathcal{A}$ on a noisy device). Can we boost the performance of
$\mathcal{A}$ so that $\mathcal{A}$'s output is correct on $2/3$ of the inputs?
Boosting is a technique that converts a weak and inaccurate machine learning
algorithm into a strong accurate learning algorithm. The AdaBoost algorithm by
Freund and Schapire (for which they were awarded the G\"odel prize in 2003) is
one of the widely used boosting algorithms, with many applications in theory
and practice. If we have a $\gamma$-weak learner for a Boolean concept
class $C$ that takes time $R(C)$, then the time complexity of AdaBoost scales
as $VC(C)\cdot poly(R(C), 1/\gamma)$, where $VC(C)$ is the $VC$-dimension of
$C$. In this paper, we show how quantum techniques can improve the time
complexity of classical AdaBoost. To this end, suppose we have a $\gamma$-weak
quantum learner for a Boolean concept class $C$ that takes time $Q(C)$. We
introduce a quantum boosting algorithm whose complexity scales as
$\sqrt{VC(C)}\cdot poly(Q(C),1/\gamma);$ thereby achieving a quadratic quantum
improvement over classical AdaBoost in terms of $VC(C)$.
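For reference, the classical baseline that the quantum algorithm improves upon can be written in a few lines. The sketch below is a minimal, generic AdaBoost loop in Python (it is not code from the paper); `weak_learner(X, y, w)` is a hypothetical interface that is assumed to return a $\pm 1$-valued hypothesis whose weighted error under the distribution `w` is at most $1/2 - \gamma$.

```python
# Minimal classical AdaBoost sketch (baseline only, not the paper's quantum algorithm).
# Assumes labels y are in {-1, +1} and weak_learner(X, y, w) returns a callable
# hypothesis h with weighted error at most 1/2 - gamma under the distribution w.
import numpy as np

def adaboost(X, y, weak_learner, T):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start from the uniform distribution
    hypotheses, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, w)              # gamma-weak hypothesis for current w
        pred = h(X)                            # predictions in {-1, +1}
        err = float(np.sum(w * (pred != y)))   # weighted training error
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this hypothesis
        w = w * np.exp(-alpha * y * pred)      # up-weight misclassified points
        w = w / w.sum()
        hypotheses.append(h)
        alphas.append(alpha)

    def boosted(X_new):                        # sign of the weighted majority vote
        votes = sum(a * h(X_new) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)
    return boosted
```

With a $\gamma$-weak learner, $T = O(\log(1/\varepsilon)/\gamma^2)$ rounds of this loop suffice to drive the training error below $\varepsilon$; the paper's quadratic improvement concerns the $VC(C)$ factor in the overall running time, not this outer structure.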
Related papers
- Quantum spectral method for gradient and Hessian estimation [4.193480001271463]
Gradient descent is one of the most basic algorithms for solving continuous optimization problems.
We propose a quantum algorithm that returns an $\varepsilon$-approximation of its gradient with query complexity $\widetilde{O}(1/\varepsilon)$.
We also propose two quantum algorithms for Hessian estimation, aiming to improve quantum analogs of Newton's method.
arXiv Detail & Related papers (2024-07-04T11:03:48Z) - The Cost of Parallelizing Boosting [1.9235143628887907]
We study the cost of parallelizing weak-to-strong boosting algorithms for learning.
We show that even "slight" parallelization of boosting requires an exponential blow-up in the complexity of training.
arXiv Detail & Related papers (2024-02-23T07:03:52Z) - A Scalable Algorithm for Individually Fair K-means Clustering [77.93955971520549]
We present a scalable algorithm for the individually fair ($p$, $k$)-clustering problem introduced by Jung et al. and Mahabadi et al.
A clustering is then called individually fair if it has centers within distance $\delta(x)$ of $x$ for each $x \in P$ (a short sketch of this check appears after this list).
We show empirically that not only is our algorithm much faster than prior work, but it also produces lower-cost solutions.
arXiv Detail & Related papers (2024-02-09T19:01:48Z) - Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^{2}}{\varepsilon^{2}}(\sqrt{k}d + \log(Nd))\big)$.
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Fast and Practical Quantum-Inspired Classical Algorithms for Solving
Linear Systems [11.929584800629673]
We propose fast and practical quantum-inspired classical algorithms for solving linear systems.
Our main contribution is the application of the heavy ball momentum method to quantum-inspired classical algorithms for solving linear systems.
arXiv Detail & Related papers (2023-07-13T08:46:19Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Learning a Latent Simplex in Input-Sparsity Time [58.30321592603066]
We consider the problem of learning a latent $k$-vertex simplex $K \subset \mathbb{R}^{d}$, given access to $A \in \mathbb{R}^{d \times n}$.
We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$.
arXiv Detail & Related papers (2021-05-17T16:40:48Z) - Quantum learning algorithms imply circuit lower bounds [7.970954821067043]
We establish the first general connection between the design of quantum algorithms and circuit lower bounds.
Our proof builds on several works in learning theory, pseudorandomness, and computational complexity.
arXiv Detail & Related papers (2020-12-03T14:03:20Z) - Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation [30.137884459159107]
We consider the question of learning the $Q$-function in a sample-efficient manner for reinforcement learning with continuous state and action spaces.
We develop a simple, iterative learning algorithm that finds an $\epsilon$-Schmidt $Q$-function with sample complexity of $\widetilde{O}\big(\frac{1}{\epsilon^{\max(d_1, d_2)+2}}\big)$ when the optimal $Q$-function has low rank $r$ and the discounting factor $\gamma$ is below a certain threshold.
arXiv Detail & Related papers (2020-06-11T00:55:35Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
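As referenced in the individually fair clustering entry above, the fairness condition itself is easy to state in code. The sketch below (not the paper's algorithm) checks whether a set of centers is individually fair; the radius choice, $\delta(x)$ equal to the distance from $x$ to its $\lceil n/k \rceil$-th nearest point, follows the Jung et al. formulation and is an assumption here, and all function names are illustrative.

```python
# Sketch: check the individual-fairness condition for a given clustering.
# Assumption: delta(x) is the distance from x to its ceil(n/k)-th nearest
# point (counting x itself), as in the Jung et al. formulation.
import numpy as np

def fairness_radii(points, k):
    n = len(points)
    m = int(np.ceil(n / k))
    # pairwise distances; row i holds distances from point i to every point
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists.sort(axis=1)            # ascending; index 0 is the self-distance 0
    return dists[:, m - 1]        # delta(x) for every point

def is_individually_fair(points, centers, k):
    # every point must have some center within its radius delta(x)
    delta = fairness_radii(points, k)
    d_to_centers = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return bool(np.all(d_to_centers.min(axis=1) <= delta))
```

An algorithm such as the one summarized above would then aim to keep the $k$-means cost low while satisfying (an approximate relaxation of) this per-point constraint.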
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences of its use.