A Quadratic Sample Complexity Reduction for Agnostic Learning via Quantum Algorithms
- URL: http://arxiv.org/abs/2310.15576v8
- Date: Fri, 19 Apr 2024 02:30:08 GMT
- Title: A Quadratic Sample Complexity Reduction for Agnostic Learning via Quantum Algorithms
- Authors: Daniel Z. Zanger
- Abstract summary: We find a new sample complexity upper bound of $O((\mbox{log}(\frac{1}{\delta}))/\epsilon)$ as $\epsilon,\delta\rightarrow 0$.
For general agnostic learning, the quantum speedup in the rate of learning is quadratic in $\epsilon^{-1}$.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using quantum algorithms, we obtain, for accuracy $\epsilon>0$ and confidence $1-\delta,0<\delta<1,$ a new sample complexity upper bound of $O((\mbox{log}(\frac{1}{\delta}))/\epsilon)$ as $\epsilon,\delta\rightarrow 0$ for a general agnostic learning model, provided the hypothesis class is of finite cardinality. This greatly improves upon a corresponding sample complexity of asymptotic order $\Theta((\mbox{log}(\frac{1}{\delta}))/\epsilon^{2})$ known in the literature to be attainable by means of classical (non-quantum) algorithms for an agnostic learning problem also with hypothesis set of finite cardinality (see, for example, Arunachalam and de Wolf (2018) and the classical statistical learning theory references cited there). Thus, for general agnostic learning, the quantum speedup in the rate of learning that we achieve with respect to these results is quadratic in $\epsilon^{-1}$.
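To make the comparison concrete, the following is a minimal Python sketch (an illustration, not code from the paper) that tabulates the classical $\Theta((\mbox{log}(\frac{1}{\delta}))/\epsilon^{2})$ rate against the quantum $O((\mbox{log}(\frac{1}{\delta}))/\epsilon)$ upper bound; leading constants are assumed equal to 1, since the abstract specifies only asymptotic orders.

```python
# A minimal numeric sketch (not from the paper): it tabulates the classical
# Theta(log(1/delta)/epsilon^2) sample complexity against the quantum
# O(log(1/delta)/epsilon) upper bound claimed above. Leading constants are
# assumed to be 1 purely for illustration; the paper gives asymptotic orders only.
import math

def classical_samples(epsilon: float, delta: float) -> float:
    """Classical agnostic-learning rate: log(1/delta) / epsilon^2."""
    return math.log(1.0 / delta) / epsilon ** 2

def quantum_samples(epsilon: float, delta: float) -> float:
    """Quantum upper bound from the paper: log(1/delta) / epsilon."""
    return math.log(1.0 / delta) / epsilon

delta = 0.05  # confidence 1 - delta = 0.95
for eps in (0.1, 0.01, 0.001):
    c = classical_samples(eps, delta)
    q = quantum_samples(eps, delta)
    # The ratio c / q equals 1/eps: the quadratic-in-1/epsilon speedup.
    print(f"epsilon={eps}: classical ~ {c:,.0f}, quantum ~ {q:,.0f}, ratio = {c / q:,.0f}")
```

The printed ratio grows as $\epsilon^{-1}$, which is precisely the quadratic speedup in the rate of learning stated above.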
Related papers
- Quantum spectral method for gradient and Hessian estimation [4.193480001271463]
Gradient descent is one of the most basic algorithms for solving continuous optimization problems.
We propose a quantum algorithm that returns an $\varepsilon$-approximation of its gradient with query complexity $\widetilde{O}(1/\varepsilon)$.
We also propose two quantum algorithms for Hessian estimation, aiming to improve quantum analogs of Newton's method.
arXiv Detail & Related papers (2024-07-04T11:03:48Z) - Agnostically Learning Multi-index Models with Queries [54.290489524576756]
We study the power of query access for the task of agnostic learning under the Gaussian distribution.
We show that query access gives significant runtime improvements over random examples for agnostically learning MIMs.
arXiv Detail & Related papers (2023-12-27T15:50:47Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^{d}$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^{2})$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Information-Computation Tradeoffs for Learning Margin Halfspaces with
Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z) - Succinct quantum testers for closeness and $k$-wise uniformity of probability distributions [2.3466828785520373]
We explore potential quantum speedups for the fundamental problem of testing the properties of closeness and $k$-wise uniformity of probability distributions.
We show that the quantum query complexities for $\ell_{1}$- and $\ell_{2}$-closeness testing are $O(\sqrt{n}/\varepsilon)$ and $O(\sqrt{n^{k}}/\varepsilon)$, respectively.
We propose the first quantum algorithm for this problem with query complexity $O(\sqrt{n^{k}}/\varepsilon)$.
arXiv Detail & Related papers (2023-04-25T15:32:37Z) - Quantum Algorithms for Sampling Log-Concave Distributions and Estimating
Normalizing Constants [8.453228628258778]
We develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants.
We exploit quantum analogs of the Monte Carlo method and quantum walks.
We also prove a $1/\epsilon^{1-o(1)}$ quantum lower bound for estimating normalizing constants.
arXiv Detail & Related papers (2022-10-12T19:10:43Z) - Learning many-body Hamiltonians with Heisenberg-limited scaling [3.460138063155115]
We propose the first algorithm to achieve the Heisenberg limit for learning an interacting $N$-qubit local Hamiltonian.
After a total evolution time of $\mathcal{O}(\epsilon^{-1})$, the proposed algorithm can efficiently estimate any parameter in the $N$-qubit Hamiltonian to $\epsilon$-error with high probability.
arXiv Detail & Related papers (2022-10-06T16:30:51Z) - Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N\times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z) - Enhancing the Quantum Linear Systems Algorithm using Richardson
Extrapolation [0.8057006406834467]
We present a quantum algorithm to solve systems of linear equations of the form $A\mathbf{x}=\mathbf{b}$.
The algorithm achieves an exponential improvement with respect to $N$ over classical methods.
arXiv Detail & Related papers (2020-09-09T18:00:09Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)