Almost Optimal Proper Learning and Testing Polynomials
- URL: http://arxiv.org/abs/2202.03207v1
- Date: Mon, 7 Feb 2022 14:15:20 GMT
- Title: Almost Optimal Proper Learning and Testing Polynomials
- Authors: Nader H. Bshouty
- Abstract summary: Our algorithm makes $$q_U=\left(\frac{s}{\epsilon}\right)^{\frac{\log \beta}{\beta}+O(\frac{1}{\beta})}+ \tilde O\left(s\right)\left(\log\frac{1}{\epsilon}\right)\log n$$ queries.
All previous algorithms have query complexity at least quadratic in $s$ and linear in $1/epsilon$.
- Score: 0.11421942894219898
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We give the first almost optimal polynomial-time proper learning algorithm for
Boolean sparse multivariate polynomials under the uniform distribution. For
an $s$-sparse polynomial over $n$ variables and $\epsilon=1/s^\beta$, $\beta>1$,
our algorithm makes $$q_U=\left(\frac{s}{\epsilon}\right)^{\frac{\log
\beta}{\beta}+O(\frac{1}{\beta})}+ \tilde
O\left(s\right)\left(\log\frac{1}{\epsilon}\right)\log n$$ queries. Notice that
our query complexity is sublinear in $1/\epsilon$ and almost linear in $s$. All
previous algorithms have query complexity at least quadratic in $s$ and linear
in $1/\epsilon$.
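To see the sublinearity in $1/\epsilon$ concretely (a quick sanity check, not taken from the paper): substituting $\epsilon=1/s^\beta$ gives $s/\epsilon=(1/\epsilon)^{(\beta+1)/\beta}$, so the first term of $q_U$ equals
$$\left(\frac{1}{\epsilon}\right)^{\frac{\beta+1}{\beta}\left(\frac{\log\beta}{\beta}+O\left(\frac{1}{\beta}\right)\right)}=\left(\frac{1}{\epsilon}\right)^{\frac{(\beta+1)(\log\beta+O(1))}{\beta^2}},$$
and the exponent tends to $0$ as $\beta$ grows, so for large $\beta$ this term is $(1/\epsilon)^{o(1)}$, in particular sublinear in $1/\epsilon$.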
We then prove the almost tight lower bound
$$q_L=\left(\frac{s}{\epsilon}\right)^{\frac{\log
\beta}{\beta}+\Omega(\frac{1}{\beta})}+
\Omega\left(s\right)\left(\log\frac{1}{\epsilon}\right)\log n.$$
Applying the reduction of~\cite{Bshouty19b} to the above algorithm, we give
the first almost optimal polynomial-time tester for $s$-sparse polynomials. Our
tester, for $\beta>3.404$, makes $$\tilde O\left(\frac{s}{\epsilon}\right)$$
queries.
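For concreteness, the sketch below (illustrative Python with hypothetical helper names; not the paper's algorithm) sets up the objects the abstract refers to: an $s$-sparse polynomial over GF(2), a membership-query oracle for it, and a Monte Carlo estimate of the uniform-distribution distance that defines $\epsilon$-farness, which is what a tester must distinguish from sparsity.

```python
# Illustrative sketch only (hypothetical helpers; NOT the paper's algorithm):
# the membership-query model for a sparse polynomial over GF(2), and the
# uniform-distribution distance that defines epsilon-farness for testing.
import random

def make_sparse_poly(monomials):
    """Return f(x) = XOR over `monomials` of the AND of the listed variables.

    `monomials` is a list of variable-index tuples; its length is the sparsity s.
    """
    def f(x):
        value = 0
        for mono in monomials:
            term = 1
            for i in mono:
                term &= x[i]
            value ^= term
        return value
    return f

def estimate_distance(f, g, n, samples=10_000):
    """Monte Carlo estimate of Pr_{x uniform on {0,1}^n}[f(x) != g(x)]."""
    disagreements = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        disagreements += f(x) != g(x)
    return disagreements / samples

# Example: a 3-sparse polynomial on n = 6 variables, queried as a black box.
n = 6
target = make_sparse_poly([(0,), (1, 2), (3, 4, 5)])  # membership-query oracle
hypothesis = make_sparse_poly([(0,), (1, 2)])         # misses one monomial
print(estimate_distance(target, hypothesis, n))       # approx. 0.125 = 1/8
```

Under these definitions the dropped monomial $x_3x_4x_5$ disagrees with the target on a uniformly random point with probability $2^{-3}=1/8$, which the estimator recovers.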
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - On the Complexity of Finite-Sum Smooth Optimization under the
Polyak-{\L}ojasiewicz Condition [14.781921087738967]
This paper considers the optimization problem of the form $\min_{{\bf x}\in\mathbb{R}^d} f({\bf x})\triangleq \frac{1}{n}\sum_{i=1}^n f_i({\bf x})$, where $f(\cdot)$ satisfies the Polyak--{\L}ojasiewicz (PL) condition with parameter $\mu$ and $\{f_i(\cdot)\}_{i=1}^n$ are $L$-mean-squared smooth.
arXiv Detail & Related papers (2024-02-04T17:14:53Z) - Quantum and classical query complexities of functions of matrices [0.0]
We show that for any continuous function $f(x):[-1,1]\rightarrow [-1,1]$, the quantum query complexity of computing $\bra{i} f(A) \ket{j}$ to within $\pm \varepsilon/4$ is lower bounded by $\Omega(\widetilde{\deg}_\varepsilon(f))$.
arXiv Detail & Related papers (2023-11-13T00:45:41Z) - Krylov Methods are (nearly) Optimal for Low-Rank Approximation [8.017116107657206]
We show that any algorithm requires $\Omega\left(\log(n)/\varepsilon^{1/2}\right)$ matrix-vector products, exactly matching the upper bound obtained by Krylov methods.
Our lower bound addresses Open Question 1 of~[Woo14], providing evidence for the lack of progress on algorithms for Spectral LRA.
arXiv Detail & Related papers (2023-04-06T16:15:19Z) - The Approximate Degree of DNF and CNF Formulas [95.94432031144716]
For every $\delta>0$, we construct CNF and DNF formulas of polynomial size with approximate degree $\Omega(n^{1-\delta})$, essentially matching the trivial upper bound of $n$.
We show that for every $\delta>0$, these models require $\Omega(n^{1-\delta})$, $\Omega((n/4^k k^2)^{1-\delta})$, and $\Omega((n/4^k k^2)^{1-\delta})$, respectively.
arXiv Detail & Related papers (2022-09-04T10:01:39Z) - Optimal Clustering with Noisy Queries via Multi-Armed Bandit [19.052525950282234]
Motivated by many applications, we study clustering with a faulty oracle.
We propose a new polynomial-time algorithm with $O(\frac{n}{\delta^2} + \text{poly}(k,\frac{1}{\delta}, \log n))$ queries.
Our main ingredient is an interesting connection between our problem and multi-armed bandits.
arXiv Detail & Related papers (2022-07-12T08:17:29Z) - Low-degree learning and the metric entropy of polynomials [44.99833362998488]
We prove that any (deterministic or randomized) algorithm which learns $\mathscr{F}_{n,d}$ with $L_2$-accuracy $\varepsilon$ requires at least $\Omega((1-\sqrt{\varepsilon})2^d\log n)$ queries, which follows from a two-sided estimate on the metric entropy $\log \mathsf{M}(\mathscr{F}_{n,d},\|\cdot\|_{L_2},\varepsilon)$ whose lower side is $c(1-\varepsilon)2^d\log n$.
arXiv Detail & Related papers (2022-03-17T23:52:08Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Active Sampling for Linear Regression Beyond the $\ell_2$ Norm [70.49273459706546]
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound of $O(d^{\max\{1,p/2\}}\log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z) - An Optimal Separation of Randomized and Quantum Query Complexity [67.19751155411075]
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}(1+\log n)^{\ell-1}}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z) - Optimal $\ell_1$ Column Subset Selection and a Fast PTAS for Low Rank
Approximation [0.0]
We give the first polynomial-time column subset selection-based $\ell_p$ low rank approximation algorithm sampling $\tilde{O}(k)$ columns.
We extend our results to obtain tight upper and lower bounds for column subset selection-based $\ell_p$ low rank approximation for any $1 < p < 2$.
arXiv Detail & Related papers (2020-07-20T17:50:30Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we give an improved algorithm with better sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Revisiting EXTRA for Smooth Distributed Optimization [70.65867695317633]
We give a sharp complexity analysis for EXTRA with the improved $O\left(\left(\frac{L}{\mu}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$ complexity.
The communication complexities of our accelerated EXTRA are only worse by factors of $\left(\log\frac{L}{\mu(1-\sigma_2(W))}\right)$ and $\left(\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$, respectively.
arXiv Detail & Related papers (2020-02-24T08:07:08Z) - Quantum algorithms for the Goldreich-Levin learning problem [3.8849433921565284]
The Goldreich-Levin algorithm was originally proposed for a cryptographic purpose and was later applied to learning.
It takes $\mathrm{poly}(n,\frac{1}{\epsilon}\log\frac{1}{\delta})$ time to output the vectors $w$ with Walsh coefficients $S(w)\geq\epsilon$, with probability at least $1-\delta$.
In this paper, a quantum algorithm for this problem is given with query complexity $O(\frac{\log\frac{1}{\delta}}{\epsilon^4})$, which is independent of $n$.
arXiv Detail & Related papers (2019-12-31T14:52:36Z)