Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models
- URL: http://arxiv.org/abs/2012.07774v1
- Date: Mon, 14 Dec 2020 18:14:08 GMT
- Title: Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models
- Authors: Ilias Diakonikolas and Daniel M. Kane
- Abstract summary: We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
- Score: 56.98280399449707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Let $V$ be any vector space of multivariate degree-$d$ homogeneous
polynomials with co-dimension at most $k$, and $S$ be the set of points where
all polynomials in $V$ {\em nearly} vanish. We establish a qualitatively
optimal upper bound on the size of $\epsilon$-covers for $S$, in the
$\ell_2$-norm. Roughly speaking, we show that there exists an $\epsilon$-cover
for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$. Our result is
constructive, yielding an algorithm that computes such an $\epsilon$-cover in
time $\mathrm{poly}(M)$.
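For concreteness, the statement can be written out as follows. This is a paraphrase of the abstract: the near-vanishing threshold $\delta$ and the normalization of the domain to the unit sphere are assumptions made here for illustration, not details spelled out above.
\[
S \;=\; \{\, x \in \mathbb{S}^{n-1} \;:\; |p(x)| \le \delta \ \text{ for all } p \in V \,\},
\qquad
C \text{ is an } \epsilon\text{-cover for } S \;\iff\; \forall x \in S \ \exists y \in C : \ \|x - y\|_2 \le \epsilon .
\]
The structural result asserts that such a cover exists with
\[
|C| \;\le\; M \;=\; (k/\epsilon)^{O_d(k^{1/d})},
\]
and that it can be computed in time $\mathrm{poly}(M)$.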
Building on our structural result, we obtain significantly improved learning
algorithms for several fundamental high-dimensional probabilistic models with
hidden variables. These include density and parameter estimation for
$k$-mixtures of spherical Gaussians (with known common covariance), PAC
learning one-hidden-layer ReLU networks with $k$ hidden units (under the
Gaussian distribution), density and parameter estimation for $k$-mixtures of
linear regressions (with Gaussian covariates), and parameter estimation for
$k$-mixtures of hyperplanes. Our algorithms run in time {\em quasi-polynomial}
in the parameter $k$. Previous algorithms for these problems had running times
exponential in $k^{\Omega(1)}$.
At a high level, our algorithms for all these learning problems work as
follows: By computing the low-degree moments of the hidden parameters, we are
able to find a vector space of polynomials that nearly vanish on the unknown
parameters. Our structural result allows us to compute a quasi-polynomial sized
cover for the set of hidden parameters, which we exploit in our learning
algorithms.
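As a rough illustration of the first step of this pipeline (a minimal sketch, not the authors' implementation), the snippet below uses degree-2 moments of a $k$-mixture of spherical Gaussians with identity covariance to extract a space of linear polynomials that nearly vanish on the unknown means. All variable names, the sample sizes, and the thresholds are hypothetical choices for the example.

# Minimal sketch (assumed setup, not the paper's algorithm): find linear forms
# that nearly vanish on the hidden means of a spherical Gaussian mixture.
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 20, 3, 200_000          # ambient dimension, components, samples (assumed)

# Unknown parameters: k means in R^n, uniform mixing weights.
means = rng.normal(size=(k, n))
weights = np.full(k, 1.0 / k)

# Draw samples from the mixture with identity covariance per component.
labels = rng.choice(k, size=N, p=weights)
samples = means[labels] + rng.normal(size=(N, n))

# Degree-2 moment step: E[x x^T] - I = sum_i w_i mu_i mu_i^T has rank <= k.
M = samples.T @ samples / N - np.eye(n)

# Directions v with v^T M v ~ 0 give linear polynomials <v, x> that nearly
# vanish on every hidden mean.
eigvals, eigvecs = np.linalg.eigh(M)
vanishing_dirs = eigvecs[:, eigvals < 0.1]    # ad hoc threshold for this sketch

print("codimension of nearly-vanishing space:", n - vanishing_dirs.shape[1])
print("max |<v, mu_i>| over the space:",
      np.abs(vanishing_dirs.T @ means.T).max())

In the paper the analogous step works with higher-degree moments, producing a vector space of degree-$d$ polynomials of codimension at most $k$ that nearly vanish on the hidden parameters; the quasi-polynomial-size cover of the near-zero set of that space is then what the learning algorithms search over.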
Related papers
- Efficient Statistics With Unknown Truncation, Polynomial Time Algorithms, Beyond Gaussians [7.04316974339151]
We study the estimation of distributional parameters when samples are shown only if they fall in some unknown set.
We develop tools that may be of independent interest, including a reduction from PAC learning with positive and unlabeled samples to PAC learning with positive and negative samples.
arXiv Detail & Related papers (2024-10-02T15:21:07Z) - Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*\left(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle\right)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ of arbitrary link function with a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d
arXiv Detail & Related papers (2024-06-03T17:56:58Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Memory-Constrained Algorithms for Convex Optimization via Recursive
Cutting-Planes [23.94542304111204]
We give the first class of algorithms that provides a positive trade-off between gradient descent and cutting-plane methods in any regime with $\epsilon \leq 1/\sqrt{d}$.
In the regime $\epsilon \leq d^{-\Omega(d)}$, our algorithm with $p=d$ achieves the information-theoretic optimal memory usage and improves the oracle complexity of gradient descent.
arXiv Detail & Related papers (2023-06-16T17:00:51Z) - Most Neural Networks Are Almost Learnable [52.40331776572531]
We show that for any fixed $\epsilon>0$ and depth $i$, there is a poly-time algorithm that learns random Xavier networks of depth $i$.
The algorithm runs in time and sample complexity of $(\bar{d})^{\mathrm{poly}(\epsilon^{-1})}$, where $\bar{d}$ is the size of the network.
For some cases of sigmoid and ReLU-like activations the bound can be improved to $(\bar{d})^{\mathrm{polylog}(\epsilon
arXiv Detail & Related papers (2023-05-25T22:27:42Z) - List-Decodable Covariance Estimation [1.9290392443571387]
We give the first polynomial-time algorithm for \emph{list-decodable covariance estimation}.
Our result implies the first polynomial-time \emph{exact} algorithm for list-decodable linear regression and subspace recovery.
arXiv Detail & Related papers (2022-06-22T09:38:06Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarially robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture within information-theoretically near-optimal error of $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof for clusters of a Gaussian mixture, which can be captured by the constant-degree Sum of Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z) - Learning Polynomials of Few Relevant Dimensions [12.122488725065388]
Polynomial regression is a basic primitive in learning and statistics.
We give an algorithm that learns the polynomial within accuracy $\epsilon$ with sample complexity that is roughly $N = O_{r,d}(n \log^2(1/\epsilon) (\log n)^d)$ and time complexity $O_{r,d}(N n^2)$.
arXiv Detail & Related papers (2020-04-28T18:00:18Z)