TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm
- URL: http://arxiv.org/abs/2202.07172v1
- Date: Tue, 15 Feb 2022 03:49:28 GMT
- Title: TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm
- Authors: Yi Hao, Ayush Jain, Alon Orlitsky, Vaishakh Ravindrakumar
- Abstract summary: One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates the number of pieces near-optimally, hence helps approach the best possible approximation.
- Score: 64.13217062232874
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Approximating distributions from their samples is a canonical
statistical-learning problem. One of its most powerful and successful
modalities approximates every distribution to an $\ell_1$ distance essentially
at most a constant times larger than its closest $t$-piece degree-$d$
polynomial, where $t\ge1$ and $d\ge0$. Letting $c_{t,d}$ denote the smallest
such factor, clearly $c_{1,0}=1$, and it can be shown that $c_{t,d}\ge 2$ for
all other $t$ and $d$. Yet current computationally efficient algorithms show
only $c_{t,1}\le 2.25$ and the bound rises quickly to $c_{t,d}\le 3$ for $d\ge
9$. We derive a near-linear-time and essentially sample-optimal estimator that
establishes $c_{t,d}=2$ for all $(t,d)\ne(1,0)$. Additionally, for many
practical distributions, the lowest approximation distance is achieved by
polynomials with vastly varying number of pieces. We provide a method that
estimates this number near-optimally, hence helps approach the best possible
approximation. Experiments combining the two techniques confirm improved
performance over existing methodologies.
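As a concrete illustration of the benchmark that the factor $c_{t,d}$ is measured against, the sketch below fits a $t$-piece, degree-$d$ polynomial to an empirical histogram and reports its $\ell_1$ distance to that histogram. It is only a stand-in, not the paper's estimator: it uses a fixed equal-width partition and per-piece least squares, whereas the guarantee above is stated against the best piecewise polynomial over all partitions, and every parameter choice here is an illustrative assumption.

    import numpy as np

    def piecewise_poly_l1(samples, t=4, d=2, bins=200):
        # Illustrative sketch, not the paper's estimator: fit a t-piece,
        # degree-d polynomial to an empirical histogram and report the
        # l1 distance between the fit and the histogram.
        hist, edges = np.histogram(samples, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        width = edges[1] - edges[0]
        fitted = np.empty_like(hist)
        # Equal-width pieces are an assumption; the true benchmark also optimizes the partition.
        for piece in np.array_split(np.arange(bins), t):
            coeffs = np.polyfit(centers[piece], hist[piece], deg=d)
            fitted[piece] = np.polyval(coeffs, centers[piece])
        # Riemann-sum approximation of the l1 distance between the two densities.
        return float(np.sum(np.abs(hist - fitted)) * width)

    rng = np.random.default_rng(0)
    print(piecewise_poly_l1(rng.normal(size=10_000)))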
Related papers
- Statistical-Computational Trade-offs for Density Estimation [60.81548752871115]
We show that for a broad class of data structures their bounds cannot be significantly improved.
This is a novel statistical-computational trade-off for density estimation.
arXiv Detail & Related papers (2024-10-30T15:03:33Z) - Testing Support Size More Efficiently Than Learning Histograms [0.18416014644193066]
We show that testing can be done more efficiently than learning the histogram of the distribution $p$.
The proof relies on an analysis of Chebyshev approximations outside the range where they are designed to be good approximations.
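For background (a standard fact about Chebyshev polynomials, not a claim taken from the paper): $T_k(x)=\cos(k\arccos x)$ satisfies $|T_k(x)|\le 1$ on $[-1,1]$ but grows as $T_k(x)=\cosh(k\,\mathrm{arccosh}\,x)$ for $x>1$, which is the kind of behaviour outside the design range that such an analysis can exploit.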
arXiv Detail & Related papers (2024-10-24T17:05:34Z) - Robust Distribution Learning with Local and Global Adversarial Corruptions [17.22168727622332]
We develop an efficient finite-sample algorithm with error bounded by $\sqrt{\varepsilon k} + \rho + \tilde{O}(d\sqrt{k}\,n^{-1/(k \lor 2)})$ when $P$ has bounded covariance.
Our efficient procedure relies on a novel trace norm approximation of an ideal yet intractable 2-Wasserstein projection estimator.
arXiv Detail & Related papers (2024-06-10T17:48:36Z) - Statistically Near-Optimal Hypothesis Selection [33.83129262033921]
We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem.
This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity.
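For context, a $2$-approximation guarantee of this kind has the standard form $\|\hat{h}-p\|_1 \le 2\min_{1\le i\le n}\|h_i-p\|_1 + \varepsilon$, where $p$ is the unknown distribution, $h_1,\dots,h_n$ are the candidate hypotheses, and $\hat{h}$ is the selected hypothesis; the paper's exact sample complexity is not reproduced here.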
arXiv Detail & Related papers (2021-08-17T21:11:20Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean
Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
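For context (the standard setup, not specific to this paper): in list-decodable mean estimation only an $\alpha < 1/2$ fraction of the samples is promised to come from the true distribution, so no single estimate can succeed; the algorithm instead outputs a short list, typically of size $O(1/\alpha)$, at least one entry of which is close to the true mean.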
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
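Here, in the standard sense, an $\epsilon$-cover of a set $S$ is a set $C$ such that every element of $S$ lies within distance $\epsilon$ of some element of $C$ (stated for context, not quoted from the paper).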
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - List-Decodable Mean Estimation in Nearly-PCA Time [50.79691056481693]
We study the fundamental task of list-decodable mean estimation in high dimensions.
Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.
A variant of our algorithm has runtime $\widetilde{O}(ndk)$ for all $k$, at the expense of an $O(\sqrt{\log k})$ factor in the recovery guarantee.
arXiv Detail & Related papers (2020-11-19T17:21:37Z) - Convergence of Sparse Variational Inference in Gaussian Processes
Regression [29.636483122130027]
We show that a method with an overall computational cost of $\mathcal{O}(N(\log N)^{2D}(\log\log N)^2)$ can be used to perform inference.
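For context, sparse variational inference for GP regression with $M$ inducing points has a standard per-evaluation cost of $\mathcal{O}(NM^2 + M^3)$ (a well-known fact, not taken from this abstract), so a bound of the quoted form corresponds to only poly-logarithmically many inducing points in $N$ sufficing for suitably smooth kernels in $D$ dimensions.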
arXiv Detail & Related papers (2020-08-01T19:23:34Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
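For context, an $\epsilon$-optimal policy $\hat{\pi}$ is one whose value function satisfies $V^{\hat{\pi}}(s) \ge V^{\star}(s) - \epsilon$ for every state $s$ (the standard definition, stated here for completeness).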
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Efficient Distance Approximation for Structured High-Dimensional
Distributions via Learning [31.552581550603005]
We design efficient distance approximation algorithms for several classes of structured high-dimensional distributions.
Our results are the first efficient distance approximation algorithms for these well-studied problems.
arXiv Detail & Related papers (2020-02-13T07:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.