Contrastive Moments: Unsupervised Halfspace Learning in Polynomial Time
- URL: http://arxiv.org/abs/2311.01435v1
- Date: Thu, 2 Nov 2023 17:51:10 GMT
- Title: Contrastive Moments: Unsupervised Halfspace Learning in Polynomial Time
- Authors: Xinyuan Cao, Santosh S. Vempala
- Abstract summary: We give a polynomial-time algorithm for learning high-dimensional halfspaces with margins in $d$-dimensional space to within desired TV distance.
Notably, our algorithm does not need labels and establishes unique (and efficient) identifiability of the hidden halfspace.
We improve on this by providing polytime guarantees based on Total Variation (TV) distance, in place of existing moment-bound guarantees that can be super-polynomial.
- Score: 8.419603619167813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give a polynomial-time algorithm for learning high-dimensional halfspaces
with margins in $d$-dimensional space to within desired TV distance when the
ambient distribution is an unknown affine transformation of the $d$-fold
product of an (unknown) symmetric one-dimensional logconcave distribution, and
the halfspace is introduced by deleting at least an $\epsilon$ fraction of the
data in one of the component distributions. Notably, our algorithm does not
need labels and establishes the unique (and efficient) identifiability of the
hidden halfspace under this distributional assumption. The sample and time
complexity of the algorithm are polynomial in the dimension and $1/\epsilon$.
The algorithm uses only the first two moments of suitable re-weightings of the
empirical distribution, which we call contrastive moments; its analysis uses
classical facts about generalized Dirichlet polynomials and relies crucially on
a new monotonicity property of the moment ratio of truncations of logconcave
distributions. Such algorithms, based only on first and second moments, were
suggested in earlier work, but hitherto eluded rigorous guarantees.
Prior work addressed the special case when the underlying distribution is
Gaussian via Non-Gaussian Component Analysis. We improve on this by providing
polytime guarantees based on Total Variation (TV) distance, in place of
existing moment-bound guarantees that can be super-polynomial. Our work is also
the first to go beyond Gaussians in this setting.
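To make the "contrastive moments" idea concrete, below is a minimal Python sketch of the general recipe the abstract describes: bring the sample to isotropic position, re-weight the empirical distribution, and read a candidate halfspace normal off the first and second moments of the re-weighted sample. This is not the paper's algorithm; the particular weighting function and the direction-selection rule are illustrative assumptions only.
```python
import numpy as np

def contrastive_moment_direction(X, weight_fn=None):
    """Return a unit vector that is a candidate normal for a hidden halfspace.

    Illustrative sketch only, not the paper's algorithm: (1) isotropic position,
    (2) re-weight the empirical distribution, (3) extract a direction from the
    first/second moments of the re-weighted sample. The specific weighting
    below is a placeholder assumption.
    """
    _, d = X.shape

    # (1) Put the data in isotropic position (zero mean, identity covariance),
    # which undoes the unknown affine transformation up to a rotation.
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    Y = (X - mu) @ W

    # (2) Re-weight the empirical distribution. A Gaussian damping of the
    # whitened norms is used here purely as an example re-weighting.
    if weight_fn is None:
        weight_fn = lambda r2: np.exp(-r2 / 2.0)
    w = weight_fn(np.sum(Y ** 2, axis=1))
    w = w / w.sum()

    # (3) "Contrastive" moments: the re-weighted mean and the deviation of the
    # re-weighted second moment from isotropy; deleting an epsilon-fraction of
    # mass along one coordinate of the product distribution leaves a trace here.
    m1 = Y.T @ w                        # re-weighted first moment, shape (d,)
    M2 = (Y * w[:, None]).T @ Y         # re-weighted second moment, shape (d, d)
    C = M2 - np.eye(d)

    # (4) Candidate direction: the re-weighted mean if it is large, otherwise
    # the top eigenvector (by absolute eigenvalue) of the deviation matrix.
    eigvals, eigvecs = np.linalg.eigh(C)
    top = np.argmax(np.abs(eigvals))
    v = m1 if np.linalg.norm(m1) > np.abs(eigvals[top]) else eigvecs[:, top]

    # Map the direction back to the original coordinates (W is symmetric).
    v = W @ v
    return v / np.linalg.norm(v)
```
On data where an $\epsilon$-fraction of one component has been deleted and an affine transform applied, a sketch like this should return a direction correlated with the hidden normal under its stated assumptions; the paper's contribution is the analysis showing that suitably chosen re-weightings recover the halfspace to within TV distance with polynomially many samples.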
Related papers
- Learning general Gaussian mixtures with efficient score matching [16.06356123715737]
We study the problem of learning mixtures of $k$ Gaussians in $d$ dimensions.
We make no separation assumptions on the underlying mixture components.
We give an algorithm that draws $d^{\mathrm{poly}(k/\varepsilon)}$ samples from the target mixture, runs in time polynomial in the number of samples, and constructs a sampler.
arXiv Detail & Related papers (2024-04-29T17:30:36Z) - Sampling and estimation on manifolds using the Langevin diffusion [45.57801520690309]
Two estimators of linear functionals of $\mu_\phi$ based on the discretized Markov process are considered.
Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion.
arXiv Detail & Related papers (2023-12-22T18:01:11Z) - Robust Estimation for Nonparametric Families via Generative Adversarial
Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z) - A Polynomial Time Algorithm for Learning Halfspaces with Tsybakov Noise [55.45544638410858]
We study the problem of PAC learning homogeneous halfspaces in the presence of Tsybakov noise.
Our algorithm learns the true halfspace within any accuracy $\epsilon$.
arXiv Detail & Related papers (2020-10-04T22:19:06Z) - Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin
Algorithm [11.80267432402723]
We consider the task of sampling with respect to a log concave probability distribution.
The target distribution can be seen as a minimizer of the Kullback-Leibler divergence defined on the Wasserstein space.
We show that if the potential is strongly convex, the complexity of PSGLA is $O(1/\varepsilon^2)$ in terms of the 2-Wasserstein distance.
arXiv Detail & Related papers (2020-06-16T15:57:09Z) - Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z) - Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarial-robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture to within information-theoretically near-optimal error of $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof of clusters from a Gaussian mixture, which can be captured by the constant-degree Sum of Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z) - Outlier-Robust Clustering of Non-Spherical Mixtures [5.863264019032882]
We give the first outlier-robust efficient algorithm for clustering a mixture of $k$ statistically separated $d$-dimensional Gaussians ($k$-GMMs).
Our results extend to clustering mixtures of arbitrary affine transforms of the uniform distribution on the $d$-dimensional unit sphere.
arXiv Detail & Related papers (2020-05-06T17:24:27Z) - SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
SURF is an algorithm for approximating distributions by piecewise polynomials.
It outperforms state-of-the-art algorithms in experiments.
arXiv Detail & Related papers (2020-02-22T01:03:33Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.