Learning GMMs with Nearly Optimal Robustness Guarantees
- URL: http://arxiv.org/abs/2104.09665v1
- Date: Mon, 19 Apr 2021 22:14:54 GMT
- Title: Learning GMMs with Nearly Optimal Robustness Guarantees
- Authors: Allen Liu, Ankur Moitra
- Abstract summary: The robustness guarantee is optimal up to polylogarithmic factors.
At the heart of our algorithm is a new way to relax a system of polynomial equations, which corresponds to solving an improper learning problem.
- Score: 25.313563663123354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we solve the problem of robustly learning a high-dimensional
Gaussian mixture model with $k$ components from $\epsilon$-corrupted samples up
to accuracy $\widetilde{O}(\epsilon)$ in total variation distance for any
constant $k$ and with mild assumptions on the mixture. This robustness
guarantee is optimal up to polylogarithmic factors. At the heart of our
algorithm is a new way to relax a system of polynomial equations which
corresponds to solving an improper learning problem where we are allowed to
output a Gaussian mixture model whose weights are low-degree polynomials.
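To make the input model concrete before the related work, here is a minimal Python sketch that draws samples from a $k$-component GMM and lets an adversary overwrite an $\epsilon$-fraction. The mixture parameters and the particular corruption used here are illustrative assumptions, not part of the paper, and a real adversary may act arbitrarily.

```python
import numpy as np

def sample_gmm(n, weights, means, covs, rng):
    """Draw n samples from a Gaussian mixture with the given parameters."""
    k = len(weights)
    comps = rng.choice(k, size=n, p=weights)
    return np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])

def corrupt(samples, eps, rng):
    """Adversarially replace an eps-fraction of samples (here: large outliers)."""
    n, d = samples.shape
    bad = rng.choice(n, size=int(eps * n), replace=False)
    samples = samples.copy()
    samples[bad] = 100.0 * rng.standard_normal((len(bad), d))  # arbitrary corruption
    return samples

rng = np.random.default_rng(0)
d, n, eps = 5, 10_000, 0.05
weights = [0.5, 0.5]
means = [np.zeros(d), 3.0 * np.ones(d)]
covs = [np.eye(d), 2.0 * np.eye(d)]
X = corrupt(sample_gmm(n, weights, means, covs, rng), eps, rng)
# A robust learner must recover the mixture to TV error ~O(eps) from X,
# no matter which points the adversary replaced.
```

The paper's guarantee says an estimator can achieve $\widetilde{O}(\epsilon)$ total variation error on such inputs, which matches the information-theoretic limit up to polylogarithmic factors.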
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z)
- Learning general Gaussian mixtures with efficient score matching [16.06356123715737]
We study the problem of learning mixtures of $k$ Gaussians in $d$ dimensions.
We make no separation assumptions on the underlying mixture components.
We give an algorithm that draws $d^{\mathrm{poly}(k/\varepsilon)}$ samples from the target mixture, runs in sample-polynomial time, and constructs a sampler; a sketch of the mixture score that such methods estimate follows below.
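As background for this entry, the following is a hedged closed-form sketch of the quantity score-matching methods estimate, the mixture score $\nabla_x \log p(x)$; parameter names are illustrative and this is not the paper's algorithm.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_score(x, weights, means, covs):
    """Score function grad_x log p(x) of a Gaussian mixture, in closed form."""
    dens = np.array([w * multivariate_normal.pdf(x, m, c)
                     for w, m, c in zip(weights, means, covs)])
    post = dens / dens.sum()                      # posterior over components
    # grad log p(x) = sum_i post_i * Sigma_i^{-1} (mu_i - x)
    return sum(p * np.linalg.solve(c, m - x)
               for p, m, c in zip(post, means, covs))

# Example: score of a two-component mixture at the origin.
w = [0.5, 0.5]
mu = [np.zeros(2), np.array([3.0, 0.0])]
Sig = [np.eye(2), np.eye(2)]
print(gmm_score(np.zeros(2), w, mu, Sig))
```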
arXiv Detail & Related papers (2024-04-29T17:30:36Z)
- Learning Mixtures of Gaussians Using Diffusion Models [9.118706387430883]
We give a new algorithm for learning mixtures of $k$ Gaussians to TV error $\varepsilon$.
Our approach is analytic and relies on the framework of diffusion models.
arXiv Detail & Related papers (2024-04-29T17:00:20Z)
- Near-Optimal No-Regret Learning for Correlated Equilibria in Multi-Player General-Sum Games [104.74734408204749]
We show that if all agents in a multi-player general-sum normal-form game employ Optimistic Multiplicative Weights Update (OMWU), the external regret of every player is $O(\mathrm{polylog}(T))$ after $T$ repetitions of the game.
We extend this result from external regret to internal regret and swap regret, thereby establishing uncoupled learning dynamics that converge to an approximate correlated equilibrium; a minimal OMWU sketch follows below.
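For illustration, here is a minimal self-play sketch of the OMWU update in a two-player game; the paper treats multi-player general-sum games, and the payoff matrices and step size below are assumptions.

```python
import numpy as np

def omwu(payoffs, eta=0.1, T=1000):
    """Optimistic MWU self-play in a two-player normal-form game.
    payoffs = (A, B): A is the row player's payoff matrix, B the column player's."""
    A, B = payoffs
    n, m = A.shape
    x, y = np.ones(n) / n, np.ones(m) / m
    gx_prev, gy_prev = np.zeros(n), np.zeros(m)
    for _ in range(T):
        gx, gy = A @ y, B.T @ x            # current payoff vectors
        # Optimistic step: use 2*current - previous as the predicted payoff.
        x *= np.exp(eta * (2 * gx - gx_prev)); x /= x.sum()
        y *= np.exp(eta * (2 * gy - gy_prev)); y /= y.sum()
        gx_prev, gy_prev = gx, gy
    return x, y

rng = np.random.default_rng(2)
A, B = rng.random((3, 3)), rng.random((3, 3))
x, y = omwu((A, B))
```

In self-play with a small step size, the time-averaged play approaches an approximate equilibrium; the entry's contribution is the polylogarithmic regret analysis of exactly this kind of dynamic.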
arXiv Detail & Related papers (2021-11-11T01:19:53Z)
- Robustly Learning Mixtures of $k$ Arbitrary Gaussians [47.40835932474677]
We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.
Our main tools are an efficient partial-clustering algorithm that relies on the sum-of-squares method, and a novel tensor decomposition algorithm that allows errors in both Frobenius norm and low-rank terms.
arXiv Detail & Related papers (2020-12-03T17:54:03Z)
- Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance; and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian. A sketch of this generative model appears below.
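To make the robust variant of this model concrete, here is a minimal sketch of setting (ii) with an $\epsilon$-fraction of adversarially corrupted responses, paired with a naive iterative-trimming baseline. The dimensions, noise level, and the trimming heuristic are all illustrative assumptions, not the paper's nearly-linear-time estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 5000, 20, 0.05
X = rng.standard_normal((n, d))             # sub-Gaussian covariates (setting ii)
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)
bad = rng.choice(n, size=int(eps * n), replace=False)
y[bad] += 50.0                              # corrupt an eps-fraction of responses

# Naive baseline (not the paper's estimator): iteratively trim the largest residuals.
w = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(10):
    r = np.abs(y - X @ w)
    keep = r <= np.quantile(r, 1 - eps)     # drop the eps-fraction of worst fits
    w = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
print(np.linalg.norm(w - w_star))           # error of the trimmed refit
```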
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
- Robust Learning of Mixtures of Gaussians [13.173307471333617]
If $X$ is an evenly weighted mixture of two arbitrary $d$-dimensional Gaussians, we devise a polynomial-time algorithm that, given access to samples from $X$ an $\epsilon$-fraction of which have been adversarially corrupted, learns $X$ to error $\mathrm{poly}(\epsilon)$ in total variation distance.
arXiv Detail & Related papers (2020-07-12T05:15:50Z)
- Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarially robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture within information-theoretically near-optimal error $\widetilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof of clusters from a Gaussian mixture, which can be captured by the constant-degree sum-of-squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime of $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner; a generic multiplicative-weights sketch follows this entry.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)
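To close the list, here is a generic, hedged sketch of the multiplicative weights (Hedge) update that the entry above adapts; the loss matrix and step size are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def hedge(losses, eta=0.5):
    """Multiplicative weights (Hedge) over experts; losses has shape (T, p)."""
    T, p = losses.shape
    w = np.ones(p)
    total = 0.0
    for t in range(T):
        prob = w / w.sum()
        total += prob @ losses[t]           # learner's expected loss this round
        w *= np.exp(-eta * losses[t])       # downweight experts that did badly
    return total, w / w.sum()

losses = np.random.default_rng(3).random((200, 10))
avg_loss, final_probs = hedge(losses)
```

The classical guarantee is that the learner's cumulative loss tracks the best single expert up to an additive $O(\sqrt{T \log p})$ term, which is what makes such updates attractive for structure learning with many candidate neighbors.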
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all of its content) and is not responsible for any consequences of its use.