SQ Lower Bounds for Learning Mixtures of Linear Classifiers
- URL: http://arxiv.org/abs/2310.11876v1
- Date: Wed, 18 Oct 2023 10:56:57 GMT
- Title: SQ Lower Bounds for Learning Mixtures of Linear Classifiers
- Authors: Ilias Diakonikolas, Daniel M. Kane and Yuxin Sun
- Abstract summary: We show that known algorithms for this problem are essentially best possible, even for the special case of uniform mixtures.
The key technical ingredient is a new construction of spherical designs that may be of independent interest.
- Score: 43.63696593768504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning mixtures of linear classifiers under
Gaussian covariates. Given sample access to a mixture of $r$ distributions on
$\mathbb{R}^n$ of the form $(\mathbf{x},y_{\ell})$, $\ell\in [r]$, where
$\mathbf{x}\sim\mathcal{N}(0,\mathbf{I}_n)$ and
$y_\ell=\mathrm{sign}(\langle\mathbf{v}_\ell,\mathbf{x}\rangle)$ for an unknown
unit vector $\mathbf{v}_\ell$, the goal is to learn the underlying distribution
in total variation distance. Our main result is a Statistical Query (SQ) lower
bound suggesting that known algorithms for this problem are essentially best
possible, even for the special case of uniform mixtures. In particular, we show
that the complexity of any SQ algorithm for the problem is
$n^{\mathrm{poly}(1/\Delta) \log(r)}$, where $\Delta$ is a lower bound on the
pairwise $\ell_2$-separation between the $\mathbf{v}_\ell$'s. The key technical
ingredient underlying our result is a new construction of spherical designs
that may be of independent interest.
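To make the data model concrete, here is a minimal sampler for such a uniform mixture, assuming (for illustration only) that the hidden unit vectors $\mathbf{v}_\ell$ are random directions; the function and its names are not from the paper:

```python
import numpy as np

def sample_uniform_mixture(n, r, m, seed=None):
    """Draw m samples (x, y) from a uniform mixture of r linear classifiers.

    Each sample picks ell uniformly from [r], draws x ~ N(0, I_n), and
    labels it y = sign(<v_ell, x>) for hidden unit vectors v_1, ..., v_r.
    """
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((r, n))
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # unknown unit vectors
    ell = rng.integers(r, size=m)                   # uniform mixing weights
    X = rng.standard_normal((m, n))                 # Gaussian covariates
    y = np.sign(np.einsum("ij,ij->i", V[ell], X))   # y = sign(<v_ell, x>)
    return X, y
```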
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Statistical Query Lower Bounds for Learning Truncated Gaussians [43.452452030671694]
We show that the complexity of any SQ algorithm for this problem is $d^{\mathrm{poly}(1/\epsilon)}$, even when the class $\mathcal{C}$ is simple so that $\mathrm{poly}(d/\epsilon)$ samples information-theoretically suffice.
arXiv Detail & Related papers (2024-03-04T18:30:33Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that, in the worst case, exponential dependence on the number of heads $m$ is unavoidable.
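For orientation, here is a minimal numpy sketch of a standard multi-head self-attention forward pass; this is the textbook parameterization, and the exact function class analyzed in the paper may differ:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, heads, Wo):
    """One self-attention layer: X is (seq_len, d_model); heads is a list
    of (Wq, Wk, Wv) projections, each (d_model, d_head); Wo is the
    (num_heads * d_head, d_model) output projection."""
    outs = []
    for Wq, Wk, Wv in heads:
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # scaled dot-product
        outs.append(A @ V)
    return np.concatenate(outs, axis=-1) @ Wo
```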
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs [4.130591018565202]
The Learning With Errors ($\mathsf{LWE}$) problem asks to find $\mathbf{s}$ from an input of the form $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$.
We do not focus on solving $\mathsf{LWE}$ but on the task of sampling instances.
Our main result is a quantum polynomial-time algorithm that samples well-distributed $\mathsf{LWE}$ instances while provably not knowing the solution.
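For notation only, a classical sketch of sampling an $\mathsf{LWE}$ instance $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$; unlike the paper's quantum sampler, a classical sampler like this one necessarily knows the secret $\mathbf{s}$, and all parameters are illustrative:

```python
import numpy as np

def lwe_instance(n, m, q, sigma, seed=None):
    """Sample (A, b) with b = A s + e (mod q) for a rounded-Gaussian
    error e; this classical sampler knows the secret s by construction."""
    rng = np.random.default_rng(seed)
    A = rng.integers(q, size=(m, n))                         # uniform mod q
    s = rng.integers(q, size=n)                              # secret vector
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)  # small error
    b = (A @ s + e) % q
    return A, b
```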
arXiv Detail & Related papers (2024-01-08T10:55:41Z) - Testing Closeness of Multivariate Distributions via Ramsey Theory [40.926523210945064]
We investigate the statistical task of closeness (or equivalence) testing for multidimensional distributions.
Specifically, given sample access to two unknown distributions $\mathbf{p}, \mathbf{q}$ on $\mathbb{R}^d$, we want to distinguish between the case that $\mathbf{p}=\mathbf{q}$ versus $\|\mathbf{p}-\mathbf{q}\|_{A_k} > \epsilon$.
Our main result is the first closeness tester for this problem with sub-learning sample complexity in any fixed dimension.
arXiv Detail & Related papers (2023-11-22T04:34:09Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - SQ Lower Bounds for Learning Bounded Covariance GMMs [46.289382906761304]
We focus on learning mixtures of separated Gaussians on $\mathbb{R}^d$ of the form $P=\sum_{i=1}^{k} w_i \mathcal{N}(\boldsymbol{\mu}_i,\mathbf{\Sigma}_i)$.
We prove that any Statistical Query (SQ) algorithm for this problem requires complexity at least $d^{\Omega(1/\epsilon)}$.
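A minimal sketch of drawing samples from a mixture of this form, with placeholder component parameters:

```python
import numpy as np

def sample_gmm(w, mus, Sigmas, m, seed=None):
    """Draw m samples from P = sum_i w_i N(mu_i, Sigma_i)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(w), size=m, p=w)  # pick components by weight
    return np.array([rng.multivariate_normal(mus[i], Sigmas[i]) for i in idx])
```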
arXiv Detail & Related papers (2023-06-22T17:23:36Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study functions of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations $\sigma$.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w})=C\cdot\mathrm{OPT}+\epsilon$ with high probability.
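A minimal sketch of gradient descent on the empirical square loss $F(\mathbf{w})$; the activation, step size, and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

def fit_single_neuron(X, y, act, dact, lr=0.1, steps=500):
    """Gradient descent on F(w) = mean((act(X w) - y)^2)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = X @ w
        # gradient of the empirical square loss via the chain rule
        w -= lr * 2.0 * ((act(p) - y) * dact(p)) @ X / len(y)
    return w

# Example with ReLU: act(t) = max(t, 0), dact(t) = 1{t > 0}.
# w_hat = fit_single_neuron(X, y, lambda t: np.maximum(t, 0.0),
#                           lambda t: (t > 0).astype(float))
```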
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z)