Self-training Converts Weak Learners to Strong Learners in Mixture
Models
- URL: http://arxiv.org/abs/2106.13805v1
- Date: Fri, 25 Jun 2021 17:59:16 GMT
- Title: Self-training Converts Weak Learners to Strong Learners in Mixture
Models
- Authors: Spencer Frei and Difan Zou and Zixiang Chen and Quanquan Gu
- Abstract summary: We show that there is a universal constant $C_{\mathrm{err}}>0$ such that any pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ achieving classification error at most $C_{\mathrm{err}}$ can be boosted to the Bayes-optimal classifier by self-training with unlabeled examples.
We additionally show that by running gradient descent on the logistic loss one can obtain a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ with classification error $C_{\mathrm{err}}$ using only $O(d)$ labeled examples.
- Score: 86.7137362125503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a binary classification problem when the data comes from a
mixture of two isotropic distributions satisfying concentration and
anti-concentration properties enjoyed by log-concave distributions among
others. We show that there exists a universal constant $C_{\mathrm{err}}>0$
such that if a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ can achieve
classification error at most $C_{\mathrm{err}}$, then for any $\varepsilon>0$,
an iterative self-training algorithm initialized at $\boldsymbol{\beta}_0 :=
\boldsymbol{\beta}_{\mathrm{pl}}$ using pseudolabels $\hat y =
\mathrm{sgn}(\langle \boldsymbol{\beta}_t, \mathbf{x}\rangle)$ and using at
most $\tilde O(d/\varepsilon^2)$ unlabeled examples suffices to learn the
Bayes-optimal classifier up to $\varepsilon$ error, where $d$ is the ambient
dimension. That is, self-training converts weak learners to strong learners
using only unlabeled examples. We additionally show that by running gradient
descent on the logistic loss one can obtain a pseudolabeler
$\boldsymbol{\beta}_{\mathrm{pl}}$ with classification error $C_{\mathrm{err}}$
using only $O(d)$ labeled examples (i.e., independent of $\varepsilon$).
Together our results imply that mixture models can be learned to within
$\varepsilon$ of the Bayes-optimal accuracy using at most $O(d)$ labeled
examples and $\tilde O(d/\varepsilon^2)$ unlabeled examples by way of a
semi-supervised self-training algorithm.
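The abstract's two-step pipeline — fit a weak pseudolabeler by gradient descent on the logistic loss over a few labeled examples, then iterate self-training on unlabeled data with pseudolabels $\hat y = \mathrm{sgn}(\langle \boldsymbol{\beta}_t, \mathbf{x}\rangle)$ — can be sketched on synthetic data. This is a minimal illustrative toy, not the paper's exact algorithm: the Gaussian-mixture data, step sizes, sample sizes, and iteration counts below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic mixture of two isotropic Gaussians centered at +/- mu: one family
# of distributions satisfying the concentration/anti-concentration assumptions.
d = 20
mu = rng.standard_normal(d)
mu /= np.linalg.norm(mu)  # the Bayes-optimal direction is mu itself

def sample(n):
    """Draw n examples: label y is +/-1, x = y * mu + isotropic Gaussian noise."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu + rng.standard_normal((n, d))
    return x, y

def logistic_grad(beta, x, y):
    """Gradient of the average logistic loss log(1 + exp(-y <beta, x>))."""
    margins = y * (x @ beta)
    return -(x * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)

# Step 1: weak pseudolabeler from O(d) *labeled* examples via gradient descent.
x_lab, y_lab = sample(5 * d)
beta = np.zeros(d)
for _ in range(500):
    beta -= 0.5 * logistic_grad(beta, x_lab, y_lab)

# Step 2: iterative self-training on *unlabeled* examples only: refresh the
# pseudolabels hat-y = sgn(<beta_t, x>), then take a gradient step against them.
x_unl, _ = sample(20 * d)
for _ in range(100):
    y_pseudo = np.sign(x_unl @ beta)
    beta -= 0.5 * logistic_grad(beta, x_unl, y_pseudo)

# Evaluate on fresh labeled data; the Bayes error for this mixture is Phi(-1).
x_te, y_te = sample(10_000)
err = np.mean(np.sign(x_te @ beta) != y_te)
```

Note that step 2 never touches the true labels: the gradient is taken against the classifier's own sign predictions, which is what makes the procedure semi-supervised.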
Related papers
- Iterative thresholding for non-linear learning in the strong $\varepsilon$-contamination model [3.309767076331365]
We derive approximation bounds for learning single neuron models using thresholded gradient descent.
We also study the linear regression problem, where $\sigma(\mathbf{x}) = \mathbf{x}$.
arXiv Detail & Related papers (2024-09-05T16:59:56Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ achieving $F(\mathbf{w}) = C\,\mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Coresets for Decision Trees of Signals [19.537354146654845]
We provide the first algorithm that outputs such a $(k,\varepsilon)$-coreset for every such matrix $D$.
This is by forging a link between decision trees from machine learning -- to partition trees in computational geometry.
arXiv Detail & Related papers (2021-10-07T05:49:55Z) - The Price of Tolerance in Distribution Testing [31.10049510641336]
We show the sample complexity to be $\tilde\Theta\left(\frac{\sqrt{n}}{\varepsilon_2^2} + \frac{n}{\log n} \cdot \max\left\{\frac{\varepsilon_1}{\varepsilon_2^2}, \frac{\varepsilon_1^2}{\varepsilon_2^4}\right\}\right)$, providing a smooth tradeoff between the two previously known cases.
We also provide a similar characterization for the problem of tolerant equivalence testing, where both $p$ and $q$ are unknown.
arXiv Detail & Related papers (2021-06-25T03:59:42Z) - Provable Robustness of Adversarial Training for Learning Halfspaces with
Noise [95.84614821570283]
We analyze the properties of adversarial training for learning adversarially robust halfspaces in the presence of label noise.
To the best of our knowledge, this is the first work to show that adversarial training provably yields robust classifiers in the presence of noise.
arXiv Detail & Related papers (2021-04-19T16:35:38Z) - Optimal Combination of Linear and Spectral Estimators for Generalized
Linear Models [59.015960528781115]
We show how to optimally combine $\hat{\boldsymbol{x}}^{\mathrm{L}}$ and $\hat{\boldsymbol{x}}^{\mathrm{s}}$.
In order to establish the limiting distribution of $(\boldsymbol{x}, \hat{\boldsymbol{x}}^{\mathrm{L}}, \hat{\boldsymbol{x}}^{\mathrm{s}})$, we design and analyze an Approximate Message Passing (AMP) algorithm.
arXiv Detail & Related papers (2020-08-07T18:20:05Z) - Robust Gaussian Covariance Estimation in Nearly-Matrix Multiplication
Time [14.990725929840892]
We show an algorithm that runs in time $\widetilde{O}(T(N, d) \log \kappa / \mathrm{poly}(\varepsilon))$, where $T(N, d)$ is the time it takes to multiply a $d \times N$ matrix by its transpose.
Our runtime matches that of the fastest algorithm for covariance estimation without outliers, up to poly-logarithmic factors.
arXiv Detail & Related papers (2020-06-23T20:21:27Z) - Efficient Statistics for Sparse Graphical Models from Truncated Samples [19.205541380535397]
We focus on two fundamental and classical problems: (i) inference of sparse Gaussian graphical models and (ii) support recovery of sparse linear models.
For sparse linear regression, suppose samples $(\mathbf{x}, y)$ are generated where $y = \mathbf{x}^\top \Omega^* + \mathcal{N}(0,1)$ and $(\mathbf{x}, y)$ is seen only if $y$ belongs to a truncation set $S \subseteq \mathbb{R}^d$.
arXiv Detail & Related papers (2020-06-17T09:21:00Z) - Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2}) + \epsilon$.
arXiv Detail & Related papers (2020-05-29T07:20:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.