Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
- URL: http://arxiv.org/abs/2010.03622v5
- Date: Wed, 20 Apr 2022 22:07:18 GMT
- Title: Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
- Authors: Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma
- Abstract summary: Self-training algorithms have been very successful for learning with unlabeled data using neural networks.
This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning.
- Score: 48.4779912667317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-training algorithms, which train a model to fit pseudolabels predicted
by another previously-learned model, have been very successful for learning
with unlabeled data using neural networks. However, the current theoretical
understanding of self-training only applies to linear models. This work
provides a unified theoretical analysis of self-training with deep networks for
semi-supervised learning, unsupervised domain adaptation, and unsupervised
learning. At the core of our analysis is a simple but realistic "expansion"
assumption, which states that a low probability subset of the data must expand
to a neighborhood with large probability relative to the subset. We also assume
that neighborhoods of examples in different classes have minimal overlap. We
prove that under these assumptions, the minimizers of population objectives
based on self-training and input-consistency regularization will achieve high
accuracy with respect to ground-truth labels. By using off-the-shelf
generalization bounds, we immediately convert this result to sample complexity
guarantees for neural nets that are polynomial in the margin and Lipschitzness.
Our results help explain the empirical successes of recently proposed
self-training algorithms which use input consistency regularization.
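At the core of the analyzed population objective is pseudolabel fitting combined with input-consistency regularization. Below is a minimal sketch of such a loss in PyTorch; the student `model`, frozen `teacher`, and `augment` perturbation are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def self_training_loss(model, teacher, x_unlabeled, augment, lam=1.0):
    """Pseudolabel fitting plus input-consistency regularization (sketch)."""
    with torch.no_grad():
        # Pseudolabels come from a previously-learned model.
        pseudolabels = teacher(x_unlabeled).argmax(dim=1)

    logits = model(x_unlabeled)
    logits_aug = model(augment(x_unlabeled))

    # Term 1: fit the pseudolabels predicted by the teacher.
    pseudo_loss = F.cross_entropy(logits, pseudolabels)

    # Term 2: input consistency -- predictions should agree across
    # perturbations of the same input.
    consistency = F.kl_div(
        F.log_softmax(logits_aug, dim=1),
        F.softmax(logits, dim=1).detach(),
        reduction="batchmean",
    )
    return pseudo_loss + lam * consistency
```

Under the expansion assumption, minimizing an objective of this shape propagates correct labels from confident regions into their neighborhoods, which is what drives the accuracy guarantee.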
Related papers
- Neural networks trained with SGD learn distributions of increasing
complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
Higher-order statistics are exploited only later in training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z) - Robust Training under Label Noise by Over-parameterization [41.03008228953627]
We propose a principled approach for robust training of over-parameterized deep networks in classification tasks where a proportion of training labels are corrupted.
The main idea is simple: label noise is sparse and incoherent with the network learned from clean data, so we model the noise explicitly and learn to separate it from the data.
Remarkably, this simple method achieves state-of-the-art test accuracy against label noise on a variety of real datasets.
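A rough illustration of this idea (not the authors' implementation): over-parameterize the objective with a learnable per-sample correction term and use a sparsity penalty, here an assumed L1 term, to separate the noise from the signal the network fits.

```python
import torch
import torch.nn.functional as F

n_samples, n_features, n_classes = 1000, 32, 10
model = torch.nn.Linear(n_features, n_classes)     # stand-in network
# One learnable correction vector per training example; sparsity
# encourages it to absorb only the corrupted labels.
noise = torch.zeros(n_samples, n_classes, requires_grad=True)
opt = torch.optim.SGD([*model.parameters(), noise], lr=0.1)

def train_step(x, y_noisy, idx, l1=1e-3):
    # `idx` holds the integer indices of the batch examples.
    logits = model(x) + noise[idx]                 # network + noise model
    loss = F.cross_entropy(logits, y_noisy) + l1 * noise.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

After training, examples with large entries in `noise` are the suspected corrupted labels, and `model` alone is used at test time.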
arXiv Detail & Related papers (2022-02-28T18:50:10Z) - Benign Overfitting without Linearity: Neural Network Classifiers Trained
by Gradient Descent for Noisy Linear Data [44.431266188350655]
We consider the generalization error of two-layer neural networks trained to interpolation by gradient descent.
We show that neural networks exhibit benign overfitting: they can be driven to zero training error, perfectly fitting any noisy training labels, and simultaneously achieve minimax optimal test error.
In contrast to previous work on benign overfitting, which requires linear or kernel-based predictors, our analysis holds in a setting where both the model and learning dynamics are fundamentally nonlinear.
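The phenomenon itself is easy to reproduce in the simpler linear regime that prior work covered. Here is a minimal numpy sketch using minimum-norm interpolation under a spiked covariance as a stand-in for the trained network; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 2000, 5          # n samples, d >> n features, k signal dims

# Spiked design: a few high-variance signal directions plus a long
# low-variance tail that gives the interpolator room to absorb noise.
scales = np.concatenate([np.ones(k), 0.05 * np.ones(d - k)])
X = rng.normal(size=(n, d)) * scales
w_star = np.zeros(d)
w_star[:k] = 1.0 / np.sqrt(k)
y = X @ w_star + 0.5 * rng.normal(size=n)       # noisy linear labels

# Minimum-norm interpolator: zero training error on the noisy labels...
w_hat = X.T @ np.linalg.solve(X @ X.T, y)
print("max train residual:", np.abs(X @ w_hat - y).max())   # ~1e-13

# ...yet small excess test risk (coordinates are independent, so the
# test MSE has the closed form below).
excess = np.sum((scales * (w_hat - w_star)) ** 2)
signal = np.sum((scales * w_star) ** 2)
print("excess test MSE:", excess, "vs. signal variance:", signal)
```

The interpolator fits every noisy label exactly, yet its excess test risk stays far below the signal variance, which is the signature of benign overfitting.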
arXiv Detail & Related papers (2022-02-11T23:04:00Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features (a rationale) that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - How does unlabeled data improve generalization in self-training? A
one-hidden-layer theoretical analysis [93.37576644429578]
This work establishes the first theoretical analysis of the widely used iterative self-training paradigm.
We prove the benefits of unlabeled data in both training convergence and generalization ability.
Experiments on both shallow and deep neural networks are provided to corroborate our theoretical insights on self-training.
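A skeletal version of the iterative paradigm under analysis: pseudolabel the unlabeled pool with the current model, then retrain on labeled plus pseudolabeled data. Logistic regression stands in for the networks studied; the setup is illustrative, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iterative_self_training(x_lab, y_lab, x_unlab, rounds=5):
    """Sketch of iterative self-training: each round retrains on
    pseudolabels produced by the previous round's model."""
    model = LogisticRegression(max_iter=1000).fit(x_lab, y_lab)
    for _ in range(rounds):
        pseudo = model.predict(x_unlab)            # pseudolabel step
        x_all = np.vstack([x_lab, x_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        model = LogisticRegression(max_iter=1000).fit(x_all, y_all)
    return model
```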
arXiv Detail & Related papers (2022-01-21T02:16:52Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
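One plausible rendering of that correction, purely illustrative rather than the authors' implementation: on inputs flagged as unjustifiably overconfident, penalize the divergence between the model's predictive distribution and the prior distribution of the labels.

```python
import torch
import torch.nn.functional as F

def prior_entropy_penalty(model, x_flagged, prior):
    """Pull predictions on suspect inputs toward the label prior.

    `x_flagged` stands for inputs from regions deemed unjustifiably
    overconfident (how they are found is a separate, assumed step);
    `prior` is the marginal label distribution, shape (n_classes,).
    """
    log_probs = F.log_softmax(model(x_flagged), dim=1)
    # KL(prior || model) is minimized when the predictive distribution
    # matches the prior, i.e. when its entropy rises to the prior's.
    return F.kl_div(log_probs, prior.expand_as(log_probs),
                    reduction="batchmean")
```

Added to the usual supervised loss, this term raises entropy only where the flagging step fires, leaving in-distribution confidence intact.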
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - The Gaussian equivalence of generative models for learning with shallow
neural networks [30.47878306277163]
We study the performance of neural networks trained on data drawn from pre-trained generative models, and establish conditions under which this is equivalent to training on Gaussian data with matched first and second moments.
We provide three strands of rigorous, analytical, and numerical evidence corroborating this equivalence.
These results open a viable path to the theoretical study of machine learning models with realistic data.
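The equivalence can be probed numerically: train the same classifier on samples from a generative model and on a Gaussian clone with matched first and second moments, then compare test accuracy. A minimal sketch with an illustrative stand-in generator and a linear teacher:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a pre-trained generative model: a fixed random feature map.
W = rng.normal(size=(20, 100)) / np.sqrt(20)
Z = rng.normal(size=(4000, 20))                 # latent inputs
X = np.tanh(Z @ W)                              # "generated" data
teacher = rng.normal(size=100)
y = np.sign(X @ teacher)                        # labels from a linear teacher

# Gaussian clone with the same mean and covariance.
mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
X_g = rng.multivariate_normal(mu, cov, size=4000)
y_g = np.sign(X_g @ teacher)

for name, (A, b) in {"generative": (X, y), "gaussian clone": (X_g, y_g)}.items():
    clf = LogisticRegression(max_iter=2000).fit(A[:3000], b[:3000])
    print(name, "test accuracy:", round(clf.score(A[3000:], b[3000:]), 3))
```

Similar scores on the two datasets are the signature of the Gaussian equivalence that the paper makes rigorous.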
arXiv Detail & Related papers (2020-06-25T21:20:09Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
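The flavor of this result can be sampled directly in an overparameterized linear model, where the interpolating solutions form an affine subspace. The sketch below draws random members by perturbing the minimum-norm solution in the null space; the Gaussian sampling scheme is an assumption for illustration, not the paper's ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 200                                  # overparameterized: d > n
w_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ w_star                                  # teacher labels

# Every interpolator is the min-norm solution plus a null-space shift.
w_min = X.T @ np.linalg.solve(X @ X.T, y)
_, _, Vt = np.linalg.svd(X, full_matrices=True)
null_basis = Vt[n:].T                           # shape (d, d - n)

errors = []
for _ in range(500):
    w = w_min + null_basis @ rng.normal(scale=0.05, size=d - n)
    # Isotropic test inputs: test MSE has the closed form ||w - w*||^2.
    errors.append(np.sum((w - w_star) ** 2))
errors = np.array(errors)

print("typical test MSE:", errors.mean().round(4),
      "relative spread:", (errors.std() / errors.mean()).round(4))
```

The test errors of these interpolators concentrate tightly around a typical value rather than spreading toward the worst case.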
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Statistical and Algorithmic Insights for Semi-supervised Learning with
Self-training [30.866440916522826]
Self-training is a classical approach in semi-supervised learning.
We show that self-training iterations gracefully improve the model accuracy even when they get stuck in sub-optimal fixed points.
We then establish a connection between self-training based semi-supervision and the more general problem of learning with heterogeneous data.
arXiv Detail & Related papers (2020-06-19T08:09:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.