Characterizing Structural Regularities of Labeled Data in
Overparameterized Models
- URL: http://arxiv.org/abs/2002.03206v3
- Date: Tue, 15 Jun 2021 17:22:27 GMT
- Title: Characterizing Structural Regularities of Labeled Data in
Overparameterized Models
- Authors: Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer
- Abstract summary: Deep neural networks can generalize across instances that share common patterns or structures.
We analyze how individual instances are treated by a model via a consistency score.
We show examples of potential applications to the analysis of deep-learning systems.
- Score: 45.956614301397885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans are accustomed to environments that contain both regularities and
exceptions. For example, at most gas stations, one pays prior to pumping, but
the occasional rural station does not accept payment in advance. Likewise, deep
neural networks can generalize across instances that share common patterns or
structures, yet have the capacity to memorize rare or irregular forms. We
analyze how individual instances are treated by a model via a consistency
score. The score characterizes the expected accuracy for a held-out instance
given training sets of varying size sampled from the data distribution. We
obtain empirical estimates of this score for individual instances in multiple
data sets, and we show that the score identifies out-of-distribution and
mislabeled examples at one end of the continuum and strongly regular examples
at the other end. We identify computationally inexpensive proxies to the
consistency score using statistics collected during training. We show examples
of potential applications to the analysis of deep-learning systems.
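The consistency score described above is the expected accuracy on a held-out instance when models are trained on random subsets of the data. As a minimal sketch of that idea (not the authors' procedure, which uses deep networks; here a 1-nearest-neighbor classifier stands in, and the function name `consistency_scores` is hypothetical), one can repeatedly train on random subsets and average each instance's held-out accuracy:

```python
import numpy as np

def consistency_scores(X, y, subset_frac=0.5, n_runs=50, seed=0):
    """Estimate a per-instance consistency score: the empirical probability
    that a classifier trained on a random subset *excluding* the instance
    predicts its label correctly. A 1-NN classifier is used as a cheap
    stand-in for the deep networks in the paper."""
    rng = np.random.default_rng(seed)
    n = len(y)
    hits = np.zeros(n)
    counts = np.zeros(n)
    for _ in range(n_runs):
        train = rng.random(n) < subset_frac   # random training subset
        Xtr, ytr = X[train], y[train]
        if len(ytr) == 0:
            continue
        for i in np.where(~train)[0]:
            # predict the held-out instance's label with 1-NN on the subset
            j = np.argmin(((Xtr - X[i]) ** 2).sum(axis=1))
            hits[i] += ytr[j] == y[i]
            counts[i] += 1
    return hits / np.maximum(counts, 1)

# Demo: two well-separated clusters, with one deliberately mislabeled point.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
y[0] = 1  # flip one label: this instance should land at the low end
scores = consistency_scores(X, y)
```

Consistent with the abstract, the mislabeled point receives a score near zero while strongly regular points score near one; in practice the paper's cheaper proxies (statistics collected during a single training run) replace this expensive resampling loop.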
Related papers
- Semi-supervised Learning For Robust Speech Evaluation [30.593420641501968]
Speech evaluation measures a learner's oral proficiency using automatic models.
This paper proposes to address such challenges by exploiting semi-supervised pre-training and objective regularization.
An anchor model is trained using pseudo labels to predict the correctness of pronunciation.
arXiv Detail & Related papers (2024-09-23T02:11:24Z)
- Distributional bias compromises leave-one-out cross-validation [0.6656737591902598]
Cross-validation is a common method for estimating the predictive performance of machine learning models.
We show that leave-one-out cross-validation induces a negative correlation between the average label of each training fold and the label of its corresponding test instance.
We propose a generalizable rebalanced cross-validation approach that corrects for this distributional bias.
arXiv Detail & Related papers (2024-06-03T15:47:34Z)
- Data Valuation Without Training of a Model [8.89493507314525]
We propose a training-free data valuation score, called the complexity-gap score, to quantify the influence of individual instances on the generalization of neural networks.
The proposed score quantifies the irregularity of instances and measures how much each data instance contributes to the total movement of the network parameters during training.
arXiv Detail & Related papers (2023-01-03T02:19:20Z)
- A Statistical Model for Predicting Generalization in Few-Shot Classification [6.158812834002346]
We introduce a Gaussian model of the feature distribution to predict the generalization error.
We show that our approach outperforms alternatives such as the leave-one-out cross-validation strategy.
arXiv Detail & Related papers (2022-12-13T10:21:15Z)
- Approximate sampling and estimation of partition functions using neural networks [0.0]
We show how variational autoencoders (VAEs) can be applied to this task.
We invert the usual logic: the VAE is trained to fit a simple, tractable distribution under the assumption of a complex, intractable latent distribution, specified up to normalization.
This procedure constructs approximations without the use of training data or Markov chain Monte Carlo sampling.
arXiv Detail & Related papers (2022-09-21T15:16:45Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how combining recent results on equivariant representation learning over structured spaces with simple classical results on causal inference yields an effective practical solution.
We demonstrate how our model handles more than one nuisance variable under some assumptions and enables analysis of pooled scientific datasets in scenarios that would otherwise require discarding a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- An Empirical Comparison of Instance Attribution Methods for NLP [62.63504976810927]
We evaluate the degree to which different instance attribution methods agree on the importance of training samples.
We find that simple retrieval methods yield training instances that differ from those identified via gradient-based methods.
arXiv Detail & Related papers (2021-04-09T01:03:17Z)
- Estimating informativeness of samples with Smooth Unique Information [108.25192785062367]
We measure how much a sample informs the final weights and how much it informs the function computed by the weights.
We give efficient approximations of these quantities using a linearized network.
We apply these measures to several problems, such as dataset summarization.
arXiv Detail & Related papers (2021-01-17T10:29:29Z)
- One for More: Selecting Generalizable Samples for Generalizable ReID Model [92.40951770273972]
This paper proposes a one-for-more training objective that takes the generalization ability of selected samples as a loss function.
Our proposed one-for-more sampler can be seamlessly integrated into the ReID training framework.
arXiv Detail & Related papers (2020-12-10T06:37:09Z)
- Robust and On-the-fly Dataset Denoising for Image Classification [72.10311040730815]
On-the-fly Data Denoising (ODD) is robust to mislabeled examples while introducing almost zero computational overhead compared to standard training.
ODD achieves state-of-the-art results on a wide range of datasets, including real-world ones such as WebVision and Clothing1M.
arXiv Detail & Related papers (2020-03-24T03:59:26Z)
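The distributional-bias entry above describes a concrete mechanism: leaving one instance out makes each training fold's mean label an affine, decreasing function of the held-out label. A minimal numerical check of that claim (an illustration, not the paper's code):

```python
import numpy as np

# Synthetic labels; any distribution exhibits the effect.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
n = len(y)

# Mean label of each leave-one-out training fold:
# (sum - y_i) / (n - 1) decreases linearly in y_i.
loo_train_means = (y.sum() - y) / (n - 1)

# Correlation between training-fold mean and held-out label.
r = np.corrcoef(loo_train_means, y)[0, 1]
```

Because each fold mean equals a constant minus `y_i / (n - 1)`, the correlation is exactly -1 here; in real settings the model only partially tracks the fold mean, so the induced bias is negative but smaller in magnitude.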
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.