Towards Understanding the Data Dependency of Mixup-style Training
- URL: http://arxiv.org/abs/2110.07647v1
- Date: Thu, 14 Oct 2021 18:13:57 GMT
- Title: Towards Understanding the Data Dependency of Mixup-style Training
- Authors: Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge
- Abstract summary: In the Mixup training paradigm, a model is trained using convex combinations of data points and their associated labels.
Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk.
For a large class of linear models and linearly separable datasets, Mixup training leads to learning the same classifier as standard training.
- Score: 14.803285140800542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the Mixup training paradigm, a model is trained using convex combinations
of data points and their associated labels. Despite seeing very few true data
points during training, models trained using Mixup seem to still minimize the
original empirical risk and exhibit better generalization and robustness on
various tasks when compared to standard training. In this paper, we investigate
how these benefits of Mixup training rely on properties of the data in the
context of classification. For minimizing the original empirical risk, we
compute a closed form for the Mixup-optimal classification, which allows us to
construct a simple dataset on which minimizing the Mixup loss can provably lead
to learning a classifier that does not minimize the empirical loss on the data.
On the other hand, we also give sufficient conditions for Mixup training to
minimize the original empirical risk. For generalization, we characterize
the margin of a Mixup classifier, and use this to understand why the decision
boundary of a Mixup classifier can adapt better to the full structure of the
training data when compared to standard training. In contrast, we also show
that, for a large class of linear models and linearly separable datasets, Mixup
training leads to learning the same classifier as standard training.
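For readers unfamiliar with the paradigm, the following is a minimal sketch of how Mixup constructs its training examples, assuming the standard Beta(alpha, alpha) mixing distribution and random batch pairing; the function name and defaults are illustrative, not code from this paper.

```python
# Minimal sketch of Mixup example construction (NumPy).
# The Beta(alpha, alpha) mixing distribution and random pairing within a batch
# follow the standard Mixup recipe and are assumptions, not details of this paper.
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=np.random.default_rng()):
    """Return convex combinations of a batch of inputs and one-hot labels."""
    lam = rng.beta(alpha, alpha)               # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))             # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]    # mixed inputs
    y_mix = lam * y + (1.0 - lam) * y[perm]    # mixed (soft) labels
    return x_mix, y_mix

# A model is then trained on (x_mix, y_mix) with the usual loss, so it rarely
# sees an unmixed training point, which is the setting the abstract analyzes.
```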
Related papers
- Scalable Data Ablation Approximations for Language Models through Modular Training and Merging [27.445079398772904]
We propose an efficient method for approximating data ablations which trains individual models on subsets of a training corpus.
We find that, given an arbitrary evaluation set, the perplexity score of a single model trained on a candidate set of data is strongly correlated with perplexity scores of parameter averages of models trained on distinct partitions of that data.
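As a rough illustration of the parameter-averaging step mentioned above, the hypothetical sketch below averages the weights of models trained on distinct partitions; the dictionary-of-arrays model representation and function name are assumptions for illustration only.

```python
# Hypothetical sketch: average the parameters of models trained on distinct
# data partitions, then evaluate the merged model as a cheap proxy for a model
# trained on the combined data. Representation and names are illustrative.
import numpy as np

def average_parameters(models):
    """Element-wise average of per-layer weights across identical architectures."""
    return {name: np.mean([m[name] for m in models], axis=0) for name in models[0]}

# Three toy "models" trained on different partitions of a corpus.
partition_models = [{"w": np.full((2, 2), float(i)), "b": np.zeros(2)} for i in range(3)]
merged = average_parameters(partition_models)  # {"w": all 1.0, "b": zeros}
```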
arXiv Detail & Related papers (2024-10-21T06:03:49Z)
- RC-Mixup: A Data Augmentation Strategy against Noisy Data for Regression Tasks [27.247270530020664]
We study the problem of robust data augmentation for regression tasks in the presence of noisy data.
C-Mixup is more selective in which samples to mix based on their label distances for better regression performance.
We propose RC-Mixup, which tightly integrates C-Mixup with multi-round robust training methods for a synergistic effect.
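For context, the sketch below shows the label-aware pairing that C-Mixup-style methods use in regression: mixing partners are drawn with probability that decays with label distance. The Gaussian kernel, bandwidth, and function names are assumptions for illustration, not details from the RC-Mixup paper.

```python
# Hedged sketch of label-distance-aware mixing for regression (C-Mixup style).
# Kernel choice and bandwidth are illustrative assumptions.
import numpy as np

def label_aware_mix(x, y, bandwidth=1.0, alpha=2.0, rng=np.random.default_rng()):
    """Mix one sample with a partner chosen according to label closeness."""
    n = len(x)
    i = rng.integers(n)                              # anchor example
    weights = np.exp(-((y - y[i]) ** 2) / (2 * bandwidth ** 2))
    weights[i] = 0.0                                 # never mix a point with itself
    probs = weights / weights.sum()                  # sampling distribution over partners
    j = rng.choice(n, p=probs)                       # label-aware partner
    lam = rng.beta(alpha, alpha)
    return lam * x[i] + (1 - lam) * x[j], lam * y[i] + (1 - lam) * y[j]
```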
arXiv Detail & Related papers (2024-05-28T08:02:42Z)
- Data Mixing Laws: Optimizing Data Mixtures by Predicting Language Modeling Performance [55.872926690722714]
We study how predictable model performance is as a function of the data mixture proportions.
We propose nested use of the scaling laws of training steps, model sizes, and our data mixing law.
Our method effectively optimizes the training mixture of a 1B model trained for 100B tokens on RedPajama.
arXiv Detail & Related papers (2024-03-25T17:14:00Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to construct in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Efficient Online Data Mixing For Language Model Pre-Training [101.45242332613944]
Existing data selection methods suffer from slow and computationally expensive processes.
Data mixing, on the other hand, reduces the complexity of data selection by grouping data points together.
We develop an efficient algorithm for Online Data Mixing (ODM) that combines elements from both data selection and data mixing.
arXiv Detail & Related papers (2023-12-05T00:42:35Z)
- Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks [75.42002070547267]
We propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification.
We introduce a novel instance-specific label smoothing approach, which linearly interpolates the model's output and the one-hot labels of the original samples to generate new soft labels for label mixing up.
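A minimal sketch of the instance-specific label smoothing step described above, assuming a fixed blending weight; the weight name and value are illustrative, not taken from the paper.

```python
# Hedged sketch: blend the model's predicted distribution with the one-hot
# label to produce a soft label for mixing. The weight `rho` is an assumption.
import numpy as np

def instance_label_smoothing(model_probs, one_hot, rho=0.1):
    """Linearly interpolate the model output with the one-hot label."""
    return (1.0 - rho) * one_hot + rho * model_probs

probs = np.array([0.7, 0.2, 0.1])                      # model's predicted distribution
target = np.array([1.0, 0.0, 0.0])                     # original one-hot label
soft_label = instance_label_smoothing(probs, target)   # soft target used in mixup
```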
arXiv Detail & Related papers (2023-05-22T23:43:23Z)
- The Benefits of Mixup for Feature Learning [117.93273337740442]
We first show that Mixup using different linear parameters for features and labels can still achieve similar performance to standard Mixup.
We consider a feature-noise data model and show that Mixup training can effectively learn the rare features from its mixture with the common features.
In contrast, standard training can only learn the common features but fails to learn the rare features, thus suffering from bad performance.
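To make "different linear parameters for features and labels" concrete, here is a hedged sketch in which the feature and label mixing coefficients are drawn independently; the independent Beta sampling is an assumption and may differ from the paper's exact construction.

```python
# Hedged sketch: mix inputs and labels with *different* coefficients.
# Independent Beta sampling of the two coefficients is an assumption.
import numpy as np

def asymmetric_mixup(x, y, alpha=1.0, rng=np.random.default_rng()):
    perm = rng.permutation(len(x))
    lam_x = rng.beta(alpha, alpha)    # mixing weight for features
    lam_y = rng.beta(alpha, alpha)    # separate mixing weight for labels
    return lam_x * x + (1 - lam_x) * x[perm], lam_y * y + (1 - lam_y) * y[perm]
```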
arXiv Detail & Related papers (2023-03-15T08:11:47Z)
- Over-training with Mixup May Hurt Generalization [32.64382185990981]
We report a previously unobserved phenomenon in Mixup training.
On a number of standard datasets, the performance of Mixup-trained models starts to decay after training for a large number of epochs.
We show theoretically that Mixup training may introduce undesired data-dependent label noise into the synthesized data.
arXiv Detail & Related papers (2023-03-02T18:37:34Z)
- Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup [14.37428912254029]
Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels.
We focus on classification problems in which each class may have multiple associated features (or views) that can be used to predict the class correctly.
Our main theoretical results demonstrate that, for a non-trivial class of data distributions with two features per class, training a 2-layer convolutional network using empirical risk minimization can lead to learning only one feature for almost all classes while training with a specific instantiation of Mixup succeeds in learning both features for every class.
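A minimal sketch of the Midpoint Mixup instantiation mentioned above: the mixing coefficient is fixed at 1/2, so every synthetic example is the midpoint of a random pair of training points and of their labels.

```python
# Minimal sketch of Midpoint Mixup: fix the mixing coefficient to 1/2.
import numpy as np

def midpoint_mixup(x, y, rng=np.random.default_rng()):
    perm = rng.permutation(len(x))          # random pairing of examples
    return 0.5 * (x + x[perm]), 0.5 * (y + y[perm])
```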
arXiv Detail & Related papers (2022-10-24T18:11:37Z)
- BiFair: Training Fair Models with Bilevel Optimization [8.2509884277533]
We develop a new training algorithm, named BiFair, which jointly minimizes a utility loss and a fairness loss of interest.
Our algorithm consistently performs better, i.e., it reaches better values of a given fairness metric at the same or higher accuracy.
arXiv Detail & Related papers (2021-06-03T22:36:17Z)