Mixup Augmentation with Multiple Interpolations
- URL: http://arxiv.org/abs/2406.01417v1
- Date: Mon, 3 Jun 2024 15:16:09 GMT
- Title: Mixup Augmentation with Multiple Interpolations
- Authors: Lifeng Shen, Jincheng Yu, Hansi Yang, James T. Kwok
- Abstract summary: We propose a simple yet effective extension called multi-mix, which generates multiple interpolations from a sample pair.
With an ordered sequence of generated samples, multi-mix can better guide the training process than standard mixup.
- Score: 26.46413903248954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mixup and its variants form a popular class of data augmentation techniques. Using a random sample pair, mixup generates a new sample by linear interpolation of the inputs and labels. However, generating only a single interpolation may limit its augmentation ability. In this paper, we propose a simple yet effective extension called multi-mix, which generates multiple interpolations from a sample pair. With an ordered sequence of generated samples, multi-mix can better guide the training process than standard mixup. Moreover, theoretically, this can also reduce the stochastic gradient variance. Extensive experiments on a number of synthetic and large-scale data sets demonstrate that multi-mix outperforms various mixup variants and non-mixup-based baselines in terms of generalization, robustness, and calibration.
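The core operation is easy to illustrate. The sketch below is a minimal, hypothetical rendering of the idea stated in the abstract: draw several interpolation coefficients for one random pairing of the batch, sort them into an ordered sequence, build one mixed sample per coefficient, and average the losses. The function name, the Beta(alpha, alpha) prior, and the value of K are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of the multi-mix idea from the abstract: K interpolations
# per sample pair instead of one, losses averaged over the ordered sequence.
# Names, the Beta(alpha, alpha) prior, and K are assumptions for illustration.
import torch
import torch.nn.functional as F

def multi_mix_loss(model, x, y, num_classes, k=4, alpha=1.0):
    """x: (B, ...) inputs, y: (B,) integer class labels."""
    perm = torch.randperm(x.size(0))                 # random pairing of samples
    x2, y2 = x[perm], y[perm]
    y1h = F.one_hot(y, num_classes).float()
    y2h = F.one_hot(y2, num_classes).float()

    # K coefficients, sorted so the mixed samples form an ordered sequence.
    lam = torch.distributions.Beta(alpha, alpha).sample((k,)).sort().values

    loss = x.new_zeros(())
    for l in lam:
        x_mix = l * x + (1 - l) * x2                 # interpolate inputs
        y_mix = l * y1h + (1 - l) * y2h              # interpolate labels
        log_p = F.log_softmax(model(x_mix), dim=-1)
        loss = loss + (-(y_mix * log_p).sum(dim=-1)).mean()
    return loss / k                                  # average over the K interpolations
```

With k = 1 this reduces to standard mixup; sorting the coefficients only makes the ordered-sequence structure mentioned in the abstract explicit.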
Related papers
- PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis [71.8946280170493]
This paper introduces PowMix, a versatile embedding space regularizer that builds upon the strengths of unimodal mixing-based regularization approaches.
PowMix is integrated before the fusion stage of multimodal architectures and facilitates intra-modal mixing, such as mixing text with text, to act as a regularizer.
arXiv Detail & Related papers (2023-12-19T17:01:58Z) - Adversarial AutoMixup [50.1874436169571]
We propose AdAutomixup, an adversarial automatic mixup augmentation approach.
It generates challenging samples to train a robust classifier for image classification.
Our approach outperforms the state of the art in various classification scenarios.
arXiv Detail & Related papers (2023-12-19T08:55:00Z) - Improving Gradient-guided Nested Sampling for Posterior Inference [47.08481529384556]
We present a performant, general-purpose gradient-guided nested sampling algorithm, ${\tt GGNS}$.
We show the potential of combining nested sampling with generative flow networks to obtain large amounts of high-quality samples from the posterior distribution.
arXiv Detail & Related papers (2023-12-06T21:09:18Z) - Infinite Class Mixup [26.48101652432502]
Mixup is a strategy for training deep networks where additional samples are augmented by interpolating inputs and labels of training pairs.
This paper instead mixes the classifiers directly, rather than mixing the labels of each mixed pair.
We show that Infinite Class Mixup outperforms standard Mixup and variants such as RegMixup and Remix on balanced, long-tailed, and data-constrained benchmarks.
arXiv Detail & Related papers (2023-05-17T15:27:35Z) - MixupE: Understanding and Improving Mixup from Directional Derivative Perspective [86.06981860668424]
We propose an improved version of Mixup, theoretically justified to deliver better generalization performance than vanilla Mixup.
Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures.
arXiv Detail & Related papers (2022-12-27T07:03:52Z) - C-Mixup: Improving Generalization in Regression [71.10418219781575]
The Mixup algorithm improves generalization by linearly interpolating a pair of examples and their corresponding labels.
We propose C-Mixup, which adjusts the sampling probability based on the similarity of the labels (a minimal sketch of this pairing idea follows the list below).
C-Mixup achieves 6.56%, 4.76%, 5.82% improvements in in-distribution generalization, task generalization, and out-of-distribution robustness, respectively.
arXiv Detail & Related papers (2022-10-11T20:39:38Z) - Global Mixup: Eliminating Ambiguity with Clustering [18.876583942942144]
We propose a novel augmentation method based on global clustering relationships, named Global Mixup.
Experiments show that Global Mixup significantly outperforms previous state-of-the-art baselines.
arXiv Detail & Related papers (2022-06-06T16:42:22Z) - Multi-Sample $\zeta$-mixup: Richer, More Realistic Synthetic Samples from a $p$-Series Interpolant [16.65329510916639]
We propose $\zeta$-mixup, a generalization of mixup with provably and demonstrably desirable properties.
We show that our implementation of $\zeta$-mixup is faster than mixup, and extensive evaluation on controlled synthetic and 24 real-world natural and medical image classification datasets shows that $\zeta$-mixup outperforms mixup and traditional data augmentation techniques.
arXiv Detail & Related papers (2022-04-07T09:41:09Z) - Harnessing Hard Mixed Samples with Decoupled Regularizer [69.98746081734441]
Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data.
In this paper, we propose an efficient mixup objective function with a decoupled regularizer named Decoupled Mixup (DM).
DM can adaptively utilize hard mixed samples to mine discriminative features without losing the original smoothness of mixup.
arXiv Detail & Related papers (2022-03-21T07:12:18Z) - Learning Mixtures of Permutations: Groups of Pairwise Comparisons and Combinatorial Method of Moments [8.691957530860675]
We study the widely used Mallows mixture model.
In the high-dimensional setting, we propose an optimal-time algorithm that learns a Mallows mixture of permutations on $n$ elements.
arXiv Detail & Related papers (2020-09-14T23:11:46Z)
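As a companion to the C-Mixup entry above, the following is a minimal sketch, under stated assumptions, of sampling mixing partners by label similarity for regression: each example's partner is drawn with probability given by a Gaussian kernel on the label distance, and the pair is then interpolated as in standard mixup. The kernel bandwidth, the Beta prior, and the function name are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of label-similarity pair sampling for regression mixup
# (the idea summarized in the C-Mixup entry above). Bandwidth, Beta prior, and
# names are assumptions for illustration, not the paper's implementation.
import numpy as np

def label_similarity_mixup(x, y, alpha=2.0, bandwidth=1.0, rng=None):
    """x: (N, D) inputs, y: (N,) continuous targets."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(y)

    # Pairing probabilities: closer labels -> higher chance of being mixed together.
    dist2 = (y[:, None] - y[None, :]) ** 2
    probs = np.exp(-dist2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(probs, 0.0)                  # never pair an example with itself
    probs /= probs.sum(axis=1, keepdims=True)

    partners = np.array([rng.choice(n, p=probs[i]) for i in range(n)])
    lam = rng.beta(alpha, alpha, size=(n, 1))     # one coefficient per pair

    x_mix = lam * x + (1.0 - lam) * x[partners]                # interpolate inputs
    y_mix = lam[:, 0] * y + (1.0 - lam[:, 0]) * y[partners]    # interpolate targets
    return x_mix, y_mix
```

A uniform choice of partners recovers standard mixup for regression; the kernel only biases pairs toward label-consistent mixes.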