OpenMixup: A Comprehensive Mixup Benchmark for Visual Classification
- URL: http://arxiv.org/abs/2209.04851v2
- Date: Sun, 1 Oct 2023 21:31:09 GMT
- Title: OpenMixup: A Comprehensive Mixup Benchmark for Visual Classification
- Authors: Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Weiyang Jin,
Stan Z. Li
- Abstract summary: We present OpenMixup, the first comprehensive mixup benchmarking study for supervised visual classification.
OpenMixup offers a unified mixup-based model design and training framework, encompassing a wide collection of data mixing algorithms, a diverse range of widely-used backbones and modules, and a set of model analysis toolkits.
- Score: 58.680100108871436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data mixing, or mixup, is a data-dependent augmentation technique that has
greatly enhanced the generalizability of modern deep neural networks. However,
a full grasp of mixup methodology necessitates a top-down hierarchical
understanding from systematic impartial evaluations and empirical analysis,
both of which are currently lacking within the community. In this paper, we
present OpenMixup, the first comprehensive mixup benchmarking study for
supervised visual classification. OpenMixup offers a unified mixup-based model
design and training framework, encompassing a wide collection of data mixing
algorithms, a diverse range of widely-used backbones and modules, and a set of
model analysis toolkits. To ensure fair and complete comparisons, large-scale
standard evaluations of various mixup baselines are conducted across 12
diversified image datasets, with meticulous control of confounders and tuning
powered by our modular and extensible codebase. Interesting observations and
insights are derived through detailed empirical analysis of how mixup policies,
network architectures, and dataset properties affect the mixup visual
classification performance. We hope that OpenMixup can bolster the
reproducibility of previously gained insights and facilitate a better
understanding of mixup properties, thereby giving the community a kick-start
for the development and evaluation of new mixup methods. The source code and
user documents are available at \url{https://github.com/Westlake-AI/openmixup}.
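The core mixup operation benchmarked above can be illustrated with a minimal sketch (function name and toy data are illustrative, not taken from the OpenMixup codebase): two samples and their one-hot labels are blended with the same Beta-sampled ratio.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """Blend two samples and their one-hot labels with a Beta-sampled ratio."""
    lam = rng.beta(alpha, alpha)            # mixing ratio in (0, 1)
    x_mixed = lam * x1 + (1.0 - lam) * x2   # convex combination of inputs
    y_mixed = lam * y1 + (1.0 - lam) * y2   # matching combination of labels
    return x_mixed, y_mixed, lam

# Toy example: two 2x2 "images" with one-hot labels for a 3-class problem.
xa, ya = np.ones((2, 2)), np.array([1.0, 0.0, 0.0])
xb, yb = np.zeros((2, 2)), np.array([0.0, 1.0, 0.0])
xm, ym, lam = mixup(xa, ya, xb, yb)
```

The mixed label remains a valid probability distribution, which is what lets standard cross-entropy training proceed unchanged.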
Related papers
- PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis [71.8946280170493]
This paper introduces PowMix, a versatile embedding space regularizer that builds upon the strengths of unimodal mixing-based regularization approaches.
PowMix is integrated before the fusion stage of multimodal architectures and facilitates intra-modal mixing, such as mixing text with text, to act as a regularizer.
arXiv Detail & Related papers (2023-12-19T17:01:58Z)
- The Benefits of Mixup for Feature Learning [117.93273337740442]
We first show that Mixup using different linear parameters for features and labels can still achieve similar performance to standard Mixup.
We consider a feature-noise data model and show that Mixup training can effectively learn the rare features from its mixture with the common features.
In contrast, standard training can only learn the common features but fails to learn the rare features, thus suffering from bad performance.
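The decoupling described above can be sketched as follows: features and labels are mixed with independently drawn coefficients (an illustrative construction, not necessarily the paper's exact formulation).

```python
import numpy as np

def decoupled_mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """Mix features and labels with *different* Beta-sampled coefficients."""
    lam_x = rng.beta(alpha, alpha)  # coefficient for the features
    lam_y = rng.beta(alpha, alpha)  # independent coefficient for the labels
    x_mixed = lam_x * x1 + (1.0 - lam_x) * x2
    y_mixed = lam_y * y1 + (1.0 - lam_y) * y2
    return x_mixed, y_mixed

xm, ym = decoupled_mixup(np.ones(4), np.array([1.0, 0.0]),
                         np.zeros(4), np.array([0.0, 1.0]))
```

The claim summarized above is that such decoupled coefficients still recover performance comparable to standard (shared-coefficient) Mixup.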
arXiv Detail & Related papers (2023-03-15T08:11:47Z)
- MixupE: Understanding and Improving Mixup from Directional Derivative Perspective [86.06981860668424]
We propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup.
Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures.
arXiv Detail & Related papers (2022-12-27T07:03:52Z)
- C-Mixup: Improving Generalization in Regression [71.10418219781575]
The Mixup algorithm improves generalization by linearly interpolating pairs of examples and their corresponding labels.
We propose C-Mixup, which adjusts the sampling probability based on the similarity of the labels.
C-Mixup achieves 6.56%, 4.76%, 5.82% improvements in in-distribution generalization, task generalization, and out-of-distribution robustness, respectively.
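The label-similarity sampling idea can be sketched as follows: the probability of picking a mixing partner is proportional to a Gaussian kernel on the label distance, so samples with closer regression targets are mixed more often (function name and bandwidth are illustrative assumptions, not C-Mixup's exact implementation).

```python
import numpy as np

def cmixup_partner_probs(labels, i, bandwidth=1.0):
    """Probability of choosing each candidate j as the mixing partner of
    anchor i, proportional to a Gaussian kernel on the label distance."""
    d2 = (labels - labels[i]) ** 2              # squared label distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))    # closer labels -> higher weight
    w[i] = 0.0                                  # never mix a sample with itself
    return w / w.sum()

labels = np.array([0.0, 0.1, 5.0])              # regression targets
p = cmixup_partner_probs(labels, i=0)
# the nearby target (0.1) receives far more probability mass than the distant one (5.0)
```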
arXiv Detail & Related papers (2022-10-11T20:39:38Z)
- AutoMix: Unveiling the Power of Mixup [34.623943038648164]
We present a flexible, general Automatic Mixup framework which utilizes discriminative features to learn a sample mixing policy adaptively.
We regard mixup as a pretext task and split it into two sub-problems: mixed samples generation and mixup classification.
Experiments on six popular classification benchmarks show that AutoMix consistently outperforms other leading mixup methods.
arXiv Detail & Related papers (2021-03-24T07:21:53Z)
- MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks [97.08677678499075]
We introduce MixMo, a new framework for learning multi-input multi-output deep subnetworks.
We show that binary mixing in features - particularly with patches from CutMix - enhances results by making subnetworks stronger and more diverse.
In addition to being easy to implement and adding no cost at inference, our models outperform much costlier data augmented deep ensembles.
arXiv Detail & Related papers (2021-03-10T15:31:02Z)
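The binary patch mixing described above can be sketched as a CutMix-style rectangular mask applied to two feature maps (a minimal sketch; function name, Beta parameters, and patch geometry are illustrative assumptions, not MixMo's exact design).

```python
import numpy as np

def patch_mix(f1, f2, rng=np.random.default_rng(0)):
    """Binary (0/1) mixing of two feature maps via a random rectangular patch,
    in the spirit of CutMix applied in feature space."""
    h, w = f1.shape[-2:]
    lam = rng.beta(2.0, 2.0)                     # target area ratio kept from f1
    ph = int(h * np.sqrt(1.0 - lam))             # patch height taken from f2
    pw = int(w * np.sqrt(1.0 - lam))             # patch width taken from f2
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    mixed = f1.copy()
    mixed[..., top:top + ph, left:left + pw] = f2[..., top:top + ph, left:left + pw]
    return mixed

f1, f2 = np.ones((8, 8)), np.zeros((8, 8))
mixed = patch_mix(f1, f2)
```

Unlike linear interpolation, every feature location in the result comes entirely from one of the two inputs, which is what makes the mixing "binary".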
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.