ResizeMix: Mixing Data with Preserved Object Information and True Labels
- URL: http://arxiv.org/abs/2012.11101v1
- Date: Mon, 21 Dec 2020 03:43:13 GMT
- Title: ResizeMix: Mixing Data with Preserved Object Information and True Labels
- Authors: Jie Qin, Jiemin Fang, Qian Zhang, Wenyu Liu, Xingang Wang, Xinggang Wang
- Abstract summary: We systematically study the importance of saliency information for mixing data, and find that saliency information is not essential for improving augmentation performance.
We propose a more effective but very easily implemented method, namely ResizeMix.
- Score: 57.00554495298033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is a powerful technique to increase the diversity of data,
which can effectively improve the generalization ability of neural networks in
image recognition tasks. Recent data mixing based augmentation strategies have
achieved great success. Especially, CutMix uses a simple but effective method
to improve the classifiers by randomly cropping a patch from one image and
pasting it on another image. To further improve the performance of CutMix, a
series of works explore using the saliency information of the image to guide
the mixing. We systematically study the importance of saliency information for
mixing data, and find that saliency information is not essential for improving
augmentation performance. Furthermore, we find that cutting-based data mixing
methods suffer from two problems, label misallocation and missing object
information, which cannot be resolved simultaneously. We propose a more
effective yet easily implemented method, namely ResizeMix. We mix the data by
directly resizing the source image to a small patch and pasting it onto
another image. The obtained patch preserves more substantial object information
compared with conventional cut-based methods. ResizeMix shows evident
advantages over CutMix and the saliency-guided methods on both image
classification and object detection tasks without additional computation cost,
which even outperforms most costly search-based automatic augmentation methods.
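The resize-and-paste operation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scale range, the nearest-neighbour resize, and the function name are assumptions chosen for clarity.

```python
import numpy as np

def resizemix(source, target, alpha=0.1, beta=0.8, rng=None):
    """Mix two images ResizeMix-style: shrink the WHOLE source image to a
    small patch and paste it at a random location on the target.

    `source` and `target` are HxWxC arrays of the same shape. The scale
    range (alpha, beta) and nearest-neighbour resize are illustrative
    choices. Returns the mixed image and the mixing weight `lam` kept by
    the target label (the source label gets 1 - lam).
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W = target.shape[:2]
    tau = rng.uniform(alpha, beta)                  # patch scale ratio
    ph, pw = max(1, int(H * tau)), max(1, int(W * tau))

    # Nearest-neighbour resize of the FULL source image. Unlike CutMix,
    # which crops a random (possibly background-only) region, the resized
    # patch always contains the whole source object.
    ys = np.arange(ph) * H // ph
    xs = np.arange(pw) * W // pw
    patch = source[ys][:, xs]

    # Paste the patch at a random location on the target image.
    y0 = rng.integers(0, H - ph + 1)
    x0 = rng.integers(0, W - pw + 1)
    mixed = target.copy()
    mixed[y0:y0 + ph, x0:x0 + pw] = patch

    # The label is mixed in proportion to area: the target keeps the
    # uncovered fraction, the source gets the patch-area fraction.
    lam = 1.0 - (ph * pw) / (H * W)
    return mixed, lam
```

Because the patch is a resized copy of the entire source image rather than a random crop, the area-proportional label always corresponds to real object content, which is the paper's answer to label misallocation.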
Related papers
- TransformMix: Learning Transformation and Mixing Strategies from Data [20.79680733590554]
We propose an automated approach, TransformMix, to learn better transformation and mixing augmentation strategies from data.
We demonstrate the effectiveness of TransformMix on multiple datasets in transfer learning, classification, object detection, and knowledge distillation settings.
arXiv Detail & Related papers (2024-03-19T04:36:41Z)
- SpliceMix: A Cross-scale and Semantic Blending Augmentation Strategy for Multi-label Image Classification [46.8141860303439]
We introduce a simple but effective augmentation strategy for multi-label image classification, namely SpliceMix.
The "splice" in our method is two-fold: 1) Each mixed image is a splice of several downsampled images in the form of a grid, where the semantics of the images being mixed are blended without object deficiencies, alleviating co-occurrence bias; 2) We splice mixed images and the original mini-batch to form a new SpliceMixed mini-batch, which allows images at different scales to contribute to training together.
arXiv Detail & Related papers (2023-11-26T05:45:27Z)
- GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps [6.396288020763144]
We propose GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead.
We develop an efficient pairing algorithm that pursues to minimize the conflict of salient regions of paired images.
Experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance.
arXiv Detail & Related papers (2023-06-29T00:55:51Z)
- Use the Detection Transformer as a Data Augmenter [13.15197086963704]
DeMix builds on CutMix, a simple yet highly effective data augmentation technique.
CutMix improves model performance by cutting and pasting a patch from one image onto another, yielding a new image.
DeMix elaborately selects a semantically rich patch, located by a pre-trained DETR.
arXiv Detail & Related papers (2023-04-10T12:50:17Z)
- SMMix: Self-Motivated Image Mixing for Vision Transformers [65.809376136455]
CutMix is a vital augmentation strategy that determines the performance and generalization ability of vision transformers (ViTs).
Existing CutMix variants tackle this problem by generating more consistent mixed images or more precise mixed labels.
We propose an efficient and effective Self-Motivated image Mixing method (SMMix), which motivates both image and label enhancement using the model under training itself.
arXiv Detail & Related papers (2022-12-26T00:19:39Z)
- CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping [97.05377757299672]
We present a simple method, CropMix, for producing a rich input distribution from the original dataset distribution.
CropMix can be seamlessly applied to virtually any training recipe and neural network architecture performing classification tasks.
We show that CropMix is of benefit to both contrastive learning and masked image modeling towards more powerful representations.
arXiv Detail & Related papers (2022-05-31T16:57:28Z)
- SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data [124.95585891086894]
The proposed method is called Semantically Proportional Mixing (SnapMix).
It exploits the class activation map (CAM) to lessen label noise when augmenting fine-grained data.
Our method consistently outperforms existing mixing-based approaches.
arXiv Detail & Related papers (2020-12-09T03:37:30Z)
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z)
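The attention-guided region selection described for Attentive CutMix can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the grid size, patch count, and the convention that a precomputed attention map is supplied externally (in the paper it comes from an intermediate layer of a feature extractor) are all choices made here for clarity.

```python
import numpy as np

def attentive_cutmix(source, target, attention, n_patches=6, grid=7):
    """Paste the `n_patches` most-attended grid cells of `source` onto
    `target` at the same spatial positions.

    `attention` is a grid x grid saliency/attention map for the source
    image; how it is obtained is model-specific, so it is simply passed
    in here. Returns the mixed image and the target-label weight `lam`.
    """
    H, W = target.shape[:2]
    ch, cw = H // grid, W // grid

    # Indices of the most descriptive cells, highest attention first.
    top = np.argsort(attention.ravel())[::-1][:n_patches]

    mixed = target.copy()
    for idx in top:
        gy, gx = divmod(int(idx), grid)
        y0, x0 = gy * ch, gx * cw
        mixed[y0:y0 + ch, x0:x0 + cw] = source[y0:y0 + ch, x0:x0 + cw]

    # The target label keeps the unpasted area fraction.
    lam = 1.0 - n_patches / (grid * grid)
    return mixed, lam
```

Compared with plain CutMix, the pasted cells are guaranteed to come from highly attended (and hence likely object-bearing) regions, at the cost of a forward pass to produce the attention map.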
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.