You Only Cut Once: Boosting Data Augmentation with a Single Cut
- URL: http://arxiv.org/abs/2201.12078v1
- Date: Fri, 28 Jan 2022 12:34:40 GMT
- Title: You Only Cut Once: Boosting Data Augmentation with a Single Cut
- Authors: Junlin Han, Pengfei Fang, Weihao Li, Jie Hong, Mohammad Ali Armin, Ian
Reid, Lars Petersson, Hongdong Li
- Abstract summary: We present You Only Cut Once (YOCO) for performing data augmentations.
YOCO cuts one image into two pieces and performs data augmentations individually within each piece.
Applying YOCO improves the diversity of the augmentation per sample and encourages neural networks to recognize objects from partial information.
- Score: 85.90978190685837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present You Only Cut Once (YOCO) for performing data augmentations. YOCO
cuts one image into two pieces and performs data augmentations individually
within each piece. Applying YOCO improves the diversity of the augmentation per
sample and encourages neural networks to recognize objects from partial
information. YOCO is parameter-free, easy to use, and boosts almost all
augmentations for free. Thorough experiments are conducted
to evaluate its effectiveness. We first demonstrate that YOCO can be seamlessly
applied to varying data augmentations and neural network architectures, and brings
performance gains on CIFAR and ImageNet classification tasks, sometimes
surpassing conventional image-level augmentation by large margins. Moreover, we
show YOCO benefits contrastive pre-training toward a more powerful
representation that can be better transferred to multiple downstream tasks.
Finally, we study a number of variants of YOCO and empirically analyze the
performance for respective settings. Code is available at GitHub.
Related papers
- Do We Need All the Synthetic Data? Towards Targeted Synthetic Image Augmentation via Diffusion Models [12.472871440252105]
We show that synthetically augmenting part of the data that is not learned early in training outperforms augmenting the entire dataset.
Our method boosts the performance by up to 2.8% in a variety of scenarios.
It can also easily stack with existing weak and strong augmentation strategies to further boost the performance.
arXiv Detail & Related papers (2025-05-27T07:27:03Z)
- Enhancing Image Classification with Augmentation: Data Augmentation Techniques for Improved Image Classification [0.0]
Convolutional Neural Networks (CNNs) serve as the workhorse of deep learning, finding applications in various fields that rely on images.
In this study, we explore the effectiveness of 11 different sets of data augmentation techniques, which include three novel sets proposed in this work.
The ensemble of image augmentation techniques proposed emerges as the most effective on the Caltech-101 dataset.
arXiv Detail & Related papers (2025-02-25T23:03:30Z)
- You Only Need Half: Boosting Data Augmentation by Using Partial Content [5.611768906855499]
We propose a novel data augmentation method termed You Only Need hAlf (YONA)
YONA bisects an image, substitutes one half with noise, and applies data augmentation techniques to the remaining half.
This method reduces the redundant information in the original image, encourages neural networks to recognize objects from incomplete views, and significantly enhances neural networks' robustness.
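The YONA procedure summarized above, bisect the image, replace one half with noise, augment the other, can be sketched similarly. This is a hypothetical NumPy sketch of the described mechanism (the function name `yona` and the vertical bisection are assumptions), not the paper's code.

```python
import numpy as np

def yona(image, augment, rng=None):
    """Sketch of the YONA idea: bisect an image, replace one half
    with Gaussian noise, and augment only the remaining half."""
    rng = np.random.default_rng() if rng is None else rng
    w = image.shape[1]
    left, right = image[:, : w // 2], image[:, w // 2 :]
    if rng.random() < 0.5:
        # keep and augment the left half; the right half becomes noise
        right = rng.standard_normal(right.shape).astype(image.dtype)
        left = augment(left)
    else:
        # keep and augment the right half; the left half becomes noise
        left = rng.standard_normal(left.shape).astype(image.dtype)
        right = augment(right)
    return np.concatenate([left, right], axis=1)

# Example: brightness scaling applied to the kept half only
img = np.ones((4, 4), dtype=np.float32)
out = yona(img, lambda x: x * 2.0, rng=np.random.default_rng(0))
```

Discarding half the image removes redundant content and forces the network to classify from an incomplete view, which is the robustness mechanism the summary describes.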
arXiv Detail & Related papers (2024-05-05T06:57:40Z)
- Boosting Semi-Supervised 2D Human Pose Estimation by Revisiting Data Augmentation and Consistency Training [54.074020740827855]
We find that SSHPE can be boosted from two cores: advanced data augmentations and concise consistency training ways.
This simple and compact design is interpretable, and easily benefits from newly found augmentations.
We extensively validate the superiority and versatility of our approach on conventional human body images, overhead fisheye images, and human hand images.
arXiv Detail & Related papers (2024-02-18T12:27:59Z)
- Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss [61.355272240758]
Premier-TACO is a multitask feature representation learning approach.
It is designed to improve few-shot policy learning efficiency in sequential decision-making tasks.
arXiv Detail & Related papers (2024-02-09T05:04:40Z)
- Soft Augmentation for Image Classification [68.71067594724663]
We propose generalizing augmentation with invariant transforms to soft augmentation.
We show that soft targets allow for more aggressive data augmentation.
We also show that soft augmentations generalize to self-supervised classification tasks.
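One way to read the soft-target idea above is that the harder an augmentation corrupts a view, the more the one-hot label should be smoothed, so the model is not forced to be fully confident on heavily distorted inputs. The sketch below is a loose illustration of that principle under stated assumptions (the function `soft_target` and the linear interpolation toward the uniform distribution are ours, not necessarily the paper's exact formulation).

```python
import numpy as np

def soft_target(label_index, num_classes, aug_strength):
    """Smooth a one-hot target toward the uniform distribution in
    proportion to augmentation strength (aug_strength in [0, 1])."""
    one_hot = np.eye(num_classes)[label_index]
    uniform = np.full(num_classes, 1.0 / num_classes)
    return (1.0 - aug_strength) * one_hot + aug_strength * uniform

# Example: a moderately strong augmentation softens class 2 of 4
t = soft_target(2, 4, aug_strength=0.5)
```

Softening targets this way lets training tolerate far more aggressive augmentations, since the loss no longer demands full confidence on views that may have lost the object.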
arXiv Detail & Related papers (2022-11-09T01:04:06Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- Solar Potential Assessment using Multi-Class Buildings Segmentation from Aerial Images [3.180674374101366]
We exploit the power of fully convolutional neural networks for an instance segmentation task using extra added classes to the output.
We also show that CutMix data augmentation and the One-Cycle learning rate policy are effective regularization methods for achieving a better fit on the training data.
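CutMix, mentioned above, pastes a random rectangle from one image into another and mixes the labels in proportion to the pasted area. The following is a minimal NumPy sketch of the standard CutMix recipe, not the paper's training code; the `Beta(1, 1)` mixing prior is the common default and an assumption here.

```python
import numpy as np

def cutmix(img_a, label_a, img_b, label_b, rng=None):
    """Sketch of CutMix: paste a random rectangle from img_b into
    img_a and mix the one-hot labels by the pasted-area fraction."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img_a.shape[:2]
    lam = rng.beta(1.0, 1.0)  # fraction of area kept from img_a
    cut_h = int(round(h * np.sqrt(1.0 - lam)))
    cut_w = int(round(w * np.sqrt(1.0 - lam)))
    y = rng.integers(0, h - cut_h + 1)
    x = rng.integers(0, w - cut_w + 1)
    mixed = img_a.copy()
    mixed[y : y + cut_h, x : x + cut_w] = img_b[y : y + cut_h, x : x + cut_w]
    lam = 1.0 - (cut_h * cut_w) / (h * w)  # exact ratio after rounding
    return mixed, lam * label_a + (1.0 - lam) * label_b

# Example: mix an all-zeros image with an all-ones image
a, b = np.zeros((8, 8)), np.ones((8, 8))
la, lb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mixed, label = cutmix(a, la, b, lb, rng=np.random.default_rng(0))
```

Recomputing `lam` from the actual rectangle keeps the mixed label exactly proportional to the pixels each source image contributes.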
arXiv Detail & Related papers (2021-11-22T18:16:07Z)
- Augmentation Pathways Network for Visual Recognition [61.33084317147437]
This paper introduces Augmentation Pathways (AP) to stabilize training on a much wider range of augmentation policies.
AP tames heavy data augmentations and stably boosts performance without a careful selection among augmentation policies.
Experimental results on ImageNet benchmarks demonstrate the compatibility and effectiveness on a much wider range of augmentations.
arXiv Detail & Related papers (2021-07-26T06:54:53Z)
- Hierarchical Self-Supervised Learning for Medical Image Segmentation Based on Multi-Domain Data Aggregation [23.616336382437275]
We propose Hierarchical Self-Supervised Learning (HSSL) for medical image segmentation.
We first aggregate a dataset from several medical challenges, then pre-train the network in a self-supervised manner, and finally fine-tune on labeled data.
Compared to learning from scratch, our new method yields better performance on various tasks.
arXiv Detail & Related papers (2021-07-10T18:17:57Z)
- Dataset Condensation with Differentiable Siamese Augmentation [30.571335208276246]
We focus on condensing large training sets into significantly smaller synthetic sets which can be used to train deep neural networks.
We propose Differentiable Siamese Augmentation that enables effective use of data augmentation to synthesize more informative synthetic images.
We show that, using less than 1% of the data, our method achieves 99.6%, 94.9%, 88.5%, and 71.5% relative performance on MNIST, FashionMNIST, SVHN, and CIFAR10, respectively.
arXiv Detail & Related papers (2021-02-16T16:32:21Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To tackle the size of the created dataset, we propose to apply a dataset distillation strategy to compress it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.