GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps
- URL: http://arxiv.org/abs/2306.16612v1
- Date: Thu, 29 Jun 2023 00:55:51 GMT
- Title: GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps
- Authors: Minsoo Kang, Suhyun Kim
- Abstract summary: We propose GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead.
We develop an efficient pairing algorithm that seeks to minimize conflict between the salient regions of paired images.
Experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance.
- Score: 6.396288020763144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is now an essential part of the image training process, as
it effectively prevents overfitting and makes the model more robust against
noisy datasets. Recent mixing augmentation strategies have advanced to generate
the mixup mask that can enrich the saliency information, which is a supervisory
signal. However, these methods incur a significant computational burden to
optimize the mixup mask. From this motivation, we propose a novel
saliency-aware mixup method, GuidedMixup, which aims to retain the salient
regions in mixup images with low computational overhead. We develop an
efficient pairing algorithm that seeks to minimize conflict between the salient
regions of paired images and to achieve rich saliency in mixup images. Moreover,
GuidedMixup controls the mixup ratio for each pixel to better preserve the
salient region by interpolating two paired images smoothly. The experiments on
several datasets demonstrate that GuidedMixup provides a good trade-off between
augmentation overhead and generalization performance on classification
datasets. In addition, our method shows good performance in experiments with
corrupted or reduced datasets.
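The core idea of the abstract, mixing two images pixel-wise so that the more salient source dominates each location, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the normalization of the two saliency maps, and the use of the mean mixing ratio as the soft-label weight are all choices made here for clarity.

```python
import numpy as np

def pixelwise_saliency_mixup(img_a, img_b, sal_a, sal_b, eps=1e-8):
    """Sketch of a saliency-guided pixel-wise mixup (illustrative, not
    the paper's code). Each output pixel is a convex combination of the
    two inputs, weighted by their normalized saliency at that pixel, so
    the more salient source dominates locally.

    img_a, img_b: H x W x C float arrays.
    sal_a, sal_b: non-negative H x W saliency maps (assumed given,
    e.g. from a gradient-based saliency extractor).
    """
    # Per-pixel mixing ratio: fraction of total saliency held by image A.
    ratio = sal_a / (sal_a + sal_b + eps)               # H x W, in [0, 1]
    mixed = ratio[..., None] * img_a + (1.0 - ratio[..., None]) * img_b
    # One plausible soft-label weight for image A: the mean mixing ratio.
    lam = float(ratio.mean())
    return mixed, lam
```

Because the ratio varies smoothly per pixel rather than being a hard binary mask, the two images are interpolated smoothly, in the spirit of the pixel-level mixup ratio the abstract describes.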
Related papers
- TransformMix: Learning Transformation and Mixing Strategies from Data [20.79680733590554]
We propose an automated approach, TransformMix, to learn better transformation and mixing augmentation strategies from data.
We demonstrate the effectiveness of TransformMix on multiple datasets in transfer learning, classification, object detection, and knowledge distillation settings.
arXiv Detail & Related papers (2024-03-19T04:36:41Z)
- MiAMix: Enhancing Image Classification through a Multi-stage Augmented Mixed Sample Data Augmentation Method [0.5919433278490629]
We introduce a novel mixup method called MiAMix, which stands for Multi-stage Augmented Mixup.
MiAMix integrates image augmentation into the mixup framework, utilizes multiple diversified mixing methods concurrently, and improves the mixing method by randomly selecting mixing mask augmentation methods.
While recent methods rely on saliency information, MiAMix is also designed for computational efficiency, reducing additional overhead and offering easy integration into existing training pipelines.
arXiv Detail & Related papers (2023-08-05T06:29:46Z)
- MixupE: Understanding and Improving Mixup from Directional Derivative Perspective [86.06981860668424]
We propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup.
Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures.
arXiv Detail & Related papers (2022-12-27T07:03:52Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Harnessing Hard Mixed Samples with Decoupled Regularizer [69.98746081734441]
Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data.
In this paper, we propose an efficient mixup objective function with a decoupled regularizer, named Decoupled Mixup (DM).
DM can adaptively utilize hard mixed samples to mine discriminative features without losing the original smoothness of mixup.
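For reference, the vanilla mixup operation that DM (and the other entries in this list) builds on is simple to state: sample a mixing coefficient from a Beta distribution and take convex combinations of random sample pairs and their one-hot labels. A minimal sketch, with function and parameter names chosen here for illustration:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=1.0, rng=None):
    """Vanilla mixup (Zhang et al., 2018): mix each sample in the batch
    with a randomly permuted partner, using lam ~ Beta(alpha, alpha).

    x: (N, ...) batch of inputs; y_onehot: (N, K) one-hot labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = float(rng.beta(alpha, alpha))
    perm = rng.permutation(len(x))
    # Same coefficient mixes both inputs and labels.
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix, lam
```

The saliency-aware methods surveyed here replace the single global coefficient `lam` with spatially structured masks; DM instead changes the objective applied to the mixed labels.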
arXiv Detail & Related papers (2022-03-21T07:12:18Z)
- ResizeMix: Mixing Data with Preserved Object Information and True Labels [57.00554495298033]
We study the importance of saliency information for mixing data and find that it is not strictly necessary for improving augmentation performance.
We propose a more effective but very easily implemented method, namely ResizeMix.
arXiv Detail & Related papers (2020-12-21T03:43:13Z)
- SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data [124.95585891086894]
The proposed method, Semantically Proportional Mixing (SnapMix), exploits the class activation map (CAM) to lessen label noise when augmenting fine-grained data.
Our method consistently outperforms existing mixing-based approaches.
arXiv Detail & Related papers (2020-12-09T03:37:30Z)
- Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup [19.680580983094323]
Puzzle Mix is a mixup method for explicitly utilizing the saliency information and the underlying statistics of the natural examples.
Our experiments show that Puzzle Mix achieves state-of-the-art generalization and adversarial robustness results.
arXiv Detail & Related papers (2020-09-15T10:10:23Z)
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z)
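The Attentive CutMix idea in the last entry, choosing the most descriptive regions from an attention map and pasting them onto another image, can be sketched as below. This is an illustrative reconstruction under assumptions, not the authors' code: the grid size, the number of patches `k`, and the label-weight formula are choices made here for clarity.

```python
import numpy as np

def attentive_cutmix(target, source, attn, k=4, grid=4):
    """Sketch of Attentive CutMix-style patch transfer (hypothetical
    helper): split the source image into a grid x grid lattice, rank
    cells by mean attention, and paste the top-k most attentive source
    cells onto the target image.

    target, source: H x W x C images; attn: H x W attention map from a
    feature extractor (assumed given). H and W must be divisible by grid.
    """
    h, w = target.shape[:2]
    ch, cw = h // grid, w // grid
    # Mean attention per grid cell: (grid, grid) score matrix.
    cell_scores = attn.reshape(grid, ch, grid, cw).mean(axis=(1, 3))
    top_cells = cell_scores.ravel().argsort()[::-1][:k]
    out = target.copy()
    # One plausible soft-label weight: fraction of area kept from target.
    lam = 1.0 - k / (grid * grid)
    for idx in top_cells:
        r, c = divmod(int(idx), grid)
        out[r*ch:(r+1)*ch, c*cw:(c+1)*cw] = source[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
    return out, lam
```

Unlike plain CutMix, which cuts a random rectangle, the patches here follow the attention map, so the pasted regions tend to contain the most descriptive parts of the source image.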
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.