Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup
- URL: http://arxiv.org/abs/2009.06962v2
- Date: Wed, 30 Dec 2020 10:45:39 GMT
- Title: Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup
- Authors: Jang-Hyun Kim, Wonho Choo, Hyun Oh Song
- Abstract summary: Puzzle Mix is a mixup method that explicitly utilizes the saliency information and the underlying statistics of natural examples.
Our experiments show Puzzle Mix achieves state-of-the-art generalization and adversarial robustness results.
- Score: 19.680580983094323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep neural networks achieve great performance on fitting the training
distribution, the learned networks are prone to overfitting and are susceptible
to adversarial attacks. In this regard, a number of mixup-based augmentation
methods have been recently proposed. However, these approaches mainly focus on
creating previously unseen virtual examples and can sometimes provide a
misleading supervisory signal to the network. To this end, we propose Puzzle
Mix, a mixup method that explicitly utilizes the saliency information and the
underlying statistics of the natural examples. This leads to an interesting
optimization problem alternating between a multi-label objective for the optimal
mixing mask and a saliency-discounted optimal transport objective. Our
experiments show Puzzle Mix achieves state-of-the-art generalization and
adversarial robustness results compared to other mixup methods on the
CIFAR-100, Tiny-ImageNet, and ImageNet datasets. The source code is available
at https://github.com/snu-mllab/PuzzleMix.
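For context, the mixing form that Puzzle Mix generalizes can be written in a few lines. The sketch below contrasts vanilla mixup (a global convex combination of two examples and their labels) with mask-based mixing, where a per-pixel mask decides which input supplies each region and the label weight is the mask's mean. This is a minimal illustration assuming one-hot label vectors; the helper names are ours, and the mask is taken as given rather than optimized, whereas Puzzle Mix obtains it by alternating the multi-label objective and the saliency-discounted transport objective described above.

```python
import torch

def mixup(x1, y1, x2, y2, alpha=1.0):
    # Vanilla mixup: a single Beta-distributed coefficient blends both
    # the inputs and their one-hot labels globally.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y

def mask_mix(x1, y1, x2, y2, mask):
    # Mask-based mixing: `mask` has values in [0, 1] and broadcasts over
    # the image (e.g. shape (1, H, W) for a (C, H, W) input). Each region
    # comes from one input, and the label is mixed by the mask's mean.
    # Here the mask is simply given; Puzzle Mix optimizes it (and
    # transports salient regions), which this sketch does not attempt.
    x = mask * x1 + (1 - mask) * x2
    lam = mask.float().mean().item()
    y = lam * y1 + (1 - lam) * y2
    return x, y
```

A mask produced by a saliency criterion (see the sketch after the related-papers list below) can be plugged directly into `mask_mix`.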
Related papers
- Adversarial AutoMixup [50.1874436169571]
We propose AdAutomixup, an adversarial automatic mixup augmentation approach.
It generates challenging samples to train a robust classifier for image classification.
Our approach outperforms the state of the art in various classification scenarios.
arXiv Detail & Related papers (2023-12-19T08:55:00Z)
- GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps [6.396288020763144]
We propose GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead.
We develop an efficient pairing algorithm that seeks to minimize the conflict between the salient regions of paired images (a generic sketch of saliency-guided masking appears after this list).
Experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance.
arXiv Detail & Related papers (2023-06-29T00:55:51Z)
- MixupE: Understanding and Improving Mixup from Directional Derivative Perspective [86.06981860668424]
We propose an improved version of Mixup, theoretically justified to deliver better generalization performance than vanilla Mixup.
Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures.
arXiv Detail & Related papers (2022-12-27T07:03:52Z)
- Expeditious Saliency-guided Mix-up through Random Gradient Thresholding [89.59134648542042]
Mix-up training approaches have proven to be effective in improving the generalization ability of Deep Neural Networks.
In this paper, inspired by the complementary strengths of the random and saliency-guided routes, we introduce a novel method that lies at their junction.
We name our method R-Mix, following the concept of "Random Mix-up".
To address the question of whether a better decision protocol exists, we train a Reinforcement Learning agent that decides the mix-up policies.
arXiv Detail & Related papers (2022-12-09T14:29:57Z)
- Harnessing Hard Mixed Samples with Decoupled Regularizer [69.98746081734441]
Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data.
In this paper, we propose an efficient mixup objective function with a decoupled regularizer, named Decoupled Mixup (DM).
DM can adaptively utilize hard mixed samples to mine discriminative features without losing the original smoothness of mixup.
arXiv Detail & Related papers (2022-03-21T07:12:18Z)
- MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks [97.08677678499075]
We introduce MixMo, a new framework for learning multi-input multi-output deep subnetworks.
We show that binary mixing in features - particularly with patches from CutMix - enhances results by making subnetworks stronger and more diverse.
In addition to being easy to implement and adding no cost at inference, our models outperform much costlier data augmented deep ensembles.
arXiv Detail & Related papers (2021-03-10T15:31:02Z)
- Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity [15.780905917870427]
We propose a new perspective on batch mixup and formulate the optimal construction of a batch of mixup data.
We also propose an iterative submodular computation algorithm, based on a modular approximation, for efficient mixup of each minibatch.
Our experiments show the proposed method achieves state-of-the-art generalization, calibration, and weakly supervised localization results.
arXiv Detail & Related papers (2021-02-05T09:12:02Z)
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z)
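As referenced in the GuidedMixup entry above, several of these methods (Puzzle Mix, GuidedMixup, Attentive CutMix) share one ingredient: a saliency or attention map that marks the regions worth preserving when two images are mixed. Below is a generic, minimal sketch of gradient-based saliency plus a top-k region mask, assuming a differentiable classifier `model` and integer class labels; the function names are illustrative, and this is not any single paper's exact procedure (Attentive CutMix, for instance, reads intermediate attention maps rather than input gradients).

```python
import torch
import torch.nn.functional as F

def gradient_saliency(model, x, y):
    # Generic input-gradient saliency: magnitude of the loss gradient
    # with respect to the input, summed over channels. Each listed paper
    # defines its own variant; this is only a common baseline form.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return grad.abs().sum(dim=1)  # (N, H, W)

def top_region_mask(saliency, keep_ratio=0.5):
    # Binary mask keeping the most salient pixels of each image, e.g. to
    # protect them when pasting content from a second image.
    n, h, w = saliency.shape
    k = max(1, int(keep_ratio * h * w))
    thresh = saliency.view(n, -1).topk(k, dim=1).values[:, -1]
    return (saliency >= thresh.view(n, 1, 1)).float()  # (N, H, W)
```

Such a mask can be fed to the `mask_mix` sketch after the abstract above; the saliency-guided methods listed here differ mainly in how the mask (and the pairing of images) is chosen.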
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.