GridMask Data Augmentation
- URL: http://arxiv.org/abs/2001.04086v3
- Date: Thu, 1 Feb 2024 03:54:08 GMT
- Title: GridMask Data Augmentation
- Authors: Pengguang Chen, Shu Liu, Hengshuang Zhao, Xingquan Wang, Jiaya Jia
- Abstract summary: We propose a novel data augmentation method `GridMask' in this paper.
It utilizes information removal to achieve state-of-the-art results in a variety of computer vision tasks.
- Score: 76.79300104795966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel data augmentation method `GridMask' in this paper. It
utilizes information removal to achieve state-of-the-art results in a variety of
computer vision tasks. We analyze the requirements of information dropping, then
show the limitations of existing information dropping algorithms and propose our
structured method, which is simple yet very effective. It is based on the deletion
of regions of the input image. Our extensive experiments show that our method
outperforms the latest AutoAugment, which is far more computationally expensive
due to its use of reinforcement learning to find the best policies. On the ImageNet
dataset for recognition, the COCO2017 dataset for object detection, and the
Cityscapes dataset for semantic segmentation, our method notably improves
performance over the baselines. These extensive experiments demonstrate the
effectiveness and generality of the new method.
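The abstract describes GridMask only at a high level: structured deletion of regions of the input image. The sketch below illustrates one way such grid-structured information removal can be implemented; the parameter names (`d_min`, `d_max`, `ratio`) and their default values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def grid_mask(image, d_min=24, d_max=64, ratio=0.5, rng=None):
    """Apply a GridMask-style structured dropout to an HxWxC (or HxW) image.

    A square grid of unit size d is tiled over the image with a random offset;
    inside every grid unit a block of side d - ratio * d is zeroed, so
    information is removed in a regular pattern rather than as a single random
    rectangle. The parameter ranges here are assumptions for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]

    d = int(rng.integers(d_min, d_max + 1))      # grid unit size
    keep = int(d * ratio)                        # side length kept per unit
    off_y, off_x = rng.integers(0, d, size=2)    # random grid offset

    mask = np.ones((h, w), dtype=image.dtype)
    for y in range(-int(off_y), h, d):
        for x in range(-int(off_x), w, d):
            y0, y1 = max(y, 0), min(y + d - keep, h)
            x0, x1 = max(x, 0), min(x + d - keep, w)
            mask[y0:y1, x0:x1] = 0               # drop this block

    return image * (mask[..., None] if image.ndim == 3 else mask)
```

In practice the masked images replace (or are mixed with) the originals during training, in the same way as other information-dropping augmentations such as Cutout.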
Related papers
- Masked Image Modeling: A Survey [73.21154550957898]
Masked image modeling emerged as a powerful self-supervised learning technique in computer vision.
We construct a taxonomy and review the most prominent papers in recent years.
We aggregate the performance results of various masked image modeling methods on the most popular datasets.
arXiv Detail & Related papers (2024-08-13T07:27:02Z)
- A Simple Background Augmentation Method for Object Detection with Diffusion Model [53.32935683257045]
In computer vision, it is well-known that a lack of data diversity will impair model performance.
We propose a simple yet effective data augmentation approach by leveraging advancements in generative models.
Background augmentation, in particular, significantly improves the models' robustness and generalization capabilities.
arXiv Detail & Related papers (2024-08-01T07:40:00Z)
- On the Effect of Image Resolution on Semantic Segmentation [27.115235051091663]
We show that a model capable of directly producing high-resolution segmentations can match the performance of more complex systems.
Our approach leverages a bottom-up information propagation technique across various scales.
We have rigorously tested our method using leading-edge semantic segmentation datasets.
arXiv Detail & Related papers (2024-02-08T04:21:30Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- MEAL: Manifold Embedding-based Active Learning [0.0]
Active learning helps learning from small amounts of data by suggesting the most promising samples for labeling.
We propose a new pool-based method for active learning, which proposes promising image regions, in each acquisition step.
We find that our active learning method achieves better performance on CamVid than other methods, while on Cityscapes the gain is negligible.
arXiv Detail & Related papers (2021-06-22T15:22:56Z)
- Mixed-Privacy Forgetting in Deep Networks [114.3840147070712]
We show that the influence of a subset of the training samples can be removed from the weights of a network trained on large-scale image classification tasks.
Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in mixed-privacy setting.
We show that our method allows forgetting without having to trade off the model accuracy.
arXiv Detail & Related papers (2020-12-24T19:34:56Z)
- KeepAugment: A Simple Information-Preserving Data Augmentation Approach [42.164438736772134]
We propose a simple yet highly effective approach, dubbed KeepAugment, to increase the fidelity of augmented images.
The idea is first to use the saliency map to detect important regions on the original images and then preserve these informative regions during augmentation.
Empirically, we demonstrate our method significantly improves on a number of prior art data augmentation schemes.
arXiv Detail & Related papers (2020-11-23T22:43:04Z)
- FenceMask: A Data Augmentation Approach for Pre-extracted Image Features [18.299882139724684]
We propose a novel data augmentation method named 'FenceMask'.
It exhibits outstanding performance in various computer vision tasks.
Our method achieved significant performance improvement on Fine-Grained Visual Categorization task and VisDrone dataset.
arXiv Detail & Related papers (2020-06-14T12:16:16Z)
- Rethinking Data Augmentation for Image Super-resolution: A Comprehensive Analysis and a New Strategy [21.89072742618842]
We provide a comprehensive analysis of the existing augmentation methods applied to the super-resolution task.
We propose CutBlur, which cuts a low-resolution patch and pastes it into the corresponding high-resolution image region and vice versa (a minimal sketch of this operation appears after this list).
Our method consistently and significantly improves the performance across various scenarios.
arXiv Detail & Related papers (2020-04-01T13:49:38Z)
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z)
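The CutBlur entry above describes the operation concretely enough to sketch: a random rectangular patch is swapped between a low-resolution image (upsampled to the high-resolution size) and its pixel-aligned high-resolution counterpart. The function name `cutblur` and the patch-size range below are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def cutblur(lr_up, hr, max_frac=0.7, rng=None):
    """Swap a random rectangular patch between an upsampled low-resolution
    image `lr_up` and its high-resolution counterpart `hr` (same HxWxC shape).

    With probability 0.5 the LR patch is pasted into the HR image ("LR -> HR"),
    otherwise the HR patch is pasted into the LR image ("HR -> LR").
    The patch-size bound `max_frac` is an assumed, illustrative value.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = hr.shape[:2]

    cut_h = int(h * rng.uniform(0.1, max_frac))   # patch height
    cut_w = int(w * rng.uniform(0.1, max_frac))   # patch width
    y = int(rng.integers(0, h - cut_h + 1))
    x = int(rng.integers(0, w - cut_w + 1))

    if rng.random() < 0.5:
        out = hr.copy()                           # LR patch -> HR image
        out[y:y + cut_h, x:x + cut_w] = lr_up[y:y + cut_h, x:x + cut_w]
    else:
        out = lr_up.copy()                        # HR patch -> LR image
        out[y:y + cut_h, x:x + cut_w] = hr[y:y + cut_h, x:x + cut_w]
    return out
```

Because the two inputs are pixel-aligned, the swap changes only the local resolution inside the patch, not the underlying image content.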
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.