Local Magnification for Data and Feature Augmentation
- URL: http://arxiv.org/abs/2211.07859v1
- Date: Tue, 15 Nov 2022 02:51:59 GMT
- Title: Local Magnification for Data and Feature Augmentation
- Authors: Kun He, Chang Liu, Stephen Lin, John E. Hopcroft
- Abstract summary: We propose an easy-to-implement and model-free data augmentation method called Local Magnification (LOMA)
LOMA generates additional training data by randomly magnifying a local area of the image.
Experiments show that our proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve the performance on image classification and object detection.
- Score: 53.04028225837681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, many data augmentation techniques have been proposed to
increase the diversity of input data and reduce the risk of overfitting in deep
neural networks. In this work, we propose an easy-to-implement and model-free
data augmentation method called Local Magnification (LOMA). Different from
other geometric data augmentation methods that perform global transformations
on images, LOMA generates additional training data by randomly magnifying a
local area of the image. This local magnification results in geometric changes
that significantly broaden the range of augmentations while maintaining the
recognizability of objects. Moreover, we extend the idea of LOMA and random
cropping to the feature space to augment the feature map, which further boosts
the classification accuracy considerably. Experiments show that our proposed
LOMA, though straightforward, can be combined with standard data augmentation
to significantly improve the performance on image classification and object
detection. Further combination with our feature augmentation techniques, termed
LOMA_IF&FO, continues to strengthen the model and outperforms advanced
intensity transformation methods for data augmentation.
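Although the abstract does not give the exact warp LOMA uses, the core idea of randomly magnifying a local area can be illustrated with a short sketch. The version below is a minimal, hypothetical implementation: it assumes a circular region, a radial power-law warp that stays continuous at the region boundary, and nearest-neighbor sampling, none of which are taken from the paper.

```python
# Hedged sketch of a local-magnification augmentation in the spirit of LOMA.
# The circular region, power-law warp, and nearest-neighbor sampling are
# simplifying assumptions, not the paper's actual formulation.
import numpy as np


def local_magnification(image, max_strength=2.0, rng=None):
    """Randomly magnify a local circular region of an H x W (x C) image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]

    # Randomly choose the center, radius, and magnification strength.
    cy, cx = rng.uniform(0, h), rng.uniform(0, w)
    radius = rng.uniform(0.2, 0.5) * min(h, w)
    alpha = rng.uniform(1.0, max_strength)  # alpha > 1 magnifies toward the center

    # Inverse mapping: for each output pixel, find the source pixel to sample.
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dy, dx = ys - cy, xs - cx
    dist = np.sqrt(dy ** 2 + dx ** 2)

    # Inside the circle, sample from closer to the center:
    # src_dist = radius * (dist / radius) ** alpha, which equals dist at the
    # boundary, so the warp blends continuously into the untouched exterior.
    scale = np.ones_like(dist)
    inside = dist < radius
    scale[inside] = (dist[inside] / radius) ** (alpha - 1.0)

    src_y = np.clip(cy + dy * scale, 0, h - 1).round().astype(int)
    src_x = np.clip(cx + dx * scale, 0, w - 1).round().astype(int)
    return image[src_y, src_x]
```

In a training pipeline this would typically be applied with some probability alongside standard augmentations such as random cropping and flipping, which matches the abstract's point that LOMA is meant to be combined with standard data augmentation. The same local-warp idea could in principle be applied to intermediate feature maps, as the feature-space extension (LOMA_IF&FO) suggests, though its exact form is not described in this summary.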
Related papers
- Erase, then Redraw: A Novel Data Augmentation Approach for Free Space Detection Using Diffusion Model [5.57325257338134]
Traditional data augmentation methods cannot alter high-level semantic attributes.
We propose a text-to-image diffusion model to parameterize image-to-image transformations.
We achieve this goal by erasing instances of real objects from the original dataset and generating new instances with similar semantics in the erased regions.
arXiv Detail & Related papers (2024-09-30T10:21:54Z) - A Simple Background Augmentation Method for Object Detection with Diffusion Model [53.32935683257045]
In computer vision, it is well-known that a lack of data diversity will impair model performance.
We propose a simple yet effective data augmentation approach by leveraging advancements in generative models.
Background augmentation, in particular, significantly improves the models' robustness and generalization capabilities.
arXiv Detail & Related papers (2024-08-01T07:40:00Z) - Select-Mosaic: Data Augmentation Method for Dense Small Object Scenes [4.418515380386838]
The Mosaic data augmentation technique stitches multiple images together to increase the diversity and complexity of training data.
This paper proposes the Select-Mosaic data augmentation method, which improves Mosaic with a fine-grained region selection strategy.
The improved Select-Mosaic method demonstrates superior performance in handling dense small object detection tasks.
arXiv Detail & Related papers (2024-06-08T09:22:08Z) - AugDiff: Diffusion based Feature Augmentation for Multiple Instance
Learning in Whole Slide Image [15.180437840817788]
Multiple Instance Learning (MIL), a powerful strategy for weakly supervised learning, is able to perform various prediction tasks on gigapixel Whole Slide Images (WSIs).
We introduce the Diffusion Model (DM) into MIL for the first time and propose a feature augmentation framework called AugDiff.
We conduct extensive experiments over three distinct cancer datasets, two different feature extractors, and three prevalent MIL algorithms to evaluate the performance of AugDiff.
arXiv Detail & Related papers (2023-03-11T10:36:27Z) - Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z) - Learning Representational Invariances for Data-Efficient Action
Recognition [52.23716087656834]
We show that our data augmentation strategy leads to promising performance on the Kinetics-100, UCF-101, and HMDB-51 datasets.
We also validate our data augmentation strategy in the fully supervised setting and demonstrate improved performance.
arXiv Detail & Related papers (2021-03-30T17:59:49Z) - Context Decoupling Augmentation for Weakly Supervised Semantic
Segmentation [53.49821324597837]
Weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years.
We present a Context Decoupling Augmentation (CDA) method to change the inherent context in which the objects appear.
To validate the effectiveness of the proposed method, extensive experiments on the PASCAL VOC 2012 dataset with several alternative network architectures demonstrate that CDA can boost various popular WSSS methods to a new state of the art by a large margin.
arXiv Detail & Related papers (2021-03-02T15:05:09Z) - Adaptive Weighting Scheme for Automatic Time-Series Data Augmentation [79.47771259100674]
We present two sample-adaptive automatic weighting schemes for data augmentation.
We validate our proposed methods on a large, noisy financial dataset and on time-series datasets from the UCR archive.
On the financial dataset, we show that the methods in combination with a trading strategy lead to improvements in annualized returns of over 50%, and on the time-series data we outperform state-of-the-art models on over half of the datasets, and achieve similar performance in accuracy on the others.
arXiv Detail & Related papers (2021-02-16T17:50:51Z) - KeepAugment: A Simple Information-Preserving Data Augmentation Approach [42.164438736772134]
We propose a simple yet highly effective approach, dubbed KeepAugment, to increase the fidelity of augmented images.
The idea is first to use the saliency map to detect important regions in the original images and then preserve these informative regions during augmentation (a minimal sketch of this idea appears after this list).
Empirically, we demonstrate our method significantly improves on a number of prior art data augmentation schemes.
arXiv Detail & Related papers (2020-11-23T22:43:04Z)
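As a brief illustration of the saliency-preserving idea in the KeepAugment summary above, the sketch below assumes a geometry-preserving augmentation (e.g., color jitter or Cutout-style erasing) and a generic `saliency` callable; the box-selection rule and other details are placeholders rather than the method's actual choices.

```python
# Hedged sketch of saliency-preserving augmentation in the spirit of KeepAugment:
# augment the image, then paste back the most salient rectangle of the original
# so the informative content survives. The saliency source, box-selection rule,
# and augmentation are placeholders; the real method's details are not given here.
import numpy as np


def keep_salient_region(image, augment, saliency, keep_ratio=0.3):
    """Apply `augment`, then restore the most salient box of the original image.

    Assumes `augment` preserves image geometry (same H x W), otherwise pasting
    pixels back would be meaningless; `saliency` maps the image to an H x W
    non-negative importance map.
    """
    h, w = image.shape[:2]
    original = image.copy()          # pristine copy in case `augment` works in place
    augmented = augment(image)

    # Center the preserved box on the saliency peak (a simplification).
    sal = saliency(original)
    cy, cx = np.unravel_index(np.argmax(sal), sal.shape)
    bh, bw = int(h * keep_ratio), int(w * keep_ratio)
    y0, x0 = max(0, cy - bh // 2), max(0, cx - bw // 2)
    y1, x1 = min(h, y0 + bh), min(w, x0 + bw)

    # Paste the original, unaugmented pixels back into the salient region.
    augmented[y0:y1, x0:x1] = original[y0:y1, x0:x1]
    return augmented
```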