Task Augmentation by Rotating for Meta-Learning
- URL: http://arxiv.org/abs/2003.00804v1
- Date: Sat, 8 Feb 2020 07:57:24 GMT
- Title: Task Augmentation by Rotating for Meta-Learning
- Authors: Jialin Liu, Fei Chao, Chih-Min Lin
- Abstract summary: We introduce a task augmentation method by rotating, which increases the number of classes by rotating the original images 90, 180 and 270 degrees.
Experimental results show that our approach outperforms using rotation to increase the number of images, and achieves state-of-the-art performance on miniImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks.
- Score: 5.646772123578524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is one of the most effective approaches for improving the
accuracy of modern machine learning models, and it is also indispensable to
train a deep model for meta-learning. In this paper, we introduce a task
augmentation method by rotating, which increases the number of classes by
rotating the original images 90, 180 and 270 degrees, different from
traditional augmentation methods which increase the number of images. With a
larger amount of classes, we can sample more diverse task instances during
training. Therefore, task augmentation by rotating allows us to train a deep
network by meta-learning methods with little over-fitting. Experimental results
show that our approach outperforms using rotation to increase the number of
images and achieves state-of-the-art performance on miniImageNet, CIFAR-FS, and
FC100 few-shot learning benchmarks. The code is available on
\url{www.github.com/AceChuse/TaskLevelAug}.
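A minimal sketch of the core idea, not the authors' released code (the function name, dictionary layout, and image shapes are illustrative): each 90-degree rotation of a class's images is treated as a brand-new class, quadrupling the class pool from which training episodes are sampled.

```python
import numpy as np

def augment_classes_by_rotation(class_images):
    """Treat each 90-degree rotation of a class as a new class.

    class_images: dict mapping class label -> array of images (N, H, W, C).
    Returns a dict with 4x as many classes: the originals plus their
    90-, 180-, and 270-degree rotations, each under a distinct label.
    """
    augmented = {}
    for label, images in class_images.items():
        for k in range(4):  # k quarter-turns: 0, 90, 180, 270 degrees
            # rotate every image in this class by k * 90 degrees
            rotated = np.stack(
                [np.rot90(img, k=k, axes=(0, 1)) for img in images]
            )
            augmented[(label, k * 90)] = rotated
    return augmented

# Example: 2 original classes of 5 images each -> 8 classes afterwards.
rng = np.random.default_rng(0)
classes = {
    "cat": rng.random((5, 84, 84, 3)),
    "dog": rng.random((5, 84, 84, 3)),
}
augmented = augment_classes_by_rotation(classes)
assert len(augmented) == 4 * len(classes)
```

With four times as many classes, an N-way episode sampler draws from a much larger pool of class combinations, which is the diversity the abstract credits for reduced over-fitting.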
Related papers
- MOWA: Multiple-in-One Image Warping Model [65.73060159073644]
We propose a Multiple-in-One image warping model (named MOWA) in this work.
We mitigate the difficulty of multi-task learning by disentangling the motion estimation at both the region level and pixel level.
To our knowledge, this is the first work that solves multiple practical warping tasks in one single model.
arXiv Detail & Related papers (2024-04-16T16:50:35Z)
- Gated Self-supervised Learning For Improving Supervised Learning
We propose a novel approach to self-supervised learning for image classification using several localizable augmentations with the combination of the gating method.
Our approach uses flip and shuffle channel augmentations in addition to the rotation, allowing the model to learn rich features from the data.
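The augmentations named in this summary can be sketched as plain array operations (a hedged illustration, assuming HWC image layout; the function names are not from the paper's code):

```python
import numpy as np

def flip_augment(img):
    """Horizontal flip: mirror the image along its width axis."""
    return img[:, ::-1, :]

def shuffle_channel_augment(img, rng):
    """Channel shuffle: apply a random permutation to the color channels."""
    perm = rng.permutation(img.shape[-1])
    return img[:, :, perm]

def rotate_augment(img, k):
    """Rotate the image by k quarter-turns (k * 90 degrees)."""
    return np.rot90(img, k=k, axes=(0, 1))

# Build three localizable views of one image for a self-supervised objective.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
views = [flip_augment(img), shuffle_channel_augment(img, rng), rotate_augment(img, 1)]
assert all(v.shape == (32, 32, 3) for v in views)
```

Each transformation is invertible and label-preserving at the pretext-task level, which is what makes it usable as a self-supervised target.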
arXiv Detail & Related papers (2023-01-14T09:32:12Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
- Masked Autoencoders are Robust Data Augmentors [90.34825840657774]
Regularization techniques like image augmentation are necessary for deep neural networks to generalize well.
We propose a novel perspective of augmentation to regularize the training process.
We show that utilizing such model-based nonlinear transformation as data augmentation can improve high-level recognition tasks.
arXiv Detail & Related papers (2022-06-10T02:41:48Z)
- Residual Relaxation for Multi-view Representation Learning [64.40142301026805]
Multi-view methods learn by aligning multiple views of the same image.
Some useful augmentations, such as image rotation, are harmful for multi-view methods because they cause a semantic shift.
We develop a generic approach, Pretext-aware Residual Relaxation (Prelax), that relaxes the exact alignment.
arXiv Detail & Related papers (2021-10-28T17:57:17Z)
- Memory Efficient Meta-Learning with Large Images [62.70515410249566]
Meta learning approaches to few-shot classification are computationally efficient at test time, requiring just a few optimization steps or a single forward pass to learn a new task, but they remain memory-intensive to train.
This limitation arises because a task's entire support set, which can contain up to 1000 images, must be processed before an optimization step can be taken.
We propose LITE, a general and memory efficient episodic training scheme that enables meta-training on large tasks composed of large images on a single GPU.
arXiv Detail & Related papers (2021-07-02T14:37:13Z)
- Learning to Resize Images for Computer Vision Tasks [15.381549764216134]
We show that the typical linear resizer can be replaced with learned resizers that can substantially improve performance.
Our learned image resizer is jointly trained with a baseline vision model.
We show that the proposed resizer can also be useful for fine-tuning the classification baselines for other vision tasks.
arXiv Detail & Related papers (2021-03-17T23:43:44Z)
- Improving Few-Shot Learning using Composite Rotation based Auxiliary Task [39.8046809855363]
We propose an approach to improve few-shot classification performance using a composite rotation based auxiliary task.
We experimentally show that our approach performs better than existing few-shot learning methods on multiple benchmark datasets.
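This paper's composite scheme rotates image regions independently; a simplified sketch of the plain rotation-prediction auxiliary task such methods build on (the function name and shapes are illustrative, not from the paper's code) pairs each rotated image with its rotation index as a 4-way self-supervised label:

```python
import numpy as np

def make_rotation_task(images, seed=0):
    """Build a self-supervised auxiliary task from a batch of images.

    Each image is rotated by a random multiple of 90 degrees; the
    rotation index (0 -> 0 deg, 1 -> 90 deg, 2 -> 180 deg, 3 -> 270 deg)
    becomes a 4-way classification label trained alongside the main task.
    """
    rng = np.random.default_rng(seed)
    ks = rng.integers(0, 4, size=len(images))
    rotated = np.stack(
        [np.rot90(img, k=int(k), axes=(0, 1)) for img, k in zip(images, ks)]
    )
    return rotated, ks

# Example: 8 images in, 8 rotated images and 8 rotation labels out.
images = np.zeros((8, 32, 32, 3))
x_aux, y_aux = make_rotation_task(images)
assert x_aux.shape == (8, 32, 32, 3)
```

The auxiliary 4-way loss is typically added to the few-shot classification loss, so the backbone must encode orientation-sensitive features in addition to class-discriminative ones.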
arXiv Detail & Related papers (2020-06-29T10:21:35Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.