Improving Few-Shot Learning using Composite Rotation based Auxiliary Task
- URL: http://arxiv.org/abs/2006.15919v2
- Date: Sun, 22 Nov 2020 17:39:51 GMT
- Title: Improving Few-Shot Learning using Composite Rotation based Auxiliary Task
- Authors: Pratik Mazumder, Pravendra Singh and Vinay P. Namboodiri
- Abstract summary: We propose an approach to improve few-shot classification performance using a composite rotation based auxiliary task.
We experimentally show that our approach performs better than existing few-shot learning methods on multiple benchmark datasets.
- Score: 39.8046809855363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an approach to improve few-shot classification
performance using a composite rotation based auxiliary task. Few-shot
classification methods aim to produce neural networks that perform well both for
classes with many training samples and for classes with only a few training
samples. They employ techniques that enable the network to produce highly
discriminative features that are also very generic. Generally, the higher the
quality and the more generic the features produced by the network, the better
the network performs on few-shot learning. Our approach
aims to train networks to produce such features by using a self-supervised
auxiliary task. Our proposed composite rotation based auxiliary task performs
rotation at two levels, i.e., rotation of patches inside the image (inner
rotation) and rotation of the whole image (outer rotation), and assigns one out
of 16 rotation classes to the modified image. We then simultaneously train for
the composite rotation prediction task along with the original classification
task, which forces the network to learn high-quality generic features that help
improve the few-shot classification performance. We experimentally show that
our approach performs better than existing few-shot learning methods on
multiple benchmark datasets.
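The abstract does not spell out how the 16 composite rotation classes are constructed. A plausible reading is that they are the product of 4 inner (patch-level) rotations and 4 outer (whole-image) rotations, each a multiple of 90 degrees. The sketch below illustrates that reading together with a joint classification-plus-rotation-prediction objective; the 2x2 patch grid, the label encoding, the model interface, and the `aux_weight` loss weight are all assumptions for illustration, not the authors' reference implementation.

```python
# Hypothetical sketch of a composite rotation auxiliary task.
# ASSUMPTIONS (not stated in the abstract): the 16 classes are the product of
# 4 inner (patch) rotations x 4 outer (whole-image) rotations, each a multiple
# of 90 degrees, and the inner rotation acts on a 2x2 grid of patches.
import torch
import torch.nn.functional as F


def composite_rotate(img: torch.Tensor, label: int) -> torch.Tensor:
    """Apply composite rotation `label` in [0, 16) to a (C, H, W) image.

    label // 4 selects the inner (patch-level) rotation,
    label %  4 selects the outer (whole-image) rotation.
    Assumes a square image with even side length so the patches stay square.
    """
    inner_k, outer_k = label // 4, label % 4
    _, h, w = img.shape
    h2, w2 = h // 2, w // 2

    # Inner rotation: rotate each patch of the 2x2 grid in place.
    out = img.clone()
    for top in (0, h2):
        for left in (0, w2):
            patch = img[:, top:top + h2, left:left + w2]
            out[:, top:top + h2, left:left + w2] = torch.rot90(patch, k=inner_k, dims=(1, 2))

    # Outer rotation: rotate the recomposed image as a whole.
    return torch.rot90(out, k=outer_k, dims=(1, 2))


def joint_loss(model, rot_head, images, class_labels, aux_weight=1.0):
    """Classification loss + composite rotation prediction loss.

    `model` is assumed to return (features, class_logits); `rot_head` maps
    features to 16 rotation logits. Names and weighting are illustrative.
    """
    rot_labels = torch.randint(0, 16, (images.size(0),), device=images.device)
    rotated = torch.stack(
        [composite_rotate(x, int(r)) for x, r in zip(images, rot_labels)]
    )

    feats, class_logits = model(rotated)
    rot_logits = rot_head(feats)

    cls_loss = F.cross_entropy(class_logits, class_labels)
    rot_loss = F.cross_entropy(rot_logits, rot_labels)
    return cls_loss + aux_weight * rot_loss
```

Intuitively, predicting which of the 16 composite rotations was applied asks the network to attend to both local patch structure and global image orientation, which is consistent with the abstract's claim that the auxiliary task encourages high-quality, generic features.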
Related papers
- Advancing Image Retrieval with Few-Shot Learning and Relevance Feedback [5.770351255180495]
Image Retrieval with Relevance Feedback (IRRF) involves iterative human interaction during the retrieval process.
We propose a new scheme based on a hyper-network that is tailored to the task and facilitates swift adjustment to user feedback.
We show that our method can attain SoTA results in few-shot one-class classification and reach comparable results in the binary classification task of few-shot open-set recognition.
arXiv Detail & Related papers (2023-12-18T10:20:28Z)
- ART-Point: Improving Rotation Robustness of Point Cloud Classifiers via Adversarial Rotation [89.47574181669903]
In this study, we show that the rotation robustness of point cloud classifiers can also be acquired via adversarial training.
Specifically, our proposed framework named ART-Point regards the rotation of the point cloud as an attack.
We propose a fast one-step optimization to efficiently reach the final robust model.
arXiv Detail & Related papers (2022-03-08T07:20:16Z)
- Experience feedback using Representation Learning for Few-Shot Object Detection on Aerial Images [2.8560476609689185]
The performance of our method is assessed on DOTA, a large-scale remote sensing image dataset.
In particular, the evaluation highlights some intrinsic weaknesses for the few-shot object detection task.
arXiv Detail & Related papers (2021-09-27T13:04:53Z)
- Cross-modal Adversarial Reprogramming [12.467311480726702]
Recent works on adversarial reprogramming have shown that it is possible to repurpose neural networks for alternate tasks without modifying the network architecture or parameters.
We analyze the feasibility of adversarially repurposing image classification neural networks for Natural Language Processing (NLP) and other sequence classification tasks.
arXiv Detail & Related papers (2021-02-15T03:46:16Z)
- Few-shot Sequence Learning with Transformers [79.87875859408955]
Few-shot algorithms aim at learning new tasks provided only a handful of training examples.
In this work we investigate few-shot learning in the setting where the data points are sequences of tokens.
We propose an efficient learning algorithm based on Transformers.
arXiv Detail & Related papers (2020-12-17T12:30:38Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Rotation Invariant Aerial Image Retrieval with Group Convolutional Metric Learning [21.89786914625517]
We introduce a novel method for retrieving aerial images by merging group convolution with an attention mechanism and metric learning.
Results show that the proposed method outperforms other state-of-the-art retrieval methods in both rotated and original environments.
arXiv Detail & Related papers (2020-10-19T04:12:36Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Flexible Example-based Image Enhancement with Task Adaptive Global Feature Self-Guided Network [162.14579019053804]
We show that our model outperforms the current state of the art in learning a single enhancement mapping.
The model achieves even higher performance on learning multiple mappings simultaneously.
arXiv Detail & Related papers (2020-05-13T22:45:07Z)
- Task Augmentation by Rotating for Meta-Learning [5.646772123578524]
We introduce a rotation-based task augmentation method, which increases the number of classes by rotating the original images by 90, 180 and 270 degrees (see the sketch after this list).
Experimental results show that our approach is better than rotation used to increase the number of images, and achieves state-of-the-art performance on the miniImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks.
arXiv Detail & Related papers (2020-02-08T07:57:24Z)
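The "Task Augmentation by Rotating for Meta-Learning" entry above describes its mechanism concretely: new classes are formed by rotating the original images by 90, 180 and 270 degrees. A minimal, hypothetical sketch of that idea is shown below; the function name, the dictionary-based dataset layout, and the label-offset scheme are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of rotation-based task augmentation: every class is
# rotated by 0/90/180/270 degrees and each rotation becomes a new class.
# The data layout and label scheme below are illustrative assumptions.
from typing import Dict, List

import torch


def augment_classes_by_rotation(
    class_images: Dict[int, List[torch.Tensor]]
) -> Dict[int, List[torch.Tensor]]:
    """Quadruple the class pool by treating each rotation as a distinct class.

    Assumes labels are contiguous integers 0..N-1 and images are (C, H, W)
    tensors; rotated copies of class `c` receive labels c + N, c + 2N, c + 3N.
    """
    num_classes = len(class_images)
    augmented: Dict[int, List[torch.Tensor]] = {}
    for label, images in class_images.items():
        for k in range(4):  # k * 90 degrees; k == 0 keeps the original class
            new_label = label + k * num_classes
            augmented[new_label] = [
                torch.rot90(img, k=k, dims=(1, 2)) for img in images
            ]
    return augmented
```

Meta-learning episodes can then sample tasks from this enlarged class pool, which is the effect the summary contrasts with simply rotating images to add more examples per class.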
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.