Visual Transformer for Task-aware Active Learning
- URL: http://arxiv.org/abs/2106.03801v1
- Date: Mon, 7 Jun 2021 17:13:59 GMT
- Title: Visual Transformer for Task-aware Active Learning
- Authors: Razvan Caramalau, Binod Bhattarai, Tae-Kyun Kim
- Abstract summary: We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
A Visual Transformer models non-local visual concept dependencies between labelled and unlabelled examples.
- Score: 49.903358393660724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pool-based sampling in active learning (AL) represents a key framework for
annotating informative data when dealing with deep learning models. In this
paper, we present a novel pipeline for pool-based Active Learning. Unlike most
previous works, our method exploits accessible unlabelled examples during
training to estimate their correlation with the labelled examples. Another
contribution of this paper is to adapt the Visual Transformer as a sampler in
the AL pipeline. The Visual Transformer models non-local visual concept
dependencies between labelled and unlabelled examples, which is crucial for
identifying the most influential unlabelled examples. Also, compared to
existing methods where the learner and the sampler are trained in a multi-stage
manner, we propose to train them jointly in a task-aware manner, which shapes
the latent space to serve two separate tasks: one that classifies the labelled
examples, and one that distinguishes the labelling direction. We evaluated our
work on five challenging classification and detection benchmarks, viz.
CIFAR10, CIFAR100, FashionMNIST, RaFD, and Pascal VOC 2007. Our extensive
empirical and qualitative evaluations demonstrate that our method outperforms
existing methods. Code available:
https://github.com/razvancaramalau/Visual-Transformer-for-Task-aware-Active-Learning
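To make the joint, task-aware training concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: a transformer encoder attends jointly over labelled and unlabelled features, a classification head is trained on the labelled part, and a discriminator head learns the labelling direction. All class names, layer sizes, and the equal loss weighting are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only -- names and dimensions are assumptions.
class TaskAwareSampler(nn.Module):
    def __init__(self, feat_dim=256, n_classes=10, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.cls_head = nn.Linear(feat_dim, n_classes)   # classifies labelled examples
        self.disc_head = nn.Linear(feat_dim, 1)          # labelled-vs-unlabelled score

    def forward(self, labelled, unlabelled):
        # Joint attention lets unlabelled examples influence the representation
        # of labelled ones and vice versa (the non-local dependency modelling).
        x = torch.cat([labelled, unlabelled], dim=1)     # (B, L+U, D)
        h = self.encoder(x)
        n_lab = labelled.size(1)
        logits = self.cls_head(h[:, :n_lab])             # task 1: classification
        disc = self.disc_head(h).squeeze(-1)             # task 2: labelling direction
        return logits, disc

def joint_loss(logits, labels, disc, n_lab):
    # Task 1: cross-entropy on the labelled examples only.
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
    # Task 2: binary "labelling direction" (1 = labelled, 0 = unlabelled).
    target = torch.zeros_like(disc)
    target[:, :n_lab] = 1.0
    bce = F.binary_cross_entropy_with_logits(disc, target)
    return ce + bce
```

Under this setup, one natural acquisition heuristic is to query the unlabelled examples whose discriminator scores are closest to the decision boundary, i.e. those the model can least confidently separate from the labelled set.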
Related papers
- With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning [47.96387857237473]
We devise a network which can perform attention over activations obtained while processing other training samples.
Our memory models the distribution of past keys and values through the definition of prototype vectors.
We demonstrate that our proposal can increase the performance of an encoder-decoder Transformer by 3.7 CIDEr points both when training in cross-entropy only and when fine-tuning with self-critical sequence training.
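As a rough illustration of the prototype-memory idea (a sketch under assumptions; the function name, prototype bank, and shapes are hypothetical, not the paper's implementation), attention can be computed against a learned bank of prototype keys and values alongside ordinary self-attention:

```python
import torch

# Minimal sketch: attend from the current sample's queries to a learned bank
# of M prototype keys/values that summarise activations from past samples.
def prototype_memory_attention(q, proto_k, proto_v):
    # q: (B, T, D) queries; proto_k, proto_v: (M, D) learned prototypes
    scores = q @ proto_k.t() / proto_k.size(-1) ** 0.5   # (B, T, M)
    attn = scores.softmax(dim=-1)
    return attn @ proto_v                                # (B, T, D)
```

The result could then be mixed (e.g. summed or gated) with the standard self-attention output of the encoder-decoder Transformer.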
arXiv Detail & Related papers (2023-08-23T18:53:00Z)
- MoBYv2AL: Self-supervised Active Learning for Image Classification [57.4372176671293]
We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
arXiv Detail & Related papers (2023-01-04T10:52:02Z)
- Spatial Cross-Attention Improves Self-Supervised Visual Representation Learning [5.085461418671174]
We introduce an add-on module that injects knowledge of the spatial cross-correlations among samples.
This in turn distils intra-class information, including feature-level locations and cross-similarities between same-class instances.
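A minimal sketch of what such cross-sample spatial attention could look like (purely illustrative; the function name and shapes are assumptions, not the paper's module): each location in one sample's feature map attends to all locations of a same-class sample.

```python
import torch

# Cross-attention between the spatial feature maps of two same-class samples.
def spatial_cross_attention(fa, fb):
    # fa, fb: (B, C, H, W) convolutional feature maps
    B, C, H, W = fa.shape
    q = fa.flatten(2).transpose(1, 2)              # (B, HW, C) queries from fa
    k = fb.flatten(2)                              # (B, C, HW) keys from fb
    attn = (q @ k / C ** 0.5).softmax(dim=-1)      # (B, HW, HW)
    v = fb.flatten(2).transpose(1, 2)              # (B, HW, C) values from fb
    out = attn @ v                                 # (B, HW, C)
    return out.transpose(1, 2).reshape(B, C, H, W)
```

The returned map could then be fused (e.g. added) back into the backbone features; this is one plausible way an add-on module could inject such correlations.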
arXiv Detail & Related papers (2022-06-07T21:14:52Z)
- CoCon: Cooperative-Contrastive Learning [52.342936645996765]
Self-supervised visual representation learning is key for efficient video analysis.
Recent success in learning image representations suggests contrastive learning is a promising framework to tackle this challenge.
We introduce a cooperative variant of contrastive learning to utilize complementary information across views.
arXiv Detail & Related papers (2021-04-30T05:46:02Z)
- Few-shot Sequence Learning with Transformers [79.87875859408955]
Few-shot algorithms aim at learning new tasks provided only a handful of training examples.
In this work we investigate few-shot learning in the setting where the data points are sequences of tokens.
We propose an efficient learning algorithm based on Transformers.
arXiv Detail & Related papers (2020-12-17T12:30:38Z)
- A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z)
- CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [77.28192419848901]
We propose a simple, yet effective method named contrasting shifted instances (CSI).
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
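A rough sketch of this idea in the style of an NT-Xent contrastive loss (assumptions: rotations as the distribution-shifting transforms, and `encoder`/`aug` as hypothetical callables; this is not the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def csi_style_loss(encoder, aug, x, temperature=0.5):
    # Shifted views: rotations of each image act as "distributionally shifted"
    # versions that are treated as negatives rather than positives.
    views = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)
    z1 = F.normalize(encoder(aug(views)), dim=1)   # (4B, D) first augmentation
    z2 = F.normalize(encoder(aug(views)), dim=1)   # (4B, D) second augmentation
    z = torch.cat([z1, z2], dim=0)                 # (8B, D)
    sim = z @ z.t() / temperature                  # pairwise similarities
    sim.fill_diagonal_(float("-inf"))              # a view is not its own positive
    n = z1.size(0)
    # The positive of row i is the other augmentation of the same shifted view;
    # every other row (other images AND other shifts of the same image) is negative.
    pos = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, pos)
```

Treating shifted views of the same image as negatives makes the representation sensitive to distribution shift, which the novelty-detection score can then exploit.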
arXiv Detail & Related papers (2020-07-16T08:32:56Z)
- Meta-Learning for One-Class Classification with Few Examples using Order-Equivariant Network [1.08890978642722]
This paper presents a framework for few-shot One-Class Classification (OCC) at test-time.
We assume a set of one-class classification objective-tasks, with only a small set of positive examples available for each task.
We propose an approach that uses order-equivariant networks to learn a 'meta' binary classifier.
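As a loose illustration (a DeepSets-style permutation-invariant stand-in, not the paper's order-equivariant architecture; all names are hypothetical), a meta binary classifier over a small support set of positives might look like:

```python
import torch
import torch.nn as nn

# Sketch: score a query against a set of K positive support examples, with
# symmetric pooling so the support-set ordering does not matter.
class SetOCC(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.rho = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, support, query):
        # support: (B, K, d) positive examples; query: (B, d)
        ctx = self.phi(support).mean(dim=1)               # order-invariant summary
        return self.rho(torch.cat([ctx, query], dim=-1))  # in-class logit
```

Training could then proceed episodically over the OCC tasks, with a binary loss per task.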
arXiv Detail & Related papers (2020-07-08T22:33:09Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.