MoBYv2AL: Self-supervised Active Learning for Image Classification
- URL: http://arxiv.org/abs/2301.01531v1
- Date: Wed, 4 Jan 2023 10:52:02 GMT
- Title: MoBYv2AL: Self-supervised Active Learning for Image Classification
- Authors: Razvan Caramalau, Binod Bhattarai, Danail Stoyanov, Tae-Kyun Kim
- Abstract summary: We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
- Score: 57.4372176671293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning (AL) has recently gained popularity for deep learning (DL)
models. This is due to efficient and informative sampling, especially when the
learner requires large-scale labelled datasets. Commonly, sampling and
training happen in stages, with more batches added at each round. One main
bottleneck of this strategy is the narrow representation learned by the model,
which affects the overall AL selection.
We present MoBYv2AL, a novel self-supervised active learning framework for
image classification. Our contribution lies in lifting MoBY, one of the most
successful self-supervised learning algorithms, to the AL pipeline. Thus, we
add a downstream task-aware objective function and optimize it jointly with
the contrastive loss. Further, we derive a data-distribution selection function
for labelling the new examples. Finally, we test and study our pipeline's
robustness and performance for image classification tasks. We achieve
state-of-the-art results when compared to recent AL methods. Code
available: https://github.com/razvancaramalau/MoBYv2AL
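As a rough illustration of the joint optimization described in the abstract, the sketch below combines a MoBY-style contrastive (InfoNCE) term with a task-aware cross-entropy term. The function names, the memory queue, and the weight lambda_cls are illustrative assumptions, not the paper's implementation; the linked repository contains the real code.

```python
# Hypothetical sketch of the joint objective: a MoBY-style contrastive
# loss plus a downstream, task-aware classification loss. Names and the
# lambda_cls weighting are assumptions, not taken from MoBYv2AL's code.
import torch
import torch.nn.functional as F

def contrastive_loss(q, k, queue, temperature=0.2):
    """InfoNCE between query features q (N, C) and key features k (N, C),
    with negatives from a memory queue (K, C) of L2-normalized keys."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    l_pos = torch.einsum('nc,nc->n', q, k).unsqueeze(-1)  # (N, 1) positives
    l_neg = torch.einsum('nc,kc->nk', q, queue)           # (N, K) negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)                # positive is class 0

def joint_loss(q, k, queue, class_logits, targets, lambda_cls=1.0):
    """Contrastive term optimized jointly with the downstream
    cross-entropy term, as the abstract describes."""
    return contrastive_loss(q, k, queue) + lambda_cls * F.cross_entropy(class_logits, targets)
```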
Related papers
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the heavy data demands of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method named Learning to Bootstrap (L2B),
which enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples through meta-learning.
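A minimal sketch of that weighting scheme, assuming the per-sample weights alpha and beta come from an outer meta-learning step that the sketch does not implement:

```python
# Illustrative L2B-style bootstrapping loss: per-sample weights trade off
# the observed (possibly noisy) labels against the model's own
# pseudo-labels. In L2B the weights are meta-learned; here they are inputs.
import torch
import torch.nn.functional as F

def bootstrap_loss(logits, observed, alpha, beta):
    """logits: (N, C); observed: (N,) possibly noisy labels;
    alpha, beta: (N,) non-negative per-sample importance weights."""
    pseudo = logits.detach().argmax(dim=1)                 # model's own predictions
    ce_obs = F.cross_entropy(logits, observed, reduction='none')
    ce_pseudo = F.cross_entropy(logits, pseudo, reduction='none')
    return (alpha * ce_obs + beta * ce_pseudo).mean()
```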
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
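Schematically, and with notation assumed rather than taken from the paper, the constrained problem and its Lagrangian relaxation read:

```latex
% Constrained formulation: minimize the training objective subject to a
% per-sample performance bound on each labeled example (notation assumed).
\min_{\theta} \; L(\theta)
\quad \text{s.t.} \quad \ell\bigl(f_\theta(x_i), y_i\bigr) \le \epsilon
\quad \forall i \in \mathcal{L}

% Lagrangian relaxation with multipliers \lambda_i \ge 0; by standard
% duality, a large \lambda_i flags a constraint (sample) that is hard to satisfy.
\mathcal{L}(\theta, \lambda) = L(\theta)
  + \sum_{i \in \mathcal{L}} \lambda_i \bigl( \ell(f_\theta(x_i), y_i) - \epsilon \bigr)
```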
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Optimizing Active Learning for Low Annotation Budgets [6.753808772846254]
In deep learning, active learning is usually implemented as an iterative process in which successive deep models are updated via fine-tuning.
We tackle this issue with an approach inspired by transfer learning.
We introduce a novel acquisition function which exploits the iterative nature of the AL process to select samples in a more robust fashion.
arXiv Detail & Related papers (2022-01-18T18:53:10Z)
- Active Learning at the ImageNet Scale [43.595076693347835]
In this work, we study a combination of active learning (AL) and self-supervised pretraining (SSP) on ImageNet.
We find that performance on small toy datasets is not representative of performance on ImageNet, due to the class-imbalanced samples selected by an active learner.
We propose Balanced Selection (BASE), a simple, scalable AL algorithm that outperforms random sampling consistently.
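As a toy, hypothetical illustration of class-balanced acquisition (BASE's actual selection rule is more involved), the sketch below splits the labelling budget evenly across predicted classes and favours low-confidence points:

```python
# Toy class-balanced acquisition: an equal share of the budget per
# predicted class, preferring the least confident samples in each class.
import numpy as np

def balanced_select(pred_probs, budget):
    """pred_probs: (N, num_classes) softmax outputs on the unlabeled pool.
    Returns indices of a class-balanced batch of size <= budget."""
    preds = pred_probs.argmax(axis=1)
    num_classes = pred_probs.shape[1]
    per_class = budget // num_classes
    selected = []
    for c in range(num_classes):
        pool_c = np.flatnonzero(preds == c)
        # least confident first: these lie closest to the decision boundary
        order = np.argsort(pred_probs[pool_c, c])
        selected.extend(pool_c[order[:per_class]].tolist())
    return selected
```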
arXiv Detail & Related papers (2021-11-25T02:48:51Z)
- Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
The Visual Transformer models non-local visual concept dependencies between labelled and unlabelled examples.
arXiv Detail & Related papers (2021-06-07T17:13:59Z)
- Learning to Rank for Active Learning: A Listwise Approach [36.72443179449176]
Active learning has emerged as an alternative to alleviate the effort of labelling huge amounts of data for data-hungry applications.
In this work, we rethink the structure of the loss prediction module, using a simple but effective listwise approach.
Experimental results on four datasets demonstrate that our method outperforms recent state-of-the-art active learning approaches for both image classification and regression tasks.
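One plausible reading of such a listwise objective is a ListNet-style cross-entropy between the ranking distribution of the predicted losses and that of the true losses; the sketch below is illustrative and not necessarily the paper's exact formulation:

```python
# ListNet-style listwise objective for a loss-prediction module: match
# the top-one ranking distribution of predicted per-sample losses to
# that of the observed per-sample losses within a batch.
import torch
import torch.nn.functional as F

def listwise_loss(pred_losses, true_losses):
    """pred_losses, true_losses: (N,) per-sample values for one batch."""
    p_true = F.softmax(true_losses.detach(), dim=0)   # target ranking distribution
    log_p_pred = F.log_softmax(pred_losses, dim=0)
    return -(p_true * log_p_pred).sum()               # cross-entropy between the two
```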
arXiv Detail & Related papers (2020-07-31T21:05:16Z)
- MetAL: Active Semi-Supervised Learning on Graphs via Meta Learning [2.903711704663904]
We propose MetAL, an AL approach that selects unlabeled instances that directly improve the future performance of a classification model.
We demonstrate that MetAL outperforms existing state-of-the-art AL algorithms while remaining computationally efficient.
arXiv Detail & Related papers (2020-07-22T06:59:49Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
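Schematically, the decoupled recipe can be imitated with off-the-shelf tools, as in the sketch below; SCAN itself trains a clustering head with a nearest-neighbour consistency loss rather than running k-means, so this is only a rough stand-in.

```python
# Rough stand-in for the two-step recipe: step 1 (self-supervised feature
# learning) is assumed done; step 2 groups images by their pretext features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def two_step_cluster(features, n_clusters, n_neighbors=5):
    """features: (N, D) embeddings from a self-supervised pretext task."""
    # mine nearest neighbours in feature space; SCAN uses these as a prior
    # that an image and its neighbours should share a cluster
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(features)
    _, neighbors = nn.kneighbors(features)
    # simple k-means grouping of the pretext features
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return labels, neighbors
```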
arXiv Detail & Related papers (2020-05-25T18:12:33Z)