MoBYv2AL: Self-supervised Active Learning for Image Classification
- URL: http://arxiv.org/abs/2301.01531v1
- Date: Wed, 4 Jan 2023 10:52:02 GMT
- Title: MoBYv2AL: Self-supervised Active Learning for Image Classification
- Authors: Razvan Caramalau, Binod Bhattarai, Danail Stoyanov, Tae-Kyun Kim
- Abstract summary: We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning (AL) has recently gained popularity for deep learning
(DL) models. This is due to efficient and informative sampling, especially when
the learner requires large-scale labelled datasets. Commonly, sampling and
training happen in stages while more batches are added. A main bottleneck in
this strategy is the narrow representation learned by the model, which degrades
the overall AL selection.
We present MoBYv2AL, a novel self-supervised active learning framework for
image classification. Our contribution lies in lifting MoBY, one of the most
successful self-supervised learning algorithms, to the AL pipeline. Thus, we
add the downstream task-aware objective function and optimize it jointly with
contrastive loss. Further, we derive a data-distribution selection function for
labelling new examples. Finally, we test and study the robustness and
performance of our pipeline on image classification tasks. We achieve
state-of-the-art results compared to recent AL methods. Code
available: https://github.com/razvancaramalau/MoBYv2AL
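The abstract describes jointly optimizing a downstream task-aware objective with a contrastive loss. A minimal NumPy sketch of that idea is below; it is not the authors' implementation (MoBY itself uses momentum encoders and a key queue, omitted here), and the names `info_nce_loss`, `joint_loss`, and the weighting factor `lam` are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(q, k, temperature=0.2):
    """Simplified InfoNCE-style contrastive loss: query q[i] should match
    key k[i] (positive pair) against all other keys as negatives."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k = k / np.linalg.norm(k, axis=1, keepdims=True)
    logits = q @ k.T / temperature               # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(q))                      # positives on the diagonal
    return -np.mean(np.log(probs[idx, idx]))

def cross_entropy_loss(logits, labels):
    """Standard supervised (task-aware) classification loss."""
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def joint_loss(q, k, task_logits, labels, lam=1.0):
    """Joint objective: contrastive term plus lam-weighted task term."""
    return info_nce_loss(q, k) + lam * cross_entropy_loss(task_logits, labels)
```

In the real pipeline both terms would be backpropagated through a shared encoder each AL stage, so the representation is shaped by the downstream task rather than by the contrastive pretext alone.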
Related papers
- Class Balance Matters to Active Class-Incremental Learning [61.11786214164405]
We aim to start from a pool of large-scale unlabeled data and then annotate the most informative samples for incremental learning.
We propose Class-Balanced Selection (CBS) strategy to achieve both class balance and informativeness in chosen samples.
Our CBS can be plugged into CIL methods that are based on pretrained models with prompt tuning techniques.
arXiv Detail & Related papers (2024-12-09T16:37:27Z) - Active Learning via Classifier Impact and Greedy Selection for Interactive Image Retrieval [4.699825956909531]
Active Learning (AL) is a user-interactive approach aimed at reducing annotation costs by selecting the most crucial examples to label.
We introduce a novel batch-mode Active Learning framework named GAL (Greedy Active Learning) that better copes with this application.
arXiv Detail & Related papers (2024-12-03T09:27:46Z) - MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the data-hungry nature of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z) - A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z) - Optimizing Active Learning for Low Annotation Budgets [6.753808772846254]
In deep learning, active learning is usually implemented as an iterative process in which successive deep models are updated via fine tuning.
We tackle this issue by using an approach inspired by transfer learning.
We introduce a novel acquisition function which exploits the iterative nature of AL process to select samples in a more robust fashion.
arXiv Detail & Related papers (2022-01-18T18:53:10Z) - Active Learning at the ImageNet Scale [43.595076693347835]
In this work, we study a combination of active learning (AL) and pretraining (SSP) on ImageNet.
We find that performance on small toy datasets is not representative of performance on ImageNet, due to the class-imbalanced samples selected by an active learner.
We propose Balanced Selection (BASE), a simple, scalable AL algorithm that outperforms random sampling consistently.
arXiv Detail & Related papers (2021-11-25T02:48:51Z) - Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
Visual Transformer models non-local visual concept dependency between labelled and unlabelled examples.
arXiv Detail & Related papers (2021-06-07T17:13:59Z) - MetAL: Active Semi-Supervised Learning on Graphs via Meta Learning [2.903711704663904]
We propose MetAL, an AL approach that selects unlabeled instances that directly improve the future performance of a classification model.
We demonstrate that MetAL efficiently outperforms existing state-of-the-art AL algorithms.
arXiv Detail & Related papers (2020-07-22T06:59:49Z) - SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.