Zero-shot Active Learning Using Self Supervised Learning
- URL: http://arxiv.org/abs/2401.01690v1
- Date: Wed, 3 Jan 2024 11:49:07 GMT
- Title: Zero-shot Active Learning Using Self Supervised Learning
- Authors: Abhishek Sinha, Shreya Singh
- Abstract summary: We propose a new Active Learning approach that is model-agnostic and does not require an iterative process.
We aim to leverage self-supervised learnt features for the task of Active Learning.
- Score: 11.28415437676582
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning algorithms are often said to be data hungry. The performance of such algorithms generally improves as more and more annotated data is fed into the model. While collecting unlabelled data is easy (it can be scraped from the internet), annotating it is a tedious and expensive task. Given a fixed budget available for data annotation, Active Learning helps select the best subset of data for annotation, such that the deep learning model, when trained over that subset, will have maximum generalization performance under this budget. In this work, we propose a new Active Learning approach that is model-agnostic and does not require an iterative process. We aim to leverage self-supervised learnt features for the task of Active Learning. The benefit of self-supervised learning is that one can get a useful feature representation of the input data without having any annotation.
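The abstract does not spell out the selection rule, but a minimal sketch of one plausible instantiation follows: embed the unlabelled pool with a frozen self-supervised encoder, then spend the annotation budget in a single pass by clustering the embeddings and labelling the point nearest each cluster centre. The k-means selection rule and the random vectors standing in for real encoder features (e.g. a SimCLR or DINO backbone) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def zero_shot_select(embeddings: np.ndarray, budget: int) -> np.ndarray:
    """One-shot (non-iterative) selection of `budget` points to annotate.

    `embeddings` are self-supervised features of the unlabelled pool,
    e.g. from a frozen SimCLR/DINO backbone (an assumption here).
    """
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    chosen = []
    for c in range(budget):
        # For each cluster, pick the pool point nearest its centroid.
        members = np.flatnonzero(km.labels_ == c)
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)

# Toy usage: random vectors stand in for real self-supervised features.
pool = np.random.default_rng(0).normal(size=(1000, 128))
to_label = zero_shot_select(pool, budget=50)
print(to_label[:10])
```

Because the subset depends only on the frozen features, the same selection can be reused to train any downstream model, which is what would make such an approach model-agnostic and non-iterative.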
Related papers
- Model Uncertainty based Active Learning on Tabular Data using Boosted Trees [0.4667030429896303]
Supervised machine learning relies on the availability of good labelled data for model training.
Active learning is a sub-field of machine learning which helps in obtaining labelled data efficiently. A toy sketch of ensemble-based uncertainty sampling appears after this list.
arXiv Detail & Related papers (2023-10-30T14:29:53Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
arXiv Detail & Related papers (2023-06-25T02:39:19Z)
- Iterative Loop Learning Combining Self-Training and Active Learning for Domain Adaptive Semantic Segmentation [1.827510863075184]
Self-training and active learning have been proposed to alleviate this problem.
This paper proposes an iterative loop learning method combining Self-Training and Active Learning.
arXiv Detail & Related papers (2023-01-31T01:31:43Z)
- Active Learning for Abstractive Text Summarization [50.79416783266641]
We propose the first effective query strategy for Active Learning in abstractive text summarization.
We show that using our strategy in AL annotation helps to improve the model performance in terms of ROUGE and consistency scores.
arXiv Detail & Related papers (2023-01-09T10:33:14Z)
- Active Learning with Weak Supervision for Gaussian Processes [12.408125305560274]
We propose an active learning algorithm that selects the precision of the annotation that is acquired.
We empirically demonstrate the gains of being able to adjust the annotation precision in the active learning loop.
arXiv Detail & Related papers (2022-04-18T14:27:31Z)
- Optimizing Active Learning for Low Annotation Budgets [6.753808772846254]
In deep learning, active learning is usually implemented as an iterative process in which successive deep models are updated via fine-tuning.
We tackle this issue by using an approach inspired by transfer learning.
We introduce a novel acquisition function which exploits the iterative nature of the AL process to select samples in a more robust fashion.
arXiv Detail & Related papers (2022-01-18T18:53:10Z)
- One-Round Active Learning [13.25385227263705]
One-round active learning aims to select a subset of unlabeled data points that achieve the highest utility after being labeled.
We propose DULO, a general framework for one-round active learning based on the notion of data utility functions.
Our results demonstrate that while existing active learning approaches could succeed with multiple rounds, DULO consistently performs better in the one-round setting.
arXiv Detail & Related papers (2021-04-23T23:59:50Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples; a toy sketch combining these two signals appears after this list.
In comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline (learning a supervised or self-supervised representation on the meta-training set) outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
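As referenced under the boosted-trees entry above, here is a toy sketch of model-uncertainty sampling with tree ensembles: score each unlabelled point by the disagreement among the ensemble's per-tree votes and send the most uncertain points for annotation. Using a RandomForestClassifier and binary vote entropy is an illustrative assumption; the paper itself works with boosted trees, whose uncertainty is typically estimated differently.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_ranking(model: RandomForestClassifier, pool: np.ndarray) -> np.ndarray:
    """Rank pool points by vote entropy across the ensemble's trees (binary task assumed)."""
    votes = np.stack([tree.predict(pool) for tree in model.estimators_])  # (n_trees, n_pool)
    p = votes.mean(axis=0)  # fraction of trees voting for class 1
    eps = 1e-12
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return np.argsort(-entropy)  # most uncertain first

# Toy usage with synthetic data standing in for a real labelled set and pool.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_pool = rng.normal(size=(1000, 10))
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_lab, y_lab)
print(uncertainty_ranking(forest, X_pool)[:20])  # indices to send for annotation
```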
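Likewise, for the confident-coreset entry, a toy sketch of combining uncertainty with distributional coverage: a greedy k-center pass over feature space in which each candidate's distance to the already-selected set is weighted by its predictive uncertainty. The multiplicative weighting and the synthetic features and uncertainties are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def confident_coreset(features: np.ndarray, uncertainty: np.ndarray, budget: int) -> list[int]:
    """Greedily pick the point maximizing
    (distance to nearest already-selected point) * (its uncertainty)."""
    selected = [int(np.argmax(uncertainty))]  # seed with the most uncertain point
    min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(budget - 1):
        scores = min_dist * uncertainty
        scores[selected] = -np.inf  # never re-pick a selected point
        nxt = int(np.argmax(scores))
        selected.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# Toy usage: random features and uncertainties stand in for model outputs.
rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 64))  # e.g. penultimate-layer features
unc = rng.uniform(size=500)         # e.g. predictive entropy per point
print(confident_coreset(feats, unc, budget=25))
```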
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.