Learning to Rank for Active Learning: A Listwise Approach
- URL: http://arxiv.org/abs/2008.00078v2
- Date: Sat, 17 Oct 2020 21:47:34 GMT
- Title: Learning to Rank for Active Learning: A Listwise Approach
- Authors: Minghan Li, Xialei Liu, Joost van de Weijer, Bogdan Raducanu
- Abstract summary: Active learning emerged as an alternative to alleviate the effort of labeling huge amounts of data for data-hungry applications.
In this work, we rethink the structure of the loss prediction module, using a simple but effective listwise approach.
Experimental results on four datasets demonstrate that our method outperforms recent state-of-the-art active learning approaches for both image classification and regression tasks.
- Score: 36.72443179449176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning emerged as an alternative to alleviate the effort
of labeling huge amounts of data for data-hungry applications (such as
image/video indexing and retrieval, autonomous driving, etc.). The goal of active learning is to
automatically select a number of unlabeled samples for annotation (according to
a budget), based on an acquisition function, which indicates how valuable a
sample is for training the model. The learning loss method is a task-agnostic
approach that attaches a module to predict the target loss of unlabeled data
and selects the data with the highest predicted loss for labeling. In this
work, we follow this strategy but we define the acquisition function as a
learning to rank problem and rethink the structure of the loss prediction
module, using a simple but effective listwise approach. Experimental results on
four datasets demonstrate that our method outperforms recent state-of-the-art
active learning approaches for both image classification and regression tasks.
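The acquisition strategy described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact module: `listnet_loss` and `select_for_labeling` are hypothetical names, and the ListNet-style top-one objective shown here is one common listwise ranking loss that matches the general idea of ranking unlabeled samples by predicted loss.

```python
import math

def _softmax(xs):
    """Numerically stable softmax over a plain list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def listnet_loss(pred_scores, true_losses):
    """ListNet-style top-one listwise loss (a sketch).

    pred_scores: scores from a hypothetical loss-prediction head, one per sample.
    true_losses: the task model's actual per-sample losses (targets).
    Both are turned into top-one probability distributions; the cross-entropy
    between them pushes the predicted ranking toward the true loss ranking.
    """
    p_true = _softmax(true_losses)
    p_pred = _softmax(pred_scores)
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred))

def select_for_labeling(scores, budget):
    """Acquisition step: pick the `budget` samples with the highest predicted loss."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:budget]

# Example: with a labeling budget of 2, the two highest-scoring samples are queried.
scores = [0.2, 1.5, 0.7, 2.1]
print(select_for_labeling(scores, 2))  # [3, 1]
```

Because the loss is listwise, it compares the whole batch at once rather than individual samples or pairs, which is the structural change the paper makes to the loss-prediction module.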
Related papers
- Semi-Supervised Variational Adversarial Active Learning via Learning to Rank and Agreement-Based Pseudo Labeling [6.771578432805963]
Active learning aims to reduce the labor involved in data labeling by automating the selection of unlabeled samples.
We introduce novel techniques that significantly improve the use of abundant unlabeled data during training.
We demonstrate the superior performance of our approach over the state of the art on various image classification and segmentation benchmark datasets.
arXiv Detail & Related papers (2024-08-23T00:35:07Z) - Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement with the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z) - Zero-shot Active Learning Using Self Supervised Learning [11.28415437676582]
We propose a new Active Learning approach which is model-agnostic and does not require an iterative process.
We aim to leverage self-supervised learnt features for the task of Active Learning.
arXiv Detail & Related papers (2024-01-03T11:49:07Z) - Model Uncertainty based Active Learning on Tabular Data using Boosted Trees [0.4667030429896303]
Supervised machine learning relies on the availability of good labelled data for model training.
Active learning is a sub-field of machine learning which helps in obtaining the labelled data efficiently.
arXiv Detail & Related papers (2023-10-30T14:29:53Z) - MoBYv2AL: Self-supervised Active Learning for Image Classification [57.4372176671293]
We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
arXiv Detail & Related papers (2023-01-04T10:52:02Z) - Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incorporate high loss.
Our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z) - An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just a few lines of code using only off-the-shelf operations.
arXiv Detail & Related papers (2022-09-28T02:11:34Z) - Reinforced Meta Active Learning [11.913086438671357]
We present an online stream-based meta active learning method which learns on the fly an informativeness measure directly from the data.
The method is based on reinforcement learning and combines episodic policy search and a contextual bandits approach.
We demonstrate on several real datasets that this method learns to select training samples more efficiently than existing state-of-the-art methods.
arXiv Detail & Related papers (2022-03-09T08:36:54Z) - Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes a random sampling strategy in acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories.
arXiv Detail & Related papers (2022-01-07T02:46:35Z) - Semi-supervised Batch Active Learning via Bilevel Optimization [89.37476066973336]
We formulate our approach as a data summarization problem via bilevel optimization.
We show that our method is highly effective in keyword detection tasks in the regime when only few labeled samples are available.
arXiv Detail & Related papers (2020-10-19T16:53:24Z) - A Graph-Based Approach for Active Learning in Regression [37.42533189350655]
Active learning aims to reduce labeling efforts by selectively asking humans to annotate the most important data points from an unlabeled pool.
Most existing active learning for regression methods use the regression function learned at each active learning iteration to select the next informative point to query.
We propose a feature-focused approach that formulates both sequential and batch-mode active regression as a novel bipartite graph optimization problem.
arXiv Detail & Related papers (2020-01-30T00:59:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.