ALdataset: a benchmark for pool-based active learning
- URL: http://arxiv.org/abs/2010.08161v1
- Date: Fri, 16 Oct 2020 04:37:29 GMT
- Title: ALdataset: a benchmark for pool-based active learning
- Authors: Xueying Zhan, Antoni Bert Chan
- Abstract summary: Active learning (AL) is a subfield of machine learning (ML) in which a learning algorithm can achieve good accuracy with fewer training samples by interactively querying a user/oracle to label new data points.
Pool-based AL is well-motivated in many ML tasks, where unlabeled data is abundant but its labels are hard to obtain.
We present experimental results for various active learning strategies, both recently proposed and classic highly-cited methods, and draw insights from the results.
- Score: 1.9308522511657449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning (AL) is a subfield of machine learning (ML) in which a
learning algorithm can achieve good accuracy with fewer training samples by
interactively querying a user/oracle to label new data points. Pool-based AL is
well-motivated in many ML tasks, where unlabeled data is abundant but its
labels are hard to obtain. Although many pool-based AL methods have been
developed, the lack of comparative benchmarking and integration of techniques
makes it difficult to: 1) determine the current state-of-the-art technique; 2)
evaluate the relative benefit of new methods for various properties of the
dataset; 3) understand what specific problems merit greater attention; and 4)
measure the progress of the field over time. To enable easier comparative
evaluation among AL methods, we present a benchmark task for pool-based active
learning, which consists of benchmarking datasets and quantitative metrics that
summarize overall performance. We present experimental results for various active
learning strategies, both recently proposed and classic highly-cited methods,
and draw insights from the results.
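To make the pool-based setting above concrete, the following is a minimal sketch of the query loop: train on a small labeled seed set, repeatedly ask the oracle for the label of the pool point the model is least confident about, and summarize the resulting learning curve with a single score. The base learner (scikit-learn's LogisticRegression), the least-confidence query strategy, and the mean-of-learning-curve metric are illustrative assumptions, not the exact learners, strategies, or metrics evaluated in the paper.

```python
# Minimal pool-based active learning loop with least-confidence sampling.
# The learner, query strategy, and summary metric here are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1200, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)

# Seed set: a few labeled examples per class; the rest of the pool is "unlabeled".
labeled = [i for c in np.unique(y_pool) for i in np.where(y_pool == c)[0][:10]]
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

accuracies = []
for _ in range(30):                              # query budget: 30 labels
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_pool[labeled], y_pool[labeled])
    accuracies.append(clf.score(X_test, y_test))

    # Least-confidence query: the unlabeled point the model is least sure about.
    probs = clf.predict_proba(X_pool[unlabeled])
    query = unlabeled[int(np.argmin(probs.max(axis=1)))]

    labeled.append(query)                        # the oracle reveals y_pool[query]
    unlabeled.remove(query)

# Summary metric: mean accuracy over the learning curve (higher = learns faster).
print(f"mean learning-curve accuracy: {np.mean(accuracies):.3f}")
```

In a benchmark of this kind, different query strategies would be swapped in at the query step while the training loop, datasets, and summary metric stay fixed, which is what makes the resulting scores comparable across methods.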
Related papers
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data (a generic sketch of this uncertainty-diversity trade-off appears after this list).
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- Pool-Based Active Learning with Proper Topological Regions [5.5165579223151795]
Pool-based active learning methods aim to identify, within a set of unlabeled data, the samples that are most relevant for training.
We propose in this paper a meta-approach for pool-based active learning strategies in the context of multi-class classification tasks.
arXiv Detail & Related papers (2023-10-02T19:42:33Z)
- A Critical Re-evaluation of Benchmark Datasets for (Deep) Learning-Based Matching Algorithms [11.264467955516706]
We propose four approaches to assessing the difficulty and appropriateness of 13 established datasets.
We show that most of the popular datasets pose rather easy classification tasks.
We propose a new methodology for yielding benchmark datasets.
arXiv Detail & Related papers (2023-07-03T07:54:54Z)
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
- Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions [2.578242050187029]
Active learning aims to identify the most informative data from an unlabeled data pool that enables a model to reach the desired accuracy rapidly.
Most existing active learning methods have been evaluated in an ideal setting where only samples relevant to the target task exist in an unlabeled data pool.
We introduce new active learning benchmarks that include ambiguous, task-irrelevant out-of-distribution as well as in-distribution samples.
arXiv Detail & Related papers (2023-03-25T10:46:10Z)
- Exploiting Diversity of Unlabeled Data for Label-Efficient Semi-Supervised Active Learning [57.436224561482966]
Active learning is a research area that addresses the issue of expensive labeling by selecting the most important samples for labeling.
We introduce a new diversity-based initial dataset selection algorithm to select the most informative set of samples for initial labeling in the active learning setting.
Also, we propose a novel active learning query strategy, which uses diversity-based sampling on consistency-based embeddings.
arXiv Detail & Related papers (2022-07-25T16:11:55Z)
- Pareto Optimization for Active Learning under Out-of-Distribution Data Scenarios [79.02009938011447]
We propose a sampling scheme, which selects optimal subsets of unlabeled samples with fixed batch size from the unlabeled data pool.
Experimental results show its effectiveness on both classical Machine Learning (ML) and Deep Learning (DL) tasks.
arXiv Detail & Related papers (2022-07-04T04:11:44Z)
- A Comparative Survey of Deep Active Learning [76.04825433362709]
Active Learning (AL) is a set of techniques for reducing labeling cost by sequentially selecting data samples from a large unlabeled data pool for labeling.
Deep Learning (DL) is data-hungry, and the performance of DL models scales monotonically with more training data.
In recent years, Deep Active Learning (DAL) has risen as a feasible solution for maximizing model performance while minimizing the expensive labeling cost.
arXiv Detail & Related papers (2022-03-25T05:17:24Z)
- Semi-supervised Batch Active Learning via Bilevel Optimization [89.37476066973336]
We formulate our approach as a data summarization problem via bilevel optimization.
We show that our method is highly effective in keyword detection tasks in the regime when only few labeled samples are available.
arXiv Detail & Related papers (2020-10-19T16:53:24Z)
- A Comprehensive Benchmark Framework for Active Learning Methods in Entity Matching [17.064993611446898]
In this paper, we build a unified active learning benchmark framework for EM.
The goal of the framework is to provide concrete guidelines for practitioners as to which active learning combinations will work well for EM.
Our framework also includes novel optimizations that improve the quality of the learned model by roughly 9% in terms of F1-score and reduce example selection latencies by up to 10x without affecting the quality of the model.
arXiv Detail & Related papers (2020-03-29T19:08:03Z)
- Fase-AL -- Adaptation of Fast Adaptive Stacking of Ensembles for Supporting Active Learning [0.0]
This work presents the FASE-AL algorithm, which induces classification models from unlabeled instances using Active Learning.
The algorithm achieves promising results in terms of the percentage of correctly classified instances.
arXiv Detail & Related papers (2020-01-30T17:25:47Z)
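Several entries above share the same underlying mechanism: select a fixed-size batch of unlabeled points that are individually uncertain yet collectively diverse in some embedding space (e.g., BAL's adaptive sub-pools and the diversity-based sampling on consistency-based embeddings). Below is a generic sketch of that idea using a greedy, k-center-style selection; the `select_batch` helper, the score weighting, and the distance normalization are illustrative assumptions and do not reproduce any single paper's algorithm.

```python
# Generic uncertainty + diversity batch selection (greedy k-center style).
# Illustrative sketch of a recurring theme, not any paper's exact method.
import numpy as np

def select_batch(embeddings, uncertainty, batch_size, alpha=0.5):
    """Greedily pick a batch that is both uncertain and spread out.

    embeddings:  (n, d) feature vectors of the unlabeled pool
    uncertainty: (n,) per-sample uncertainty scores (higher = less confident)
    alpha:       trade-off between uncertainty (alpha) and diversity (1 - alpha)
    """
    n = embeddings.shape[0]
    min_dist = np.full(n, np.inf)        # distance to the nearest selected point
    selected = []

    for _ in range(batch_size):
        if not selected:
            score = uncertainty.copy()   # first pick: the most uncertain point
        else:
            # Normalize distances so they are comparable to uncertainty scores.
            diversity = min_dist / (min_dist.max() + 1e-12)
            score = alpha * uncertainty + (1 - alpha) * diversity
        score[selected] = -np.inf        # never pick the same point twice
        idx = int(np.argmax(score))
        selected.append(idx)
        # Update each pool point's distance to its nearest selected neighbor.
        dist_to_new = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        min_dist = np.minimum(min_dist, dist_to_new)

    return selected

# Example usage with random embeddings and random uncertainty scores.
rng = np.random.default_rng(0)
print(select_batch(rng.normal(size=(500, 32)), rng.uniform(size=500), batch_size=10))
```

In practice the embeddings would come from the current model (or a self-supervised/consistency-based encoder) and the uncertainty scores from its predictive distribution; the `alpha` trade-off plays the same balancing role that the papers above tune or adapt.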