Probabilistic Active Learning for Active Class Selection
- URL: http://arxiv.org/abs/2108.03891v1
- Date: Mon, 9 Aug 2021 09:20:19 GMT
- Title: Probabilistic Active Learning for Active Class Selection
- Authors: Daniel Kottke, Georg Krempl, Marianne Stecklina, Cornelius Styp von
Rekowski, Tim Sabsch, Tuan Pham Minh, Matthias Deliano, Myra Spiliopoulou,
Bernhard Sick
- Abstract summary: In machine learning, active class selection (ACS) algorithms aim to actively select a class and ask the oracle to provide an instance for that class.
We propose a new algorithm (PAL-ACS) that transforms the ACS problem into an active learning task by introducing pseudo instances.
- Score: 3.6471065658293043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In machine learning, active class selection (ACS) algorithms aim to actively
select a class and ask the oracle to provide an instance for that class to
optimize a classifier's performance while minimizing the number of requests. In
this paper, we propose a new algorithm (PAL-ACS) that transforms the ACS
problem into an active learning task by introducing pseudo instances. These are
used to estimate the usefulness of an upcoming instance for each class using
the performance gain model from probabilistic active learning. Our experimental
evaluation (on synthetic and real data) shows the advantages of our algorithm
compared to state-of-the-art algorithms. It effectively prefers the sampling of
difficult classes and thereby improves the classification performance.
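A minimal sketch of the PAL-ACS idea follows, assuming a simplified kernel-frequency gain estimate in place of the paper's full probabilistic active learning model. The pseudo-instance construction (Gaussian perturbation of already-labeled points), the Dirichlet-smoothed gain, and all function and parameter names are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of PAL-ACS: pseudo instances are generated per class and
# scored with a simplified probabilistic gain estimate; the class whose pseudo
# instances promise the largest gain is requested from the oracle next.
import numpy as np

def local_label_counts(x, X_labeled, y_labeled, n_classes, bandwidth=1.0):
    """Kernel-weighted label counts around x (a crude kernel frequency estimate)."""
    if len(X_labeled) == 0:
        return np.zeros(n_classes)
    d2 = np.sum((X_labeled - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    counts = np.zeros(n_classes)
    for c in range(n_classes):
        counts[c] = w[y_labeled == c].sum()
    return counts

def expected_gain(counts, c):
    """Simplified gain from one extra label of class c, using Dirichlet(1,...,1)
    smoothing of the local class frequencies (a stand-in for the paper's
    performance gain model)."""
    n, k = counts.sum(), len(counts)
    p_now = (counts + 1.0) / (n + k)            # smoothed local class posterior now
    counts_new = counts.copy()
    counts_new[c] += 1.0
    p_new = (counts_new + 1.0) / (n + 1.0 + k)  # after observing one more instance of c
    return p_new.max() - p_now.max()            # gain in expected local accuracy

def pal_acs_select(X_labeled, y_labeled, n_classes, n_pseudo=25, noise=0.5, rng=None):
    """Return the class index whose pseudo instances yield the highest mean gain."""
    rng = np.random.default_rng(rng)
    gains = np.zeros(n_classes)
    for c in range(n_classes):
        X_c = X_labeled[y_labeled == c]
        if len(X_c) == 0:                        # class never seen: maximal priority
            gains[c] = np.inf
            continue
        # Pseudo instances: labeled points of class c perturbed with Gaussian noise
        # (one possible choice; the paper derives them from class-conditional densities).
        idx = rng.integers(0, len(X_c), size=n_pseudo)
        pseudo = X_c[idx] + rng.normal(scale=noise, size=(n_pseudo, X_c.shape[1]))
        g = [expected_gain(local_label_counts(x, X_labeled, y_labeled, n_classes), c)
             for x in pseudo]
        gains[c] = np.mean(g)
    return int(np.argmax(gains))

# Toy usage: class 1 is underrepresented, so it is the likely request.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(3, 1, (3, 2))])
y = np.array([0] * 10 + [1] * 3)
print("request class:", pal_acs_select(X, y, n_classes=2, rng=0))
```

In this sketch the "difficult" (here, underrepresented) class receives the higher mean gain and is therefore requested, mirroring the abstract's claim that PAL-ACS prefers sampling difficult classes.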
Related papers
- Comparative Analysis of Demonstration Selection Algorithms for LLM In-Context Learning [18.58278188791548]
In-context learning can help Large Language Models (LLMs) adapt to new tasks without additional training.
Despite the many proposed demonstration selection algorithms, their relative efficiency and effectiveness remain unclear.
This lack of clarity makes it difficult to apply these algorithms in real-world scenarios.
arXiv Detail & Related papers (2024-10-30T15:11:58Z) - MALADY: Multiclass Active Learning with Auction Dynamics on Graphs [0.9831489366502301]
We introduce the Multiclass Active Learning with Auction Dynamics on Graphs (MALADY) framework for efficient active learning.
We generalize the auction dynamics algorithm on similarity graphs for semi-supervised learning in [24] to incorporate a more general optimization functional.
We also introduce a novel active learning acquisition function that uses the dual variable of the auction algorithm to measure the classifier's uncertainty and prioritize queries near the decision boundaries between classes (a generic uncertainty-sampling sketch appears after this list).
arXiv Detail & Related papers (2024-09-14T16:20:26Z) - The Role of Learning Algorithms in Collective Action [8.955918346078935]
We show that the effective size and success of a collective are highly dependent on the properties of the learning algorithm.
This highlights the necessity of taking the learning algorithm into account when studying the impact of collective action in machine learning.
arXiv Detail & Related papers (2024-05-10T16:36:59Z) - Provably Efficient Representation Learning with Tractable Planning in
Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z) - Active Learning Principles for In-Context Learning with Large Language
Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z) - Algorithm Selection for Deep Active Learning with Imbalanced Datasets [11.902019233549474]
Active learning aims to reduce the number of labeled examples needed to train deep networks.
It is difficult to know in advance which active learning strategy will perform well or best in a given application.
We propose the first adaptive algorithm selection strategy for deep active learning.
arXiv Detail & Related papers (2023-02-14T19:59:49Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are better suited to active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z) - Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z) - Learning to Select Base Classes for Few-shot Classification [96.92372639495551]
We use the Similarity Ratio as an indicator for the generalization performance of a few-shot model.
We then formulate the base class selection problem as a submodular optimization problem over Similarity Ratio.
arXiv Detail & Related papers (2020-04-01T09:55:18Z) - Fase-AL -- Adaptation of Fast Adaptive Stacking of Ensembles for
Supporting Active Learning [0.0]
This work presents the FASE-AL algorithm, which induces classification models from unlabeled instances using Active Learning.
The algorithm achieves promising results in terms of the percentage of correctly classified instances.
arXiv Detail & Related papers (2020-01-30T17:25:47Z)
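Several of the related entries above (e.g., MALADY and the active learning study for in-context learning) rank candidate queries by classifier uncertainty near decision boundaries. The sketch below shows one common, generic form of such an acquisition function, margin-based uncertainty sampling; it is a textbook baseline, not the acquisition rule of any specific paper listed here, and `predict_proba` simply follows the scikit-learn convention.

```python
# Generic margin-based uncertainty sampling: score unlabeled candidates by how
# close the classifier's top two class probabilities are, and query the most
# ambiguous ones (smallest margin = closest to a decision boundary).
import numpy as np
from sklearn.linear_model import LogisticRegression

def margin_uncertainty_query(clf, X_pool, batch_size=5):
    """Return indices of the pool points with the smallest top-1 vs. top-2 margin."""
    proba = clf.predict_proba(X_pool)        # shape: (n_pool, n_classes)
    srt = np.sort(proba, axis=1)
    margin = srt[:, -1] - srt[:, -2]         # small margin = high uncertainty
    return np.argsort(margin)[:batch_size]

# Toy usage on synthetic two-class data.
rng = np.random.default_rng(1)
X_lab = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y_lab = np.array([0] * 20 + [1] * 20)
X_pool = rng.normal(0, 1.5, (200, 2))

clf = LogisticRegression().fit(X_lab, y_lab)
print("query indices:", margin_uncertainty_query(clf, X_pool))
```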