Inconsistency-Based Data-Centric Active Open-Set Annotation
- URL: http://arxiv.org/abs/2401.04923v1
- Date: Wed, 10 Jan 2024 04:18:02 GMT
- Title: Inconsistency-Based Data-Centric Active Open-Set Annotation
- Authors: Ruiyu Mao, Ouyang Xu, Yunhui Guo
- Abstract summary: NEAT is a data-centric active learning method that actively annotates open-set data.
NEAT achieves significantly better performance than state-of-the-art active learning methods for active open-set annotation.
- Score: 6.652785290214744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active learning is a commonly used approach that reduces the labeling effort
required to train deep neural networks. However, the effectiveness of current
active learning methods is limited by their closed-world assumptions, which
assume that all data in the unlabeled pool comes from a set of predefined known
classes. This assumption is often not valid in practical situations, as there
may be unknown classes in the unlabeled data, leading to the active open-set
annotation problem. The presence of unknown classes in the data can
significantly impact the performance of existing active learning methods due to
the uncertainty they introduce. To address this issue, we propose a novel
data-centric active learning method called NEAT that actively annotates
open-set data. NEAT is designed to label data from the known classes in an
unlabeled pool that contains both known and unknown classes. It utilizes the
clusterability of
labels to identify the known classes from the unlabeled pool and selects
informative samples from those classes based on an inconsistency criterion
that measures the disagreement between model predictions and the local feature
distribution. Unlike the recently proposed learning-centric method for the same
problem, NEAT is a data-centric approach and is much more computationally
efficient. Our experiments demonstrate that NEAT
achieves significantly better performance than state-of-the-art active learning
methods for active open-set annotation.
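As a rough illustration of the two ideas described in the abstract, the sketch below first filters the unlabeled pool by the purity of each sample's labeled nearest-neighbor labels (a crude stand-in for label clusterability), then ranks the surviving samples by the disagreement between the model's predictive distribution and the local label distribution. The function names, the `k` and `purity_threshold` parameters, and the total-variation inconsistency score are illustrative assumptions, not the authors' exact NEAT procedure.

```python
# Hypothetical sketch of clusterability filtering + prediction/neighborhood
# inconsistency scoring; not the authors' exact NEAT algorithm.
import numpy as np


def knn_label_distribution(feat, labeled_feats, labeled_ys, num_classes, k=10):
    """Class histogram over the k nearest labeled neighbors of one feature."""
    dists = np.linalg.norm(labeled_feats - feat, axis=1)
    nn_idx = np.argsort(dists)[:k]
    hist = np.bincount(labeled_ys[nn_idx], minlength=num_classes).astype(float)
    return hist / hist.sum()


def select_queries(unlabeled_feats, model_probs, labeled_feats, labeled_ys,
                   num_classes, budget, k=10, purity_threshold=0.5):
    """Pick `budget` unlabeled samples to annotate: keep samples whose labeled
    neighborhood is dominated by a known class (a crude clusterability filter),
    then take those whose model prediction disagrees most with that
    neighborhood's label distribution."""
    scored = []
    for i, feat in enumerate(unlabeled_feats):
        local = knn_label_distribution(feat, labeled_feats, labeled_ys,
                                       num_classes, k)
        # If no known class dominates the neighborhood, treat the sample as
        # likely belonging to an unknown class and skip it.
        if local.max() < purity_threshold:
            continue
        # Inconsistency score: total-variation distance between the model's
        # predictive distribution and the neighbor-based label distribution.
        inconsistency = 0.5 * np.abs(model_probs[i] - local).sum()
        scored.append((inconsistency, i))
    scored.sort(reverse=True)
    return [i for _, i in scored[:budget]]
```

Under these assumptions, a call such as `select_queries(U_feats, U_probs, L_feats, L_ys, num_classes=10, budget=100)` on penultimate-layer features and softmax probabilities from the current model would return the indices of unlabeled samples to send for annotation in the next round.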
Related papers
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z) - Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
arXiv Detail & Related papers (2023-07-17T08:31:59Z) - Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning [69.81438976273866]
Open-set semi-supervised learning (Open-set SSL) considers a more practical scenario, where unlabeled data and test data contain new categories (outliers) not observed in labeled data (inliers).
We introduce evidential deep learning (EDL) as an outlier detector to quantify different types of uncertainty, and design different uncertainty metrics for self-training and inference.
We propose a novel adaptive negative optimization strategy, making EDL more tailored to the unlabeled dataset containing both inliers and outliers.
arXiv Detail & Related papers (2023-03-21T09:07:15Z) - Reinforced Meta Active Learning [11.913086438671357]
We present an online stream-based meta active learning method which learns an informativeness measure on the fly, directly from the data.
The method is based on reinforcement learning and combines episodic policy search and a contextual bandits approach.
We demonstrate on several real datasets that this method learns to select training samples more efficiently than existing state-of-the-art methods.
arXiv Detail & Related papers (2022-03-09T08:36:54Z) - Active Learning for Open-set Annotation [38.739845944840454]
We propose a new active learning framework called LfOSA, which boosts the classification performance with an effective sampling strategy to precisely detect examples from known classes for annotation.
The experimental results show that the proposed method can significantly improve the selection quality of known classes, and achieve higher classification accuracy with lower annotation cost than state-of-the-art active learning methods.
arXiv Detail & Related papers (2021-06-29T06:10:05Z) - OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data [65.19205979542305]
Unlabeled data may include out-of-class samples in practice.
OpenCoS is a method for handling this realistic semi-supervised learning scenario.
arXiv Detail & Related papers (2021-06-29T06:10:05Z) - Data Shapley Valuation for Efficient Batch Active Learning [21.76249748709411]
Active Data Shapley (ADS) is a filtering layer for batch active learning.
We show that ADS is particularly effective when the pool of unlabeled data exhibits real-world caveats.
arXiv Detail & Related papers (2021-04-16T18:53:42Z) - A Novel Perspective for Positive-Unlabeled Learning via Noisy Labels [49.990938653249415]
This research presents a methodology that assigns initial pseudo-labels to unlabeled data, treats them as noisy labels, and trains a deep neural network on the resulting noisy-labeled data.
Experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-03-08T11:46:02Z) - Active Learning Under Malicious Mislabeling and Poisoning Attacks [2.4660652494309936]
Deep neural networks usually require large labeled datasets for training.
In practice, most available data are unlabeled and vulnerable to data poisoning attacks.
In this paper, we develop an efficient active learning method that requires fewer labeled instances.
arXiv Detail & Related papers (2021-01-01T03:43:36Z) - Active Learning for Node Classification: The Additional Learning Ability from Unlabelled Nodes [33.97571297149204]
Given a limited labelling budget, active learning aims to improve performance by carefully choosing which nodes to label.
Our empirical study shows that existing active learning methods for node classification are considerably outperformed by a simple method.
We propose a novel latent space clustering-based active learning method for node classification (LSCALE).
arXiv Detail & Related papers (2020-12-13T13:59:48Z) - Exploratory Machine Learning with Unknown Unknowns [60.78953456742171]
We study a new problem setting in which there are unknown classes in the training data misperceived as other labels.
We propose the exploratory machine learning, which examines and investigates training data by actively augmenting the feature space to discover potentially hidden classes.
arXiv Detail & Related papers (2020-02-05T02:06:56Z)