Active Learning with a Noisy Annotator
- URL: http://arxiv.org/abs/2504.04506v1
- Date: Sun, 06 Apr 2025 14:27:27 GMT
- Title: Active Learning with a Noisy Annotator
- Authors: Netta Shafir, Guy Hacohen, Daphna Weinshall
- Abstract summary: We propose a novel framework called Noise-Aware Active Sampling (NAS) to handle noisy annotations. NAS identifies regions that remain uncovered due to the selection of noisy representatives and enables resampling from these areas. NAS significantly improves performance for standard active learning methods across different noise types and rates.
- Score: 13.272510644778105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active Learning (AL) aims to reduce annotation costs by strategically selecting the most informative samples for labeling. However, most active learning methods struggle in the low-budget regime, where only a few labeled examples are available. This issue becomes even more pronounced when annotators provide noisy labels. A common AL approach for the low- and mid-budget regimes focuses on maximizing the coverage of the labeled set across the entire dataset. We propose a novel framework called Noise-Aware Active Sampling (NAS) that extends existing greedy, coverage-based active learning strategies to handle noisy annotations. NAS identifies regions that remain uncovered due to the selection of noisy representatives and enables resampling from these areas. We introduce a simple yet effective noise filtering approach suitable for the low-budget regime, which leverages the inner mechanism of NAS and can be applied before model training. On multiple computer vision benchmarks, including CIFAR100 and ImageNet subsets, NAS significantly improves performance for standard active learning methods across different noise types and rates.
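Reading the abstract operationally, a minimal sketch of the noise-aware coverage loop might look as follows, assuming a ProbCover-style covering with balls of radius `delta` over self-supervised embeddings; `nas_style_selection`, `delta`, and the `is_noisy` oracle are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def nas_style_selection(emb: np.ndarray, budget: int, delta: float, is_noisy) -> list:
    """Greedy coverage with noise-aware resampling (illustrative sketch).

    emb      : (N, d) array of self-supervised embeddings
    budget   : number of labels to request
    delta    : covering-ball radius
    is_noisy : callable idx -> bool, a stand-in for whatever noise
               filter flags a freshly labeled representative
    """
    # Pairwise distances; fine for a sketch, O(N^2) memory in general.
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    covered = np.zeros(len(emb), dtype=bool)
    selected = []
    while len(selected) < budget:
        # ProbCover-style greedy step: pick the point whose delta-ball
        # covers the most still-uncovered points.
        gain = ((dists <= delta) & ~covered[None, :]).sum(axis=1)
        if selected:
            gain[np.array(selected)] = -1  # never re-query a point
        idx = int(gain.argmax())
        selected.append(idx)
        if is_noisy(idx):
            # Noise-aware step: a noisy representative does not truly
            # cover its neighborhood, so leave the ball uncovered and
            # let a later iteration resample from that region.
            covered[idx] = True
        else:
            covered[dists[idx] <= delta] = True
    return selected
```

Under this reading, a noisy pick "wastes" its ball, so the region stays uncovered and naturally attracts a later query; the abstract's noise filter, which leverages NAS's inner mechanism, would play the role of `is_noisy` here.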
Related papers
- Hide and Seek in Noise Labels: Noise-Robust Collaborative Active Learning with LLM-Powered Assistance [17.359530437698723]
Learning from noisy labels (LNL) is a challenge that arises in many real-world scenarios where collected training data can contain incorrect or corrupted labels.
Most existing solutions identify noisy labels and adopt active learning to query human experts on them for denoising.
In this paper, we propose NoiseAL, an innovative collaborative learning framework based on active learning that combines large language models with small models for learning from noisy labels.
arXiv Detail & Related papers (2025-04-03T04:36:39Z)
- DIRECT: Deep Active Learning under Imbalance and Label Noise [15.571923343398657]
We conduct the first study of active learning under both class imbalance and label noise.
We propose a novel algorithm that robustly identifies the class separation threshold and annotates the most uncertain examples.
Our results demonstrate that DIRECT can save more than 60% of the annotation budget compared to state-of-the-art active learning algorithms.
arXiv Detail & Related papers (2023-12-14T18:18:34Z)
- Improving a Named Entity Recognizer Trained on Noisy Data with a Few Clean Instances [55.37242480995541]
We propose to denoise noisy NER data with guidance from a small set of clean instances.
Along with the main NER model we train a discriminator model and use its outputs to recalibrate the sample weights.
Results on public crowdsourcing and distant supervision datasets show that the proposed method can consistently improve performance with a small guidance set.
arXiv Detail & Related papers (2023-10-25T17:23:37Z)
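As a rough illustration of the discriminator-guided reweighting described in the entry above, the sketch below assumes the discriminator emits a per-sample probability of being clean and uses it to downweight suspect samples; the function and its signature are hypothetical, not the paper's code.

```python
import torch

def reweighted_loss(per_sample_loss: torch.Tensor, p_clean: torch.Tensor) -> torch.Tensor:
    """Downweight likely-noisy samples (hypothetical sketch).

    per_sample_loss : (B,) NER losses, one per training sample
    p_clean         : (B,) discriminator probability that each sample's
                      labels are clean, in [0, 1]
    """
    weights = p_clean / p_clean.sum().clamp_min(1e-8)  # normalize over batch
    return (weights * per_sample_loss).sum()
```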
- Combating Label Noise With A General Surrogate Model For Sample Selection [77.45468386115306]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
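One plausible instantiation of such a CLIP-based filter, sketched below with the Hugging Face `transformers` bindings, keeps a sample only when CLIP's zero-shot prediction matches the annotator's label; the prompt template and the agreement rule are assumptions, not necessarily the paper's exact criterion.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keep_sample(image: Image.Image, given_label: int, class_names: list) -> bool:
    """Return True if CLIP's zero-shot guess agrees with the given label."""
    prompts = [f"a photo of a {c}" for c in class_names]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image[0]  # (num_classes,)
    return int(logits.argmax()) == given_label
```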
- Towards Robust Few-shot Point Cloud Semantic Segmentation [57.075074484313]
Few-shot point cloud semantic segmentation aims to train a model to quickly adapt to new unseen classes with only a handful of support set samples.
We propose Component-level Clean Noise Separation (CCNS) representation learning to learn discriminative feature representations.
We also propose a Multi-scale Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the support set.
arXiv Detail & Related papers (2023-09-20T11:40:10Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples that previous methods require.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- Active Learning Through a Covering Lens [7.952582509792972]
Deep active learning aims to reduce the annotation cost for deep neural networks.
We propose ProbCover, a new active learning algorithm for the low-budget regime.
We show that our principled active learning strategy improves the state-of-the-art in the low-budget regime in several image recognition benchmarks.
arXiv Detail & Related papers (2022-05-23T14:03:23Z)
- UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning [89.56465237941013]
We propose UNICON, a simple yet effective sample selection method which is robust to high label noise.
We obtain an 11.4% improvement over the current state-of-the-art on the CIFAR100 dataset with a 90% noise rate.
arXiv Detail & Related papers (2022-03-28T07:36:36Z)
- Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and can even benefit robustness against inherent noisy labels.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training.
arXiv Detail & Related papers (2021-06-21T07:15:50Z)
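A minimal sketch of the dynamic-label idea, assuming that each batch of open-set auxiliary samples simply receives fresh uniformly random labels at every training step; `odnl_step` and `alpha` are illustrative names, not the authors' code.

```python
import torch
import torch.nn.functional as F

def odnl_step(model, x_clean, y_clean, x_open, num_classes: int, alpha: float = 1.0):
    """One training step with ODNL-style regularization (illustrative).

    x_open gets labels re-drawn uniformly at random on every call, so no
    fixed (wrong) target on the open-set samples can be memorized.
    """
    y_open = torch.randint(0, num_classes, (x_open.size(0),), device=x_open.device)
    loss = F.cross_entropy(model(x_clean), y_clean)
    loss = loss + alpha * F.cross_entropy(model(x_open), y_open)
    return loss
```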
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl images from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)