Learning Confident Classifiers in the Presence of Label Noise
- URL: http://arxiv.org/abs/2301.00524v2
- Date: Sat, 9 Dec 2023 07:55:28 GMT
- Title: Learning Confident Classifiers in the Presence of Label Noise
- Authors: Asma Ahmed Hashmi, Aigerim Zhumabayeva, Nikita Kotelevskii, Artem Agafonov, Mohammad Yaqub, Maxim Panov and Martin Takáč
- Abstract summary: This paper proposes a probabilistic model for noisy observations that allows us to build confident classification and segmentation models.
Our experiments show that our algorithm outperforms state-of-the-art solutions for the considered classification and segmentation problems.
- Score: 5.829762367794509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of Deep Neural Network (DNN) models significantly depends on the
quality of provided annotations. In medical image segmentation, for example,
having multiple expert annotations for each data point is common to minimize
subjective annotation bias. Then, the goal of estimation is to filter out the
label noise and recover the ground-truth masks, which are not explicitly given.
This paper proposes a probabilistic model for noisy observations that allows us
to build confident classification and segmentation models. To accomplish this,
we explicitly model the label noise and introduce a new information-based
regularization that pushes the network to recover the ground-truth labels. In
addition, for the segmentation task we adjust the loss function by prioritizing
learning in high-confidence regions where all the annotators agree on the labeling.
We evaluate the proposed method on a series of classification tasks, such as
noisy versions of the MNIST, CIFAR-10, and Fashion-MNIST datasets, as well as
CIFAR-10N, a real-world dataset with noisy human annotations. Additionally, for
the segmentation task, we consider several medical imaging datasets, such as
LIDC and RIGA, that reflect real-world variability among multiple annotators.
Our experiments show that our algorithm outperforms state-of-the-art solutions
for the considered classification and segmentation problems.
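To make the two ingredients above concrete, here is a minimal PyTorch-style sketch of one plausible reading: a learnable class-transition matrix as the explicit label-noise model, and an entropy penalty as the information-based regularizer pushing the clean-label predictions to be confident. The parameterization, names, and hyperparameters are our assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidentNoisyLabelLoss(nn.Module):
    """Sketch only: a class-level transition matrix T with
    T[i, j] ~ P(noisy = j | clean = i), plus an entropy regularizer
    that pushes the clean-label predictions to be confident."""

    def __init__(self, num_classes: int, reg_weight: float = 0.1):
        super().__init__()
        # Initialize near the identity so the noise model starts close to "no noise".
        self.transition_logits = nn.Parameter(5.0 * torch.eye(num_classes))
        self.reg_weight = reg_weight

    def forward(self, clean_logits: torch.Tensor, noisy_targets: torch.Tensor) -> torch.Tensor:
        p_clean = F.softmax(clean_logits, dim=1)      # P(clean label | x)
        T = F.softmax(self.transition_logits, dim=1)  # rows are probability distributions
        p_noisy = p_clean @ T                         # P(noisy label | x)
        nll = F.nll_loss(torch.log(p_noisy + 1e-8), noisy_targets)
        # Information-style penalty: low entropy = confident clean predictions.
        entropy = -(p_clean * torch.log(p_clean + 1e-8)).sum(dim=1).mean()
        return nll + self.reg_weight * entropy
```

For the segmentation setting, the same loss would apply per pixel; the high-confidence weighting the abstract describes could then be realized by up-weighting pixels on which all annotators agree.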
Related papers
- Benchmarking Label Noise in Instance Segmentation: Spatial Noise Matters [2.53740603524637]
This study sheds light on the quality of segmentation masks produced by various models.
It challenges the efficacy of popular methods designed to address learning with label noise.
arXiv Detail & Related papers (2024-06-16T10:49:23Z)
- Noisy Label Processing for Classification: A Survey [2.8821062918162146]
In the long, tedious process of data annotation, annotators are prone to making mistakes, resulting in incorrect image labels.
It is crucial to combat noisy labels for computer vision tasks, especially for classification tasks.
We propose an algorithm to generate a synthetic label noise pattern guided by real-world data.
arXiv Detail & Related papers (2024-04-05T15:11:09Z)
- Fusing Conditional Submodular GAN and Programmatic Weak Supervision [5.300742881753571]
Programmatic Weak Supervision (PWS) and generative models serve as crucial tools to maximize the utility of existing datasets without resorting to data gathering and manual annotation processes.
PWS uses various weak supervision techniques to estimate the underlying class labels of data, while generative models primarily concentrate on sampling from the underlying distribution of the given dataset.
Recently, WSGAN proposed a mechanism to fuse these two models.
arXiv Detail & Related papers (2023-12-16T07:49:13Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- Compensation Learning in Semantic Segmentation [22.105356244579745]
We propose Compensation Learning in Semantic Segmentation, a framework to identify and compensate for ambiguities as well as label noise.
We introduce a novel uncertainty branch for neural networks that restricts the compensation bias to relevant regions.
Our method is integrated into state-of-the-art segmentation frameworks, and several experiments demonstrate that the proposed compensation mechanism learns inter-class relations.
arXiv Detail & Related papers (2023-04-26T10:26:11Z)
- Learning advisor networks for noisy image classification [22.77447144331876]
We introduce the novel concept of an advisor network to address the problem of noisy labels in image classification.
We train it with a meta-learning strategy so that it can adapt throughout the training of the main model.
We tested our method on CIFAR10 and CIFAR100 with synthetic noise, and on Clothing1M which contains real-world noise, reporting state-of-the-art results.
arXiv Detail & Related papers (2022-11-08T11:44:08Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
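One plausible instantiation of this idea, sketched below under our own assumptions (the choice of k, cosine similarity, and KL divergence are illustrative, not taken from the paper): pull each example's predicted distribution toward a similarity-weighted mixture of its in-batch neighbors' predictions.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5, temperature=0.1):
    """Sketch: penalize divergence between each prediction and a
    similarity-weighted mixture of its k nearest in-batch neighbors."""
    feats = F.normalize(features, dim=1)              # cosine-similarity space
    sim = feats @ feats.t()                           # (B, B) pairwise similarities
    sim.fill_diagonal_(float("-inf"))                 # an example is not its own neighbor
    topk_sim, topk_idx = sim.topk(k, dim=1)           # k nearest neighbors per example
    weights = F.softmax(topk_sim / temperature, dim=1)                # (B, k)
    probs = F.softmax(logits, dim=1)                                  # (B, C)
    neighbor_mix = (weights.unsqueeze(-1) * probs[topk_idx]).sum(1)   # (B, C)
    # KL divergence toward the detached neighbor mixture, used as a soft target.
    return F.kl_div(probs.clamp_min(1e-8).log(), neighbor_mix.detach(),
                    reduction="batchmean")
```

Intuitively, a mislabeled example is pulled toward its feature-space neighbors, which mostly carry correct labels, so such a regularizer would dampen the influence of individual label errors.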
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
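The corruption step is simple enough to sketch. The version below approximates each feature's marginal distribution by resampling within the batch, and the corruption_rate value is an assumed hyperparameter, not necessarily SCARF's default.

```python
import torch

def scarf_corrupt(x: torch.Tensor, corruption_rate: float = 0.6) -> torch.Tensor:
    """Sketch of SCARF-style view generation for a tabular batch x of shape
    (batch, features): replace a random subset of each row's features with
    values drawn from that feature's empirical marginal."""
    batch_size, num_features = x.shape
    # Boolean mask: True where a feature gets corrupted.
    mask = torch.rand(batch_size, num_features, device=x.device) < corruption_rate
    # Approximate the per-feature marginal by sampling values from random rows.
    rows = torch.randint(batch_size, (batch_size, num_features), device=x.device)
    marginal_samples = x[rows, torch.arange(num_features, device=x.device)]
    return torch.where(mask, marginal_samples, x)
```

Two independently corrupted views of the same row then form a positive pair for a standard InfoNCE-style contrastive loss.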
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness.
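In contrast to the class-level transition matrix sketched earlier, instance-dependent noise lets the transition matrix vary with the input. A minimal sketch of that shape, with all module names and sizes assumed for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceDependentTransition(nn.Module):
    """Sketch: a small head maps each example's features to its own C x C
    transition matrix, so the noise model depends on the instance."""

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        self.head = nn.Linear(feature_dim, num_classes * num_classes)

    def forward(self, features: torch.Tensor, p_clean: torch.Tensor) -> torch.Tensor:
        # Per-example transition matrices with row-normalized probabilities.
        logits = self.head(features).view(-1, self.num_classes, self.num_classes)
        T = F.softmax(logits, dim=2)                           # (B, C, C)
        # P(noisy | x) = sum_i P(clean = i | x) * T_x[i, :]
        return torch.bmm(p_clean.unsqueeze(1), T).squeeze(1)   # (B, C)
```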
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) trained on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.