Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial
Awareness
- URL: http://arxiv.org/abs/2105.14083v1
- Date: Fri, 28 May 2021 19:58:18 GMT
- Authors: Glenn Dawson, Robi Polikar
- Abstract summary: We propose a principled model of label noise that generalizes instance-dependent noise to multiple labelers.
Under our labeler-dependent model, label noise manifests itself under two modalities: natural error of good-faith labelers, and adversarial labels provided by malicious actors.
We present two adversarial attack vectors that more accurately reflect the label noise that may be encountered in real-world settings.
- Score: 2.1930130356902207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most studies on learning from noisy labels rely on unrealistic models of
i.i.d. label noise, such as class-conditional transition matrices. More recent
work on instance-dependent noise models is more realistic, but assumes a single
generative process for label noise across the entire dataset. We propose a more
principled model of label noise that generalizes instance-dependent noise to
multiple labelers, based on the observation that modern datasets are typically
annotated using distributed crowdsourcing methods. Under our labeler-dependent
model, label noise manifests itself under two modalities: natural error of
good-faith labelers, and adversarial labels provided by malicious actors. We
present two adversarial attack vectors that more accurately reflect the label
noise that may be encountered in real-world settings, and demonstrate that
under our multimodal noisy labels model, state-of-the-art approaches for
learning from noisy labels are defeated by adversarial label attacks. Finally,
we propose a multi-stage, labeler-aware, model-agnostic framework that reliably
filters noisy labels by leveraging knowledge about which data partitions were
labeled by which labeler, and show that our proposed framework remains robust
even in the presence of extreme adversarial label noise.
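As a rough illustration of the labeler-dependent model described in the abstract, the sketch below simulates labelers of the two modalities: good-faith labelers who make occasional natural errors, and adversarial labelers who corrupt labels systematically. The partition assignments, noise rate, and the specific adversarial flip rule are hypothetical choices for illustration, not the paper's exact construction.

```python
import random

def labeler_dependent_noise(true_labels, labeler_of, labeler_kind,
                            num_classes, good_faith_error=0.1, seed=0):
    """Corrupt labels under a simple two-modality labeler model.

    true_labels  : list of ground-truth class indices
    labeler_of   : labeler id for each example (each labeler owns a data partition)
    labeler_kind : dict mapping labeler id -> "good_faith" or "adversarial"
    """
    rng = random.Random(seed)
    noisy = []
    for y, lab in zip(true_labels, labeler_of):
        if labeler_kind[lab] == "good_faith":
            # natural error: occasional uniform mistake on the wrong classes
            if rng.random() < good_faith_error:
                y = rng.choice([c for c in range(num_classes) if c != y])
        else:
            # adversarial: systematically map each class to a fixed wrong class
            y = (y + 1) % num_classes
        noisy.append(y)
    return noisy
```

Because each example records which labeler produced it, a labeler-aware filter in the spirit of the proposed framework could, for example, validate each labeler's partition against models trained on the remaining partitions and discard partitions whose labels disagree systematically.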
Related papers
- Rethinking the Value of Labels for Instance-Dependent Label Noise
Learning [43.481591776038144]
Noisy labels in real-world applications often depend on both the true label and the features.
In this work, we tackle instance-dependent label noise with a novel deep generative model that avoids explicitly modeling the noise transition matrix.
Our algorithm leverages causal representation learning and simultaneously identifies the high-level content and style latent factors from the data.
arXiv Detail & Related papers (2023-05-10T15:29:07Z)
- Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z)
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label recognition models with partial positive labels (MLR-PPL) has attracted increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations [54.400167806154535]
Existing research on learning with noisy labels mainly focuses on synthetic label noise.
This work presents two new benchmark datasets (CIFAR-10N and CIFAR-100N) with real-world human-annotated label noise.
We show that real-world noisy labels follow an instance-dependent pattern rather than the classically adopted class-dependent ones.
arXiv Detail & Related papers (2021-10-22T22:42:11Z)
- A Realistic Simulation Framework for Learning with Label Noise [17.14439597393087]
We show that this framework generates synthetic noisy labels that exhibit important characteristics of the label noise.
We also benchmark several existing algorithms for learning with noisy labels.
We propose a new technique, Label Quality Model (LQM), that leverages annotator features to predict and correct against noisy labels.
arXiv Detail & Related papers (2021-07-23T18:53:53Z)
- Learning with Feature-Dependent Label Noise: A Progressive Approach [19.425199841491246]
We propose a new family of feature-dependent label noise, which is much more general than commonly used i.i.d. label noise.
We provide theoretical guarantees showing that for a wide variety of (unknown) noise patterns, a classifier trained with this strategy converges to be consistent with the Bayes classifier.
arXiv Detail & Related papers (2021-03-13T17:34:22Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z)
- Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels [86.5943044285146]
The label noise transition matrix $T$ reflects the probabilities that true labels flip into noisy ones.
In this paper, we focus on learning under the mixed closed-set and open-set label noise.
Our method better models the mixed label noise, as evidenced by its more robust performance compared with prior state-of-the-art label-noise learning methods.
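The transition matrix described above can be made concrete with a small numeric sketch; the 3-class matrix below is a hypothetical closed-set example, not taken from the paper.

```python
import random

# T[i][j] = probability that true class i is observed as noisy class j;
# each row sums to 1. Hypothetical 3-class closed-set example.
T = [
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.7],
]

def corrupt(true_class, rng):
    """Sample a noisy label by drawing from row T[true_class]."""
    return rng.choices(range(len(T)), weights=T[true_class], k=1)[0]

rng = random.Random(0)
noisy = [corrupt(y, rng) for y in [0, 1, 2, 2, 1, 0]]
```

Open-set noise is not captured by such a matrix, since the noisy label may belong to a class that never appears among the true labels, which is the gap the extended matrix in this paper addresses.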
arXiv Detail & Related papers (2020-12-02T02:42:45Z)
- Label Noise Types and Their Effects on Deep Learning [0.0]
In this work, we provide a detailed analysis of the effects of different kinds of label noise on learning.
We propose a generic framework to generate feature-dependent label noise, which we show to be the most challenging case for learning.
For the ease of other researchers to test their algorithms with noisy labels, we share corrupted labels for the most commonly used benchmark datasets.
arXiv Detail & Related papers (2020-03-23T18:03:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.