Multi-Label Gold Asymmetric Loss Correction with Single-Label Regulators
- URL: http://arxiv.org/abs/2108.02032v1
- Date: Wed, 4 Aug 2021 12:57:29 GMT
- Title: Multi-Label Gold Asymmetric Loss Correction with Single-Label Regulators
- Authors: Cosmin Octavian Pene, Amirmasoud Ghiassi, Taraneh Younesian, Robert Birke, Lydia Y. Chen
- Abstract summary: We propose a novel Gold Asymmetric Loss Correction with Single-Label Regulators (GALC-SLR) that is robust against noisy labels.
GALC-SLR estimates the noise confusion matrix using single-label samples, then constructs an asymmetric loss correction via the estimated confusion matrix to avoid overfitting to the noisy labels.
Empirical results show that our method outperforms the state-of-the-art original asymmetric loss multi-label classifier under all corruption levels.
- Score: 6.129273021888717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-label learning is an emerging extension of multi-class
classification in which an image carries multiple labels. Not only is acquiring a
clean and fully labeled dataset extremely expensive in multi-label learning,
but many of the actual labels are also corrupted or missing due to
automated or non-expert annotation techniques. Noisy labels drastically decrease
prediction performance. In this paper, we propose a novel Gold Asymmetric Loss
Correction with Single-Label Regulators (GALC-SLR) that is robust against noisy
labels. GALC-SLR estimates the noise confusion matrix using single-label samples,
then constructs an asymmetric loss correction via the estimated confusion matrix
to avoid overfitting to the noisy labels. Empirical results show that our method
outperforms the state-of-the-art original asymmetric loss multi-label classifier
under all corruption levels, with a mean average precision improvement of up to
28.67% on the real-world MS-COCO dataset, yielding better generalization to
unseen data and increased prediction performance.
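The correction step described in the abstract can be pictured with a short sketch: estimate a class-confusion matrix from trusted single-label samples, then pass the model's predicted probabilities through that matrix before applying an asymmetric (ASL-style) binary loss. The snippet below is only an illustration under those assumptions; the helper names, the focusing parameters `gamma_pos`/`gamma_neg`, and the way the sigmoid outputs are mixed through the matrix are not taken from the paper's implementation.

```python
import torch

def estimate_confusion_matrix(noisy_labels, gold_classes, num_classes):
    """Estimate C[i, j] ~ p(noisy label j observed | true class i) from trusted
    single-label samples (hypothetical helper, not the paper's code).

    noisy_labels: (N, K) float multi-hot noisy annotations of trusted samples
    gold_classes: (N,) the single verified true class of each trusted sample
    """
    C = torch.zeros(num_classes, num_classes)
    counts = torch.zeros(num_classes)
    for y_noisy, c in zip(noisy_labels, gold_classes):
        C[c] += y_noisy
        counts[c] += 1
    mask = counts > 0
    C[mask] /= counts[mask].unsqueeze(1)          # row-normalise observed rows
    C[~mask] = torch.eye(num_classes)[~mask]      # identity rows if no gold data
    return C

def corrected_asymmetric_loss(logits, noisy_targets, C,
                              gamma_pos=0.0, gamma_neg=4.0, eps=1e-8):
    """Forward-corrected asymmetric binary loss (illustrative sketch only):
    clean-label probabilities are mixed through the estimated confusion
    matrix before being compared with the noisy multi-hot targets."""
    p = torch.sigmoid(logits)                     # (B, K) clean-label probabilities
    p_noisy = torch.clamp(p @ C, eps, 1 - eps)    # probabilities in "noisy" space
    # ASL-style asymmetric focusing: down-weight easy negatives more strongly
    pos_term = noisy_targets * ((1 - p_noisy) ** gamma_pos) * torch.log(p_noisy)
    neg_term = (1 - noisy_targets) * (p_noisy ** gamma_neg) * torch.log(1 - p_noisy)
    return -(pos_term + neg_term).mean()
```

In this reading, the single-label "gold" samples pin down the rows of the confusion matrix, and the forward correction keeps the training signal in the noisy-label space, so the network is not pushed to fit the corrupted annotations directly.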
Related papers
- Toward Robustness in Multi-label Classification: A Data Augmentation Strategy against Imbalance and Noise [31.917931364881625]
Multi-label classification poses challenges due to imbalanced and noisy labels in training data.
We propose a unified data augmentation method, named BalanceMix, to address these challenges.
Our approach includes two samplers for imbalanced labels, generating minority-augmented instances with high diversity.
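As a rough illustration of pairing a minority-oriented sampler with Mixup-style augmentation, the snippet below weights samples by their rarest positive label and blends a minority-sampled instance with a randomly sampled one. The weighting scheme, the Beta parameter, and the helper names are assumptions for illustration, not the BalanceMix implementation.

```python
import numpy as np

def minority_sampling_weights(labels):
    """Per-sample weights that favour instances carrying rare labels
    (illustrative helper, not the paper's sampler).

    labels: (N, K) multi-hot array of (possibly noisy) annotations.
    """
    class_freq = labels.sum(axis=0) + 1e-6               # occurrences per label
    inv_freq = 1.0 / class_freq                           # rare labels weigh more
    sample_w = (labels * inv_freq).max(axis=1)            # rarest positive label
    sample_w[labels.sum(axis=1) == 0] = inv_freq.mean()   # fallback for unlabeled rows
    return sample_w / sample_w.sum()

def mixup_pair(x_minor, y_minor, x_rand, y_rand, alpha=4.0, rng=np.random):
    """Mixup between a minority-sampled and a randomly sampled instance,
    producing a diverse minority-augmented example (generic Mixup sketch)."""
    lam = rng.beta(alpha, alpha)
    return lam * x_minor + (1 - lam) * x_rand, lam * y_minor + (1 - lam) * y_rand
```

The returned weights could, for example, drive a weighted random sampler, while a second uniform sampler supplies the mixing partners.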
arXiv Detail & Related papers (2023-12-12T09:09:45Z)
- Complementary to Multiple Labels: A Correlation-Aware Correction Approach [65.59584909436259]
We show theoretically how the estimated transition matrix in multi-class CLL could be distorted in multi-labeled cases.
We propose a two-step method to estimate the transition matrix from candidate labels.
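A generic way to picture transition-matrix estimation from candidate labels is an anchor-point-style estimate: take the samples most confidently predicted as each class and average their observed candidate-label patterns. The sketch below illustrates only that general idea; it is not the paper's two-step procedure, and the anchor count is an arbitrary choice.

```python
import numpy as np

def estimate_transition_matrix(pred_probs, candidate_labels, num_anchor=10):
    """Anchor-point-style estimate of a label transition matrix (sketch only).

    pred_probs:       (N, K) model probabilities over the true classes
    candidate_labels: (N, K) multi-hot candidate/complementary annotations
    """
    n, k = pred_probs.shape
    T = np.zeros((k, k))
    for i in range(k):
        # Samples most confidently predicted as class i serve as anchors
        anchors = np.argsort(pred_probs[:, i])[-num_anchor:]
        # Their average candidate-label pattern becomes row i of T
        T[i] = candidate_labels[anchors].mean(axis=0)
    # Row-normalise so each row is a conditional distribution
    T /= np.clip(T.sum(axis=1, keepdims=True), 1e-12, None)
    return T
```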
arXiv Detail & Related papers (2023-02-25T04:48:48Z)
- Learning from Noisy Labels with Decoupled Meta Label Purifier [33.87292143223425]
Training deep neural networks with noisy labels is challenging since DNNs can easily memorize inaccurate labels.
In this paper, we propose a novel multi-stage label purifier named DMLP.
DMLP decouples the label correction process into label-free representation learning and a simple meta label purifier.
arXiv Detail & Related papers (2023-02-14T03:39:30Z)
- Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection [149.23913018423022]
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels.
Two-stage self-training methods have achieved significant improvements by self-generating pseudo labels.
We propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training.
arXiv Detail & Related papers (2022-12-08T05:53:53Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
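The entropy-maximization idea can be sketched as a loss that applies standard BCE only to the observed (single positive) entries and rewards high binary entropy on the unannotated ones instead of treating them as negatives. The weighting factor `lam` and the helper signature below are illustrative assumptions, not the paper's EM/APL formulation.

```python
import torch

def em_loss_single_positive(logits, targets, annotated_mask, lam=0.2, eps=1e-8):
    """Sketch of an entropy-maximization objective for single-positive
    multi-label learning (illustrative only).

    logits, targets, annotated_mask: (B, K) tensors; annotated_mask is 1
    where a label was actually observed.
    """
    m = annotated_mask.float()                        # 1 = observed, 0 = unannotated
    p = torch.clamp(torch.sigmoid(logits), eps, 1 - eps)
    # Standard BCE restricted to the observed (single positive) entries
    bce = -(targets * torch.log(p) + (1 - targets) * torch.log(1 - p))
    bce = (bce * m).sum() / m.sum().clamp(min=1)
    # Binary entropy of the unannotated entries; maximizing it avoids the
    # usual "assume negative" bias, so it is subtracted from the loss
    ent = -(p * torch.log(p) + (1 - p) * torch.log(1 - p))
    em = (ent * (1 - m)).sum() / (1 - m).sum().clamp(min=1)
    return bce - lam * em
```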
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise [6.303101074386922]
Robust Label Refurbishment (Robust LR) is a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels.
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
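A generic label-refurbishment step of this kind can be written as a confidence-weighted blend of the given label and the model's current prediction. The helper below is only a sketch of that general idea, not the Robust LR algorithm, and the confidence estimate itself is assumed to come from a separate estimator.

```python
import numpy as np

def refurbish_labels(given_labels, pred_probs, confidence):
    """Blend given (noisy) labels with model predictions, trusting each
    label only as far as its estimated probability of being clean.

    given_labels: (N, K) one-hot noisy labels
    pred_probs:   (N, K) current model predictions
    confidence:   (N,)   estimated probability that each given label is clean
    """
    w = confidence[:, None]                    # per-sample trust in the label
    return w * given_labels + (1 - w) * pred_probs
```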
arXiv Detail & Related papers (2021-12-06T12:10:17Z)
- Learning with Noisy Labels by Efficient Transition Matrix Estimation to Combat Label Miscorrection [3.48062110627933]
Recent studies on learning with noisy labels have shown remarkable performance by exploiting a small clean dataset.
Meta-learning-based label correction methods further improve performance by correcting noisy labels on the fly.
However, there is no safeguard on the label miscorrection, resulting in unavoidable performance degradation.
We propose a robust and efficient method that learns a label transition matrix on the fly.
arXiv Detail & Related papers (2021-11-29T20:12:17Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR10/CIFAR100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
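The selection rule can be illustrated with a plain kNN version: keep a sample when its annotated label agrees with most of its nearest neighbors in feature space. The value of k, the cosine-similarity choice, and the agreement threshold below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def select_consistent_samples(features, labels, k=10, threshold=0.5):
    """Neighborhood-consistency sample selection (generic kNN sketch).

    features: (N, D) feature vectors; labels: (N,) integer class labels.
    Returns a boolean mask of samples treated as "clean".
    """
    # Cosine similarity between all pairs of normalised features
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)               # exclude the sample itself
    neighbors = np.argsort(sim, axis=1)[:, -k:]  # indices of k nearest neighbors
    # Fraction of neighbors sharing the sample's annotated label
    agreement = (labels[neighbors] == labels[:, None]).mean(axis=1)
    return agreement >= threshold
```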
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces [64.23172847182109]
We show that different negative sampling schemes implicitly trade-off performance on dominant versus rare labels.
We provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance.
arXiv Detail & Related papers (2021-05-12T15:40:13Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not rely on domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
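A minimal sketch of uncertainty-aware selection, assuming the uncertainty comes from several stochastic forward passes (e.g. MC-dropout): accept a hard pseudo-label only when its mean confidence is high and its variation across passes is low. The thresholds and the use of the standard deviation are illustrative choices, not the UPS paper's exact criteria.

```python
import numpy as np

def select_pseudo_labels(prob_samples, conf_thresh=0.9, unc_thresh=0.05):
    """Uncertainty-aware pseudo-label selection (illustrative sketch).

    prob_samples: (T, N, K) class probabilities from T stochastic passes.
    Returns hard pseudo-labels and a boolean mask of accepted samples.
    """
    mean_p = prob_samples.mean(axis=0)           # (N, K) average confidence
    std_p = prob_samples.std(axis=0)             # (N, K) predictive uncertainty
    pseudo = mean_p.argmax(axis=1)               # candidate hard pseudo-labels
    idx = np.arange(len(pseudo))
    conf = mean_p[idx, pseudo]
    unc = std_p[idx, pseudo]
    keep = (conf >= conf_thresh) & (unc <= unc_thresh)
    return pseudo, keep
```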
arXiv Detail & Related papers (2021-01-15T23:29:57Z)