Pushing One Pair of Labels Apart Each Time in Multi-Label Learning: From
Single Positive to Full Labels
- URL: http://arxiv.org/abs/2302.14695v1
- Date: Tue, 28 Feb 2023 16:08:12 GMT
- Title: Pushing One Pair of Labels Apart Each Time in Multi-Label Learning: From
Single Positive to Full Labels
- Authors: Xiang Li, Xinrui Wang, Songcan Chen
- Abstract summary: In Multi-Label Learning (MLL), it is extremely challenging to accurately annotate every appearing object due to expensive costs and limited knowledge.
A more practical and cheaper alternative is Single Positive Multi-Label Learning (SPMLL), where only one positive label needs to be provided per sample.
Existing SPMLL methods usually assume unknown labels to be negative, which introduces false negatives as noisy labels.
- Score: 29.11589378265006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Multi-Label Learning (MLL), it is extremely challenging to accurately
annotate every appearing object due to expensive costs and limited knowledge.
When facing such a challenge, a more practical and cheaper alternative should
be Single Positive Multi-Label Learning (SPMLL), where only one positive label
needs to be provided per sample. Existing SPMLL methods usually assume unknown
labels as negatives, which inevitably introduces false negatives as noisy
labels. More seriously, Binary Cross Entropy (BCE) loss is often used for
training, which is notoriously not robust to noisy labels. To mitigate this
issue, we customize an objective function for SPMLL by pushing only one pair of
labels apart each time to prevent the domination of negative labels, which is
the main culprit of fitting noisy labels in SPMLL. To further combat such noisy
labels, we explore the high-rankness of label matrix, which can also push apart
different labels. By directly extending from SPMLL to MLL with full labels, a
unified loss applicable to both settings is derived. Experiments on real
datasets demonstrate that the proposed loss not only performs more robustly to
noisy labels for SPMLL but also works well for full labels. Besides, we
empirically discover that high-rankness can mitigate the dramatic performance
drop in SPMLL. Most surprisingly, even without any regularization or fine-tuned
label correction, only adopting our loss defeats state-of-the-art SPMLL methods
on CUB, a dataset that severely lacks labels.
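The abstract describes the objective only in words, so the snippet below is a minimal, hypothetical PyTorch sketch of one way the "push one pair of labels apart each time" idea could look in the single-positive setting: the logit of the one annotated positive is compared with the logit of every other label separately, so each pair contributes its own term and no block of assumed negatives dominates the loss the way it can in a full BCE sum. The function name, the softplus pairwise term, and the averaging are illustrative assumptions, not the paper's actual formulation, and the high-rankness component of the proposed loss is omitted entirely.
```python
import torch
import torch.nn.functional as F


def pairwise_push_loss(logits, positive_idx):
    """Hypothetical pairwise objective for single-positive multi-label data.

    logits: (batch, num_labels) raw scores from the classifier.
    positive_idx: (batch,) index of the single annotated positive label
    per sample. Each (positive, other-label) pair contributes its own
    softplus term, so the loss decomposes over pairs instead of summing
    BCE over all labels at once.
    """
    batch = torch.arange(logits.size(0), device=logits.device)
    pos = logits[batch, positive_idx].unsqueeze(1)   # (batch, 1)
    margins = logits - pos                           # z_j - z_pos for every label j
    pair_terms = F.softplus(margins)                 # small once z_pos > z_j
    mask = torch.ones_like(pair_terms)
    mask[batch, positive_idx] = 0.0                  # drop the positive-vs-itself pair
    return (pair_terms * mask).sum(dim=1).mean()


# Toy usage: 4 samples, 10 candidate labels, one annotated positive each.
logits = torch.randn(4, 10, requires_grad=True)
positive_idx = torch.tensor([2, 0, 7, 5])
loss = pairwise_push_loss(logits, positive_idx)
loss.backward()
```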
Related papers
- Positive Label Is All You Need for Multi-Label Classification [3.354528906571718]
Multi-label classification (MLC) faces challenges from label noise in training data.
Our paper addresses label noise in MLC by introducing a positive and unlabeled multi-label classification (PU-MLC) method.
PU-MLC employs positive-unlabeled learning, training the model with only positive labels and unlabeled data.
arXiv Detail & Related papers (2023-06-28T08:44:00Z) - Understanding Label Bias in Single Positive Multi-Label Learning [20.09309971112425]
It is possible to train effective multi-label classifiers using only one positive label per image.
Standard benchmarks for SPML are derived from traditional multi-label classification datasets.
This work introduces protocols for studying label bias in SPML and provides new empirical results.
arXiv Detail & Related papers (2023-05-24T21:41:08Z) - Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise Label Learning (ILL) is a unified framework for learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z) - Complementary to Multiple Labels: A Correlation-Aware Correction
Approach [65.59584909436259]
We show theoretically how the estimated transition matrix in multi-class CLL could be distorted in multi-labeled cases.
We propose a two-step method to estimate the transition matrix from candidate labels.
arXiv Detail & Related papers (2023-02-25T04:48:48Z) - Large Loss Matters in Weakly Supervised Multi-Label Classification [50.262533546999045]
We first regard unobserved labels as negative labels, casting the weakly supervised multi-label classification (WSML) task into noisy multi-label classification.
We propose novel WSML methods that reject or correct large-loss samples to prevent the model from memorizing the noisy labels.
Our methodology works well, validating that handling large losses properly matters in weakly supervised multi-label classification.
arXiv Detail & Related papers (2022-06-08T08:30:24Z) - One Positive Label is Sufficient: Single-Positive Multi-Label Learning
with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z) - Acknowledging the Unknown for Multi-label Learning with Single Positive
Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative labels in single positive multi-label learning (SPML).
We propose an entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels (an illustrative sketch of this entropy term is given after this list).
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
arXiv Detail & Related papers (2022-03-30T11:43:59Z) - Multi-Label Gold Asymmetric Loss Correction with Single-Label Regulators [6.129273021888717]
We propose a novel Gold Asymmetric Loss Correction with Single-Label Regulators (GALC-SLR) that is robust against noisy labels.
GALC-SLR estimates the noise confusion matrix using single-label samples, then constructs an asymmetric loss correction via the estimated confusion matrix to avoid overfitting to the noisy labels.
Empirical results show that our method outperforms the state-of-the-art original asymmetric loss multi-label classifier under all corruption levels.
arXiv Detail & Related papers (2021-08-04T12:57:29Z) - A Study on the Autoregressive and non-Autoregressive Multi-label
Learning [77.11075863067131]
We propose a self-attention-based variational encoder model to extract the label-label and label-feature dependencies jointly.
Our model can therefore be used to predict all labels in parallel while still including both label-label and label-feature dependencies.
arXiv Detail & Related papers (2020-12-03T05:41:44Z)
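As referenced in the "Acknowledging the Unknown" entry above, the entropy-maximization idea is concrete enough to sketch: rather than treating unannotated labels as negatives, their predicted probabilities are kept uncertain by maximizing binary entropy over them. The snippet below is an illustrative reading under that assumption only; the function name and masking convention are made up here, and that paper's asymmetric pseudo-labeling (APL) component is not shown.
```python
import torch


def entropy_maximization_term(logits, annotated_mask):
    """Illustrative EM-style term: maximize the binary entropy of the
    sigmoid probabilities of unannotated labels (returned negated so
    that minimizing the overall loss maximizes their entropy).

    logits: (batch, num_labels) raw scores.
    annotated_mask: (batch, num_labels) float tensor, 1 where a label
    is observed, 0 where it is unannotated.
    """
    p = torch.sigmoid(logits).clamp(1e-6, 1 - 1e-6)
    entropy = -(p * p.log() + (1 - p) * (1 - p).log())
    unannotated = 1.0 - annotated_mask
    return -(entropy * unannotated).sum() / unannotated.sum().clamp(min=1.0)
```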