Acknowledging the Unknown for Multi-label Learning with Single Positive
Labels
- URL: http://arxiv.org/abs/2203.16219v1
- Date: Wed, 30 Mar 2022 11:43:59 GMT
- Title: Acknowledging the Unknown for Multi-label Learning with Single Positive
Labels
- Authors: Donghao Zhou, Pengfei Chen, Qiong Wang, Guangyong Chen, Pheng-Ann Heng
- Abstract summary: Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose an entropy-maximization (EM) loss to maximize the entropy of the predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
- Score: 65.5889334964149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the difficulty of collecting exhaustive multi-label annotations,
multi-label training data often contains partial labels. We consider an extreme
of this problem, called single positive multi-label learning (SPML), where each
multi-label training image has only one positive label. Traditionally, all
unannotated labels are assumed to be negative in SPML, which introduces false
negative labels and causes model training to be dominated by the assumed
negatives. In this work, we choose to treat all unannotated labels from a
different perspective, i.e., acknowledging that they are unknown. Hence, we
propose an entropy-maximization (EM) loss to maximize the entropy of the
predicted probabilities for all unannotated labels. Considering the positive-negative
label imbalance of unannotated labels, we propose asymmetric pseudo-labeling
(APL) with asymmetric-tolerance strategies and a self-paced procedure to
provide more precise supervision. Experiments show that our method
significantly improves performance and achieves state-of-the-art results on all
four benchmarks.
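To make the EM idea concrete, the following is a minimal, hypothetical sketch of an EM-style loss for a sigmoid multi-label head, written in PyTorch. Only what the abstract states is used (a per-label Bernoulli entropy term applied to unannotated labels, alongside the single annotated positive); the function name `single_positive_em_loss`, the weighting factor `alpha`, and the exact way the two terms are combined are assumptions for illustration, not the paper's official formulation.

```python
import torch

def single_positive_em_loss(logits, positive_idx, alpha=0.1, eps=1e-7):
    """Sketch of an entropy-maximization (EM) style loss for SPML.

    logits:       (batch, num_classes) raw scores from a sigmoid multi-label head.
    positive_idx: (batch,) index of the single annotated positive label per image.
    alpha:        assumed weight on the entropy term for unannotated labels.
    """
    probs = torch.sigmoid(logits)
    rows = torch.arange(probs.size(0))

    # Binary cross-entropy on the single annotated positive label.
    pos_loss = -torch.log(probs[rows, positive_idx].clamp(min=eps)).mean()

    # Bernoulli entropy H(p) = -[p log p + (1 - p) log(1 - p)] per label;
    # maximizing it (minimizing -H) acknowledges unannotated labels as unknown
    # instead of assuming them to be negative.
    entropy = -(probs * probs.clamp(min=eps).log()
                + (1 - probs) * (1 - probs).clamp(min=eps).log())

    # Mask out the annotated positive so only unannotated labels are affected.
    mask = torch.ones_like(probs)
    mask[rows, positive_idx] = 0.0
    em_term = -(entropy * mask).sum() / mask.sum().clamp(min=1.0)

    return pos_loss + alpha * em_term


# Toy usage: batch of 2 images, 4 candidate labels, one annotated positive each.
logits = torch.randn(2, 4)
positives = torch.tensor([1, 3])
loss = single_positive_em_loss(logits, positives)
```

The asymmetric pseudo-labeling (APL) step described in the abstract, which would further turn confidently predicted unannotated labels into pseudo-labels under asymmetric tolerances and a self-paced schedule, is deliberately omitted from this sketch.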
Related papers
- Towards Imbalanced Large Scale Multi-label Classification with Partially Annotated Labels [8.977819892091]
Multi-label classification is a widely encountered problem in daily life, where an instance can be associated with multiple classes.
In this work, we address the issue of label imbalance and investigate how to train neural networks using partial labels.
arXiv Detail & Related papers (2023-07-31T21:50:48Z)
- Understanding Label Bias in Single Positive Multi-Label Learning [20.09309971112425]
It is possible to train effective multi-label classifiers using only one positive label per image.
Standard benchmarks for SPML are derived from traditional multi-label classification datasets.
This work introduces protocols for studying label bias in SPML and provides new empirical results.
arXiv Detail & Related papers (2023-05-24T21:41:08Z)
- Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z)
- Pushing One Pair of Labels Apart Each Time in Multi-Label Learning: From Single Positive to Full Labels [29.11589378265006]
In Multi-Label Learning (MLL), it is extremely challenging to accurately annotate every appearing object due to expensive costs and limited knowledge.
Existing Multi-Label Learning methods assume unknown labels to be negative, which introduces false negatives as noisy labels.
We propose a more practical and cheaper alternative: Single Positive Multi-Label Learning (SPMLL), where only one positive label needs to be provided per sample.
arXiv Detail & Related papers (2023-02-28T16:08:12Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this perspective, we propose to pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement (SMILE), is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces [64.23172847182109]
We show that different negative sampling schemes implicitly trade-off performance on dominant versus rare labels.
We provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance.
arXiv Detail & Related papers (2021-05-12T15:40:13Z)
- A Study on the Autoregressive and non-Autoregressive Multi-label Learning [77.11075863067131]
We propose a self-attention based variational encoder model to extract the label-label and label-feature dependencies jointly.
Our model can therefore be used to predict all labels in parallel while still including both label-label and label-feature dependencies.
arXiv Detail & Related papers (2020-12-03T05:41:44Z)
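For the last entry, the snippet below is a generic, hypothetical sketch of a non-autoregressive multi-label head that predicts all labels in parallel while modeling label-label dependencies (self-attention among label embeddings) and label-feature dependencies (cross-attention to instance features). It is not the cited paper's variational encoder model; the class name `ParallelLabelHead` and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class ParallelLabelHead(nn.Module):
    """Illustrative non-autoregressive multi-label head (not the cited model)."""

    def __init__(self, num_labels, feat_dim, hidden_dim=256, num_heads=4):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, hidden_dim)
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)
        # Self-attention among label embeddings models label-label dependencies.
        self.label_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Cross-attention from labels to features models label-feature dependencies.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, feats):
        # feats: (batch, num_tokens, feat_dim) instance features.
        batch = feats.size(0)
        labels = self.label_embed.weight.unsqueeze(0).expand(batch, -1, -1)
        labels, _ = self.label_attn(labels, labels, labels)
        feats = self.feat_proj(feats)
        labels, _ = self.cross_attn(labels, feats, feats)
        # One logit per label, produced for all labels at once (in parallel).
        return self.classifier(labels).squeeze(-1)  # (batch, num_labels)
```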
This list is automatically generated from the titles and abstracts of the papers on this site.