Multi-Label Learning from Single Positive Labels
- URL: http://arxiv.org/abs/2106.09708v1
- Date: Thu, 17 Jun 2021 17:58:04 GMT
- Title: Multi-Label Learning from Single Positive Labels
- Authors: Elijah Cole, Oisin Mac Aodha, Titouan Lorieul, Pietro Perona, Dan
Morris, Nebojsa Jojic
- Abstract summary: Predicting all applicable labels for a given image is known as multi-label classification.
We show that it is possible to approach the performance of fully labeled classifiers despite training with significantly fewer confirmed labels.
- Score: 37.17676289125165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting all applicable labels for a given image is known as multi-label
classification. Compared to the standard multi-class case (where each image has
only one label), it is considerably more challenging to annotate training data
for multi-label classification. When the number of potential labels is large,
human annotators find it difficult to mention all applicable labels for each
training image. Furthermore, in some settings detection is intrinsically
difficult e.g. finding small object instances in high resolution images. As a
result, multi-label training data is often plagued by false negatives. We
consider the hardest version of this problem, where annotators provide only one
relevant label for each image. As a result, training sets will have only one
positive label per image and no confirmed negatives. We explore this special
case of learning from missing labels across four different multi-label image
classification datasets for both linear classifiers and end-to-end fine-tuned
deep networks. We extend existing multi-label losses to this setting and
propose novel variants that constrain the number of expected positive labels
during training. Surprisingly, we show that in some cases it is possible to
approach the performance of fully labeled classifiers despite training with
significantly fewer confirmed labels.
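To make the loss design concrete, below is a minimal PyTorch sketch of the "assume negative" baseline combined with a penalty that constrains the expected number of positive labels per image, the idea described in the abstract. It is a hedged illustration, not the paper's exact formulation: the function name, `k_target`, `reg_weight`, and the normalization are assumptions made here for clarity.

```python
import torch

def single_positive_loss(logits, pos_idx, k_target=2.0, reg_weight=1.0):
    """Sketch: single-positive multi-label loss with an
    expected-positive-count penalty.

    logits:   (B, L) raw scores for L candidate labels.
    pos_idx:  (B,) index of the one confirmed positive label per image.
    k_target: assumed average number of true positives per image
              (a hyperparameter guess, not a value from the paper).
    """
    probs = torch.sigmoid(logits).clamp(1e-7, 1 - 1e-7)
    B, L = probs.shape
    rows = torch.arange(B)

    # Log-loss on the single confirmed positive label.
    loss_pos = -torch.log(probs[rows, pos_idx]).mean()

    # "Assume negative": treat every unobserved label as a negative.
    neg_mask = torch.ones_like(probs)
    neg_mask[rows, pos_idx] = 0.0
    loss_neg = -(torch.log(1 - probs) * neg_mask).sum() / neg_mask.sum()

    # Constrain the expected number of predicted positives toward k_target,
    # counteracting the bias of assuming all unobserved labels are negative.
    expected_pos = probs.sum(dim=1)  # (B,)
    reg = ((expected_pos - k_target) ** 2).mean() / L

    return loss_pos + loss_neg + reg_weight * reg
```

The regularizer counteracts the degenerate behavior of the assume-negative term, which otherwise pushes nearly every prediction toward zero when only one positive label per image is confirmed.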
Related papers
- Determined Multi-Label Learning via Similarity-Based Prompt [12.428779617221366]
In multi-label classification, each training instance is associated with multiple class labels simultaneously.
To alleviate the resulting annotation burden, a novel labeling setting termed Determined Multi-Label Learning (DMLL) is proposed.
arXiv Detail & Related papers (2024-03-25T07:08:01Z)
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework that unifies learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z)
- Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z)
- Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We go beyond the fixed view-level weights inherent in existing methods and propose a quality-aware sub-network that dynamically assigns quality scores to each view of each sample.
Our model not only handles complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z)
- Identifying Incorrect Annotations in Multi-Label Classification Data [14.94741409713251]
We consider algorithms for finding mislabeled examples in multi-label classification datasets.
We propose an extension of the Confident Learning framework to this setting, as well as a label quality score that ranks examples with label errors much higher than those which are correctly labeled.
arXiv Detail & Related papers (2022-11-25T05:03:56Z)
- G2NetPL: Generic Game-Theoretic Network for Partial-Label Image Classification [14.82038002764209]
Multi-label image classification aims to predict all possible labels in an image.
Existing works on partial-label learning focus on the case where each training image is labeled with only a subset of its positive/negative labels.
This paper proposes an end-to-end Generic Game-theoretic Network (G2NetPL) for partial-label learning.
arXiv Detail & Related papers (2022-10-20T17:59:21Z)
- PLMCL: Partial-Label Momentum Curriculum Learning for Multi-Label Image Classification [25.451065364433028]
Multi-label image classification aims to predict all possible labels in an image.
Existing works on partial-label learning focus on the case where each training image is annotated with only a subset of its labels.
This paper proposes a new partial-label setting in which only a subset of the training images are labeled, each with only one positive label, while the rest of the training images remain unlabeled.
arXiv Detail & Related papers (2022-08-22T01:23:08Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose an entropy-maximization (EM) loss that maximizes the entropy of the predicted probabilities for all unannotated labels; a hedged sketch of this idea appears after the list below.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels [34.13899937264952]
ImageNet has been arguably the most popular image classification benchmark, but it is also the one with a significant level of label noise.
Recent studies have shown that many samples contain multiple classes, despite being assumed to be a single-label benchmark.
We argue that the mismatch between single-label annotations and effectively multi-label images is equally, if not more, problematic in the training setup, where random crops are applied.
arXiv Detail & Related papers (2021-01-13T11:55:58Z)
- One-bit Supervision for Image Classification [121.87598671087494]
One-bit supervision is a novel setting of learning from incomplete annotations.
We propose a multi-stage training paradigm which incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm.
arXiv Detail & Related papers (2020-09-14T03:06:23Z)
- Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
arXiv Detail & Related papers (2020-04-20T12:13:43Z)
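As noted in the entry above, here is a minimal sketch of the entropy-maximization idea from "Acknowledging the Unknown for Multi-label Learning with Single Positive Labels": instead of assuming unannotated labels are negative, each unannotated prediction is pushed toward maximum uncertainty. This is a hedged reconstruction from the summary alone, not the authors' implementation; `alpha` is an assumed weight, and the asymmetric pseudo-labeling (APL) component is omitted.

```python
import torch

def em_style_loss(logits, pos_idx, alpha=0.1):
    """Sketch: log-loss on the single confirmed positive plus an
    entropy-maximization term over all unannotated labels."""
    probs = torch.sigmoid(logits).clamp(1e-7, 1 - 1e-7)
    B, L = probs.shape
    rows = torch.arange(B)

    # Standard log-loss on the confirmed positive label.
    loss_pos = -torch.log(probs[rows, pos_idx]).mean()

    # Binary entropy of every prediction; high entropy = maximal uncertainty.
    entropy = -(probs * probs.log() + (1 - probs) * (1 - probs).log())

    # Mask out the confirmed positive; maximize entropy elsewhere by
    # minimizing its negative.
    mask = torch.ones_like(probs)
    mask[rows, pos_idx] = 0.0
    loss_unannotated = -(entropy * mask).sum() / mask.sum()

    return loss_pos + alpha * loss_unannotated
```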