PLMCL: Partial-Label Momentum Curriculum Learning for Multi-Label Image Classification
- URL: http://arxiv.org/abs/2208.09999v1
- Date: Mon, 22 Aug 2022 01:23:08 GMT
- Title: PLMCL: Partial-Label Momentum Curriculum Learning for Multi-Label Image Classification
- Authors: Rabab Abdelfattah, Xin Zhang, Zhenyao Wu, Xinyi Wu, Xiaofeng Wang, and Song Wang
- Abstract summary: Multi-label image classification aims to predict all possible labels in an image.
Existing works on partial-label learning focus on the case where each training image is annotated with only a subset of its labels.
This paper proposes a new partial-label setting in which only a subset of the training images are labeled, each with only one positive label, while the rest of the training images remain unlabeled.
- Score: 25.451065364433028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label image classification aims to predict all possible labels in an
image. It is usually formulated as a partial-label learning problem, given the
fact that it could be expensive in practice to annotate all labels in every
training image. Existing works on partial-label learning focus on the case
where each training image is annotated with only a subset of its labels. A
special case is to annotate only one positive label in each training image. To
further relieve the annotation burden and enhance the performance of the
classifier, this paper proposes a new partial-label setting in which only a
subset of the training images are labeled, each with only one positive label,
while the rest of the training images remain unlabeled. To handle this new
setting, we propose an end-to-end deep network, PLMCL (Partial Label Momentum
Curriculum Learning), that can learn to produce confident pseudo labels for
both partially-labeled and unlabeled training images. The novel momentum-based
law updates the soft pseudo labels on each training image by taking the
updating velocity of the pseudo labels into account, which helps avoid getting
trapped in a low-confidence local minimum, especially at the early stage of
training, when both observed labels and confidence in the pseudo labels are
lacking. In addition, we
present a confidence-aware scheduler to adaptively perform easy-to-hard
learning for different labels. Extensive experiments demonstrate that our
proposed PLMCL outperforms many state-of-the-art multi-label classification
methods under various partial-label settings on three different datasets.
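To make the momentum idea concrete: each soft pseudo label can be viewed as a state driven by a velocity that accumulates the pull toward the model's current predictions. The following is a minimal sketch of such an update, not the paper's exact law; the coefficients `beta` and `eta` and the confidence weighting are illustrative assumptions.

```python
import numpy as np

def momentum_pseudo_label_update(y, v, p, beta=0.9, eta=0.1):
    # y: current soft pseudo labels in [0, 1], shape (num_labels,)
    # v: current updating velocity of the pseudo labels, same shape
    # p: model-predicted probabilities for the same image
    # beta, eta: illustrative momentum coefficient and step size
    #
    # Accumulate velocity: past velocity plus the pull toward the
    # current prediction. Early in training, when predictions are
    # low-confidence, the velocity term keeps the pseudo labels
    # moving instead of stalling in a low-confidence region.
    v = beta * v + eta * (p - y)
    # Apply the velocity and keep the labels valid probabilities.
    y = np.clip(y + v, 0.0, 1.0)
    return y, v

def confidence_weight(y):
    # Confidence-aware weight for easy-to-hard scheduling: pseudo
    # labels far from 0.5 (i.e., confident) receive larger weight,
    # so confident ("easy") labels dominate the loss first.
    return np.abs(2.0 * y - 1.0)
```

Under this reading, the easy-to-hard behavior of the scheduler falls out of the weight: ambiguous labels near 0.5 contribute little to the loss until the momentum updates push them toward a confident value.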
Related papers
- Determined Multi-Label Learning via Similarity-Based Prompt [12.428779617221366]
In multi-label classification, each training instance is associated with multiple class labels simultaneously.
To reduce this annotation burden, a novel labeling setting termed Determined Multi-Label Learning (DMLL) is proposed.
arXiv Detail & Related papers (2024-03-25T07:08:01Z)
- Vision-Language Pseudo-Labels for Single-Positive Multi-Label Learning [11.489541220229798]
In general multi-label learning, a model learns to predict multiple labels or categories for a single input image.
This is in contrast with standard multi-class image classification, where the task is predicting a single label from many possible labels for an image.
arXiv Detail & Related papers (2023-10-24T16:36:51Z)
- Distilling Self-Supervised Vision Transformers for Weakly-Supervised Few-Shot Classification & Segmentation [58.03255076119459]
We address the task of weakly-supervised few-shot image classification and segmentation by leveraging a Vision Transformer (ViT).
Our proposed method takes token representations from the self-supervised ViT and leverages their correlations, via self-attention, to produce classification and segmentation predictions.
Experiments on Pascal-5i and COCO-20i demonstrate significant performance gains in a variety of supervision settings.
arXiv Detail & Related papers (2023-07-07T06:16:43Z)
- Pseudo Labels for Single Positive Multi-Label Learning [0.0]
Single positive multi-label (SPML) learning is a cost-effective solution, where models are trained on a single positive label per image.
In this work, we propose a method to turn single positive data into fully-labeled data: Pseudo Multi-Labels.
arXiv Detail & Related papers (2023-06-01T17:21:42Z)
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework that unifies learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z)
- G2NetPL: Generic Game-Theoretic Network for Partial-Label Image Classification [14.82038002764209]
Multi-label image classification aims to predict all possible labels in an image.
Existing works on partial-label learning focus on the case where each training image is labeled with only a subset of its positive/negative labels.
This paper proposes an end-to-end Generic Game-theoretic Network (G2NetPL) for partial-label learning.
arXiv Detail & Related papers (2022-10-20T17:59:21Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement (SMILE), is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative labels in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
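As a rough illustration of the EM loss above: maximizing the entropy of predictions on unannotated labels amounts to minimizing its negative. The sketch below assumes sigmoid outputs and a 0/1 mask of observed labels; the function name and masking convention are assumptions, not that paper's code.

```python
import torch

def em_loss_unannotated(logits, annotated_mask):
    # logits: raw model outputs, shape (batch, num_labels)
    # annotated_mask: 1.0 where a label is observed, 0.0 otherwise
    p = torch.sigmoid(logits)
    eps = 1e-7  # avoid log(0)
    # Binary entropy of each predicted probability.
    entropy = -(p * torch.log(p + eps) + (1.0 - p) * torch.log(1.0 - p + eps))
    # Maximizing entropy on unannotated labels = minimizing its negative,
    # averaged over the unannotated entries only.
    unannotated = 1.0 - annotated_mask
    return -(entropy * unannotated).sum() / unannotated.sum().clamp(min=1.0)
```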
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Structured Semantic Transfer for Multi-Label Recognition with Partial Labels [85.6967666661044]
We propose a structured semantic transfer (SST) framework that enables training multi-label recognition models with partial labels.
The framework consists of two complementary transfer modules that explore within-image and cross-image semantic correlations.
Experiments on the Microsoft COCO, Visual Genome and Pascal VOC datasets show that the proposed SST framework obtains superior performance over current state-of-the-art algorithms.
arXiv Detail & Related papers (2021-12-21T02:15:01Z)
- Multi-Label Learning from Single Positive Labels [37.17676289125165]
Predicting all applicable labels for a given image is known as multi-label classification.
We show that it is possible to approach the performance of fully labeled classifiers despite training with significantly fewer confirmed labels.
arXiv Detail & Related papers (2021-06-17T17:58:04Z)
- Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost the ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
arXiv Detail & Related papers (2020-04-20T12:13:43Z)