Label Structure Preserving Contrastive Embedding for Multi-Label
Learning with Missing Labels
- URL: http://arxiv.org/abs/2209.01314v1
- Date: Sat, 3 Sep 2022 02:44:07 GMT
- Title: Label Structure Preserving Contrastive Embedding for Multi-Label
Learning with Missing Labels
- Authors: Zhongchen Ma, Lisha Li, Qirong Mao and Songcan Chen
- Abstract summary: We introduce a label correction mechanism to identify missing labels, and then define a unique contrastive loss for multi-label image classification with missing labels (CLML). Unlike existing multi-label CL losses, CLML also preserves low-rank global and local label dependencies in the latent representation space.
The proposed strategy improves the classification performance of a ResNet-101 model by margins of 1.2%, 1.6%, and 1.3% on three standard datasets, respectively.
- Score: 30.79809627981242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning (CL) has shown impressive advances in image
representation learning, in both supervised multi-class classification and
unsupervised learning. However, these CL methods cannot be directly adapted
to multi-label image classification because of the difficulty of defining
the positive and negative instances to contrast with a given anchor image in
the multi-label scenario, let alone the missing-label one: borrowing the
commonly-used definitions from contrastive multi-class learning would incur
many false negative instances that are unfavorable for learning. In this
paper, with the introduction of a label correction mechanism to identify
missing labels, we first generate positives and negatives for the individual
semantic labels of an anchor image, and then define a contrastive loss for
multi-label image classification with missing labels (CLML). The loss
accurately brings images close to their true positive and false negative
images, and keeps them far away from their true negative images. Unlike
existing multi-label CL losses, CLML also preserves low-rank global and
local label dependencies in the latent representation space, where such
dependencies have been shown to help in dealing with missing labels. To the
best of our knowledge, this is the first general multi-label CL loss for the
missing-label scenario, and it can therefore be seamlessly paired with the
loss of any existing multi-label learning method via a single
hyperparameter. The proposed strategy improves the classification
performance of a ResNet-101 model by margins of 1.2%, 1.6%, and 1.3% on
three standard datasets, MS-COCO, VOC, and NUS-WIDE, respectively. Code is
available at
https://github.com/chuangua/ContrastiveLossMLML.
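To make the pairing mechanism concrete, below is a minimal PyTorch sketch of a label-aware contrastive term added to a base multi-label loss through a single weight. All names (`label_aware_contrastive`, `clml_style_total_loss`, `lam`, `tau`) are illustrative assumptions, and the sketch omits the paper's label correction mechanism and low-rank label-dependency terms, so it is not the authors' implementation; see the repository above for that.

```python
# Hypothetical sketch, NOT the authors' CLML implementation: a generic
# label-aware supervised-contrastive term (two images are positives for a
# label when both carry it) combined with a base loss via one weight `lam`.
import torch
import torch.nn.functional as F

def label_aware_contrastive(z, y, tau=0.1):
    """z: (B, D) L2-normalized embeddings; y: (B, C) float 0/1 label matrix."""
    b = z.size(0)
    sim = z @ z.t() / tau                                     # cosine / temperature
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    self_mask = torch.eye(b, dtype=torch.bool, device=z.device)
    exp_sim = sim.exp().masked_fill(self_mask, 0.0)           # drop self-pairs
    log_prob = sim - exp_sim.sum(dim=1, keepdim=True).clamp_min(1e-12).log()
    pos = (y @ y.t()).masked_fill(self_mask, 0.0)             # shared-label counts
    denom = pos.sum(dim=1).clamp(min=1.0)                     # avoid divide-by-zero
    return -((pos * log_prob).sum(dim=1) / denom).mean()

def clml_style_total_loss(logits, z, y, lam=0.5, tau=0.1):
    base = F.binary_cross_entropy_with_logits(logits, y)      # any existing MLL loss
    contrast = label_aware_contrastive(F.normalize(z, dim=1), y, tau)
    return base + lam * contrast

# usage: logits, z = model(images)   # classifier scores and embeddings
#        loss = clml_style_total_loss(logits, z, y_observed, lam=0.5)
```

Here `lam` plays the role of the single pairing hyperparameter mentioned in the abstract, and `tau` is the usual contrastive temperature; embeddings are L2-normalized so the dot product behaves as a cosine similarity.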
Related papers
- Multi-label Cluster Discrimination for Visual Representation Learning [27.552024985952166]
We propose a novel Multi-Label Cluster Discrimination method named MLCD to enhance representation learning.
Our method achieves state-of-the-art performance on multiple downstream tasks including linear probe, zero-shot classification, and image-text retrieval.
arXiv Detail & Related papers (2024-07-24T14:54:16Z)
- Positive Label Is All You Need for Multi-Label Classification [3.354528906571718]
Multi-label classification (MLC) faces challenges from label noise in training data.
Our paper addresses label noise in MLC by introducing a positive and unlabeled multi-label classification (PU-MLC) method.
PU-MLC employs positive-unlabeled learning, training the model with only positive labels and unlabeled data.
arXiv Detail & Related papers (2023-06-28T08:44:00Z)
- G2NetPL: Generic Game-Theoretic Network for Partial-Label Image Classification [14.82038002764209]
Multi-label image classification aims to predict all possible labels in an image.
Existing works on partial-label learning focus on the case where each training image is labeled with only a subset of its positive/negative labels.
This paper proposes an end-to-end Generic Game-theoretic Network (G2NetPL) for partial-label learning.
arXiv Detail & Related papers (2022-10-20T17:59:21Z)
- PLMCL: Partial-Label Momentum Curriculum Learning for Multi-Label Image Classification [25.451065364433028]
Multi-label image classification aims to predict all possible labels in an image.
Existing works on partial-label learning focus on the case where each training image is annotated with only a subset of its labels.
This paper proposes a new partial-label setting in which only a subset of the training images are labeled, each with only one positive label, while the rest of the training images remain unlabeled.
arXiv Detail & Related papers (2022-08-22T01:23:08Z)
- Dual-Perspective Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels [70.36722026729859]
We propose a dual-perspective semantic-aware representation blending (DSRB) that blends multi-granularity category-specific semantic representation across different images.
The proposed DSRB consistently outperforms current state-of-the-art algorithms on all label-proportion settings.
arXiv Detail & Related papers (2022-05-26T00:33:44Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose an entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels (a minimal sketch of this EM term appears after this list).
Considering the positive-negative imbalance of unannotated labels, we also propose asymmetric pseudo-labeling (APL), with asymmetric-tolerance strategies and a self-paced procedure, to provide more precise supervision.
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels [86.17081952197788]
We propose to blend category-specific representation across different images to transfer information of known labels to complement unknown labels.
Experiments on the MS-COCO, Visual Genome, Pascal VOC 2007 datasets show that the proposed SARB framework obtains superior performance over current leading competitors.
arXiv Detail & Related papers (2022-03-04T07:56:16Z)
- Structured Semantic Transfer for Multi-Label Recognition with Partial Labels [85.6967666661044]
We propose a structured semantic transfer (SST) framework that enables training multi-label recognition models with partial labels.
The framework consists of two complementary transfer modules that explore within-image and cross-image semantic correlations.
Experiments on the Microsoft COCO, Visual Genome and Pascal VOC datasets show that the proposed SST framework obtains superior performance over current state-of-the-art algorithms.
arXiv Detail & Related papers (2021-12-21T02:15:01Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Multi-Label Learning from Single Positive Labels [37.17676289125165]
Predicting all applicable labels for a given image is known as multi-label classification.
We show that it is possible to approach the performance of fully labeled classifiers despite training with significantly fewer confirmed labels.
arXiv Detail & Related papers (2021-06-17T17:58:04Z)
- Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost the ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
arXiv Detail & Related papers (2020-04-20T12:13:43Z)
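Among the entries above, the entropy-maximization (EM) loss is described concretely enough to sketch. The following is an assumed, illustrative implementation, not the paper's code: observed positive labels receive a standard binary cross-entropy term, while unannotated labels are pushed toward maximum-entropy predictions (p ≈ 0.5) rather than being treated as negatives; the asymmetric pseudo-labeling (APL) component is omitted.

```python
# Illustrative sketch of an entropy-maximization loss for SPML; assumed,
# not taken from the paper. `em_loss` and `pos_mask` are hypothetical names.
import torch

def em_loss(logits, pos_mask, eps=1e-7):
    """logits: (B, C) raw scores; pos_mask: (B, C) float, 1 where a positive
    label is observed, 0 where the label is unannotated."""
    p = torch.sigmoid(logits).clamp(eps, 1 - eps)
    # standard BCE on the observed positive entries only
    bce_pos = -(p.log() * pos_mask).sum() / pos_mask.sum().clamp(min=1.0)
    # negative entropy on unannotated entries: minimizing it maximizes entropy,
    # pulling the predicted probabilities toward 0.5 instead of toward 0
    entropy = -(p * p.log() + (1 - p) * (1 - p).log())
    unannotated = 1.0 - pos_mask
    em = -(entropy * unannotated).sum() / unannotated.sum().clamp(min=1.0)
    return bce_pos + em
```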