Learning Discriminative Features using Multi-label Dual Space
- URL: http://arxiv.org/abs/2102.13234v1
- Date: Thu, 25 Feb 2021 23:53:21 GMT
- Title: Learning Discriminative Features using Multi-label Dual Space
- Authors: Ali Braytee and Wei Liu
- Abstract summary: We propose a novel multi-label learning method that learns the projection matrix from the feature space to the semantic label space.
We show that the learned projection matrix identifies a subset of discriminative features across multiple semantic labels.
- Score: 8.510041322997116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label learning handles instances associated with multiple class labels.
The original label space is a logical matrix with entries from the Boolean
domain $\in \left \{ 0,1 \right \}$. Logical labels cannot express the
relative importance of each semantic label to an instance. The vast majority
of existing methods map the input features to the label space using linear
projections, accounting for label dependencies through the logical label
matrix. However, the discriminative features are learned using a one-way
projection from the feature representation of an instance into a logical label
space, and the absence of a manifold in the space of logical labels limits the
potential of the learned models. In this work, inspired by a real-world example
in image annotation of reconstructing an image from label importance and
feature weights, we propose a novel multi-label learning method that learns a
projection matrix from the feature space to the semantic label space and
projects it back to the original feature space using an encoder-decoder deep
learning architecture. The key intuition guiding our method is that
discriminative features are identified by mapping the features back and forth
using two linear projections. To the best of our knowledge, this is one of the
first attempts to study the ability to reconstruct the original features from
the label manifold in multi-label learning. We show that the learned projection
matrix identifies a subset of discriminative features across multiple semantic
labels. Extensive experiments on real-world datasets show the superiority of
the proposed method.
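The back-and-forth mapping through two linear projections can be sketched as a tiny encoder-decoder. The following toy implementation is only a hedged illustration of that idea: the problem sizes, the squared-error objective, the regularization weight, and plain gradient descent are all assumptions for demonstration, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label data: n instances, d features, q logical labels in {0, 1}.
n, d, q = 100, 20, 5
X = rng.normal(size=(n, d))
Y = (rng.random(size=(n, q)) > 0.5).astype(float)  # logical label matrix

# Encoder W (d x q) projects features into the semantic label space;
# decoder V (q x d) projects the labels back to the original feature space.
W = rng.normal(scale=0.01, size=(d, q))
V = rng.normal(scale=0.01, size=(q, d))

lr, lam = 1e-3, 0.1

def loss(W, V):
    """Encoding error + reconstruction error + L2 regularization."""
    Z = X @ W
    return (0.5 * np.sum((Z - Y) ** 2)
            + 0.5 * np.sum((Z @ V - X) ** 2)
            + 0.5 * lam * (np.sum(W ** 2) + np.sum(V ** 2)))

init_loss = loss(W, V)
for _ in range(500):
    Z = X @ W              # predicted semantic labels
    E_enc = Z - Y          # encoding error in label space
    E_dec = Z @ V - X      # reconstruction error in feature space
    grad_W = X.T @ E_enc + X.T @ (E_dec @ V.T) + lam * W
    grad_V = Z.T @ E_dec + lam * V
    W -= lr * grad_W
    V -= lr * grad_V
final_loss = loss(W, V)

# Row norms of W rank features by discriminative power across all labels.
feature_scores = np.linalg.norm(W, axis=1)
top_features = np.argsort(feature_scores)[::-1][:5]
```

Because each feature contributes to every semantic label through one row of `W`, the row norms give a single discriminativeness score per feature, which is one plausible reading of how a learned projection matrix "identifies a subset of discriminative features across multiple semantic labels."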
Related papers
- Label merge-and-split: A graph-colouring approach for memory-efficient brain parcellation [3.2506898256325933]
Whole brain parcellation requires inferring hundreds of segmentation labels in large image volumes.
We introduce label merge-and-split, a method that first greatly reduces the effective number of labels required for learning-based whole brain parcellation.
arXiv Detail & Related papers (2024-04-16T13:47:27Z)
- Label Learning Method Based on Tensor Projection [82.51786483693206]
We propose a label learning method based on tensor projection (LLMTP).
We extend the matrix projection transformation to tensor projection, so that the spatial structure information between views can be fully utilized.
In addition, we introduce the tensor Schatten $p$-norm regularization to make the clustering label matrices of different views as consistent as possible.
arXiv Detail & Related papers (2024-02-26T13:03:26Z)
- Active Learning for Semantic Segmentation with Multi-class Label Query [34.49769523529307]
This paper proposes a new active learning method for semantic segmentation.
The multi-class label query introduces a class ambiguity issue in training, as it assigns partial labels to individual pixels.
In the first stage, it trains a segmentation model directly with the partial labels.
In the second stage, it disambiguates the partial labels by generating pixel-wise pseudo labels.
arXiv Detail & Related papers (2023-09-17T16:23:34Z)
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Towards Imbalanced Large Scale Multi-label Classification with Partially Annotated Labels [8.977819892091]
Multi-label classification is a widely encountered problem in daily life, where an instance can be associated with multiple classes.
In this work, we address the issue of label imbalance and investigate how to train neural networks using partial labels.
arXiv Detail & Related papers (2023-07-31T21:50:48Z)
- Description-Enhanced Label Embedding Contrastive Learning for Text Classification [65.01077813330559]
It incorporates Self-Supervised Learning (SSL) into the model learning process and designs a novel self-supervised Relation of Relation (R2) classification task.
It proposes a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as optimization targets.
It exploits external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning.
arXiv Detail & Related papers (2023-06-15T02:19:34Z)
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework for unifying learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z)
- Contrastive Label Enhancement [13.628665406039609]
We propose Contrastive Label Enhancement (ConLE), which generates high-level features via a contrastive learning strategy.
We leverage the obtained high-level features to recover label distributions through a well-designed training strategy.
arXiv Detail & Related papers (2023-05-16T14:53:07Z)
- Multi-Instance Partial-Label Learning: Towards Exploiting Dual Inexact Supervision [53.530957567507365]
In some real-world tasks, each training sample is associated with a candidate label set that contains one ground-truth label and some false positive labels.
In this paper, we formalize such problems as multi-instance partial-label learning (MIPL).
Existing multi-instance learning algorithms and partial-label learning algorithms are suboptimal for solving MIPL problems.
arXiv Detail & Related papers (2022-12-18T03:28:51Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
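The entropy-maximization idea can be sketched as a small loss function. This is only a hedged illustration of the concept as summarized above, assuming per-label sigmoid probabilities; the function name, the mask convention, and the equal weighting of the two terms are assumptions, not the paper's exact formulation.

```python
import numpy as np

def em_loss(probs, annotated_mask, eps=1e-12):
    """Sketch of an entropy-maximization loss for single-positive
    multi-label learning.

    probs:          (n, q) predicted per-label probabilities (sigmoid outputs).
    annotated_mask: (n, q) boolean, True where a label is annotated positive.

    Annotated positives get a standard negative log-likelihood term.
    For every unannotated label, instead of assuming it is negative, the
    binary entropy of the prediction is maximized (its negative is minimized).
    """
    p = np.clip(probs, eps, 1 - eps)
    pos_term = -np.log(p)[annotated_mask].sum()            # NLL on known positives
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # binary entropy per label
    ent_term = -entropy[~annotated_mask].sum()             # maximize entropy
    return (pos_term + ent_term) / probs.shape[0]
```

Binary entropy peaks at a prediction of 0.5, so this term pushes unannotated labels toward maximal uncertainty rather than toward zero, which is the contrast with the traditional assumed-negative treatment.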
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Structured Semantic Transfer for Multi-Label Recognition with Partial Labels [85.6967666661044]
We propose a structured semantic transfer (SST) framework that enables training multi-label recognition models with partial labels.
The framework consists of two complementary transfer modules that explore within-image and cross-image semantic correlations.
Experiments on the Microsoft COCO, Visual Genome and Pascal VOC datasets show that the proposed SST framework obtains superior performance over current state-of-the-art algorithms.
arXiv Detail & Related papers (2021-12-21T02:15:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.