Partial Multi-label Learning with Label and Feature Collaboration
- URL: http://arxiv.org/abs/2003.07578v1
- Date: Tue, 17 Mar 2020 08:34:45 GMT
- Title: Partial Multi-label Learning with Label and Feature Collaboration
- Authors: Tingting Yu, Guoxian Yu, Jun Wang, Maozu Guo
- Abstract summary: Partial multi-label learning (PML) models the scenario where each training instance is annotated with a set of candidate labels.
To achieve a credible predictor on PML data, we propose PML-LFC (Partial Multi-label Learning with Label and Feature Collaboration).
PML-LFC estimates the confidence values of relevant labels for each instance using the similarity from both the label and feature spaces.
- Score: 21.294791188490056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial multi-label learning (PML) models the scenario where each training
instance is annotated with a set of candidate labels, and only some of the
labels are relevant. The PML problem arises in many real-world scenarios, as
it is often difficult, or even impossible, to obtain precisely labeled samples.
Several PML solutions have been proposed to combat the misleading effect of the
irrelevant labels concealed in the candidate label sets, but they generally focus
on the smoothness assumption in the feature space or the low-rank assumption in
the label space, while ignoring the negative information between features and
labels.
Specifically, if two instances have largely overlapped candidate labels,
irrespective of their feature similarity, their ground-truth labels should be
similar; while if they are dissimilar in the feature and candidate label space,
their ground-truth labels should also be dissimilar. To achieve a
credible predictor on PML data, we propose a novel approach called PML-LFC
(Partial Multi-label Learning with Label and Feature Collaboration). PML-LFC
estimates the confidence values of relevant labels for each instance using the
similarity from both the label and feature spaces, and trains the desired
predictor with the estimated confidence values. PML-LFC learns the predictor
and the latent label matrix in a reciprocally reinforcing manner within a unified
model, and develops an alternating optimization procedure to jointly optimize
them.
Extensive empirical study on both synthetic and real-world datasets
demonstrates the superiority of PML-LFC.
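The abstract states the mechanism only at a high level; the sketch below is a rough illustration, not the authors' PML-LFC objective. It blends feature-space and candidate-label similarity into one graph, propagates confidences over it, and leaves predictor training as the alternating second step. All function names and the averaging-style update are assumptions.

```python
# A minimal sketch of the label/feature collaboration idea, NOT the authors'
# exact PML-LFC model: instances whose candidate label sets overlap heavily
# get similar confidence vectors, and feature-space neighbors share
# confidence mass. The blending weight and update rule are assumptions.
import numpy as np

def cosine_sim(M):
    """Pairwise cosine similarity between rows of M."""
    norms = np.linalg.norm(M, axis=1, keepdims=True) + 1e-12
    U = M / norms
    return U @ U.T

def estimate_confidences(X, Y_cand, n_iters=20, alpha=0.5):
    """X: (n, d) features; Y_cand: (n, q) binary candidate-label matrix.
    Returns P: (n, q) confidence of each candidate label being relevant."""
    S_feat = cosine_sim(X)                     # feature-space similarity
    S_lab = cosine_sim(Y_cand.astype(float))   # candidate-label overlap
    S = alpha * S_feat + (1 - alpha) * S_lab   # collaborated similarity
    np.fill_diagonal(S, 0.0)
    S = S / (S.sum(axis=1, keepdims=True) + 1e-12)  # row-normalize

    # Start from uniform confidence over each instance's candidate set.
    P = Y_cand / np.maximum(Y_cand.sum(axis=1, keepdims=True), 1)
    for _ in range(n_iters):
        P = S @ P          # borrow confidence from similar instances
        P = P * Y_cand     # non-candidate labels stay at zero
        P = P / (P.sum(axis=1, keepdims=True) + 1e-12)  # renormalize
    return P

# Usage: P = estimate_confidences(X, Y_cand); a multi-label predictor can then
# be fit against P instead of the noisy candidate matrix, and the two steps
# alternated, mirroring the reciprocal reinforcement described in the abstract.
```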
Related papers
- Exploiting Conjugate Label Information for Multi-Instance Partial-Label Learning [61.00359941983515]
Multi-instance partial-label learning (MIPL) addresses scenarios where each training sample is represented as a multi-instance bag associated with a candidate label set containing one true label and several false positives.
ELIMIPL exploits the conjugate label information to improve the disambiguation performance.
arXiv Detail & Related papers (2024-08-26T15:49:31Z)
- Scalable Label Distribution Learning for Multi-Label Classification [43.52928088881866]
Multi-label classification (MLC) refers to the problem of tagging a given instance with a set of relevant labels.
Most existing MLC methods are based on the assumption that the correlation of two labels in each label pair is symmetric.
Most existing methods design learning processes whose cost grows with the number of labels, which makes computational complexity a bottleneck when scaling up to large-scale output spaces.
arXiv Detail & Related papers (2023-11-28T06:52:53Z)
- Disambiguated Attention Embedding for Multi-Instance Partial-Label Learning [68.56193228008466]
In many real-world tasks, the concerned objects can be represented as a multi-instance bag associated with a candidate label set.
Existing MIPL approaches follow the instance-space paradigm, assigning the bag's augmented candidate label set to each instance and aggregating bag-level labels from instance-level labels.
We propose an intuitive algorithm named DEMIPL, i.e., Disambiguated attention Embedding for Multi-Instance Partial-Label learning (a generic attention-pooling sketch, illustrating instance-to-bag aggregation, appears after this list).
arXiv Detail & Related papers (2023-05-26T13:25:17Z)
- Understanding Label Bias in Single Positive Multi-Label Learning [20.09309971112425]
It is possible to train effective multi-label classifiers using only one positive label per image.
Standard benchmarks for SPML are derived from traditional multi-label classification datasets.
This work introduces protocols for studying label bias in SPML and provides new empirical results.
arXiv Detail & Related papers (2023-05-24T21:41:08Z)
- Deep Partial Multi-Label Learning with Graph Disambiguation [27.908565535292723]
We propose a novel deep Partial multi-Label model with grAph-disambIguatioN (PLAIN)
Specifically, we introduce the instance-level and label-level similarities to recover label confidences.
At each training epoch, labels are propagated on the instance and label graphs to produce relatively accurate pseudo-labels (a propagation sketch appears after this list).
arXiv Detail & Related papers (2023-05-10T04:02:08Z)
- Complementary to Multiple Labels: A Correlation-Aware Correction Approach [65.59584909436259]
We show theoretically how the transition matrix estimated in multi-class complementary-label learning (CLL) could be distorted in multi-labeled cases.
We propose a two-step method to estimate the transition matrix from candidate labels.
arXiv Detail & Related papers (2023-02-25T04:48:48Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method named SMILE, i.e., Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision (a sketch of both ideas appears after this list).
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent partial label learning and assume that each example is associated with a latent label distribution constituted by the real number of each label.
arXiv Detail & Related papers (2021-10-25T12:50:26Z)
- Recovering Accurate Labeling Information from Partially Valid Data for Effective Multi-Label Learning [23.665227794132566]
Partial Multi-label Learning (PML) aims to induce the multi-label predictor from datasets with noisy supervision.
We develop a novel two-stage PML method whose first stage estimates the label enrichment with unconstrained label propagation.
Experimental results validate that the proposed method outperforms the state-of-the-art PML methods.
arXiv Detail & Related papers (2020-06-20T04:13:24Z)
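As promised in the DEMIPL entry above, here is a minimal attention-pooling sketch in the style of standard attention-based multi-instance aggregation: instances in a bag are scored, softmax-normalized, and averaged into one bag-level embedding. The projection sizes and scoring form are illustrative assumptions, not the authors' disambiguation attention.

```python
# Textbook attention pooling (without gating) over a multi-instance bag,
# producing a single bag-level embedding instead of aggregating labels.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(bag, w, V):
    """bag: (m, d) instances; V: (h, d) projection; w: (h,) scoring vector.
    Returns a (d,) bag embedding as an attention-weighted instance average."""
    scores = w @ np.tanh(V @ bag.T)   # (m,) unnormalized attention scores
    a = softmax(scores)               # attention weights over instances
    return a @ bag                    # (d,) weighted sum of instances

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 16))        # a bag of 5 instances
V = rng.normal(size=(8, 16))
w = rng.normal(size=8)
z = attention_pool(bag, w, V)         # bag-level embedding for the classifier
```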
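For the PLAIN entry above, a minimal sketch of one propagation epoch under assumed mixing weights: candidate-label confidences are smoothed over an instance-similarity graph and a label-correlation graph, then masked to the candidate set to yield pseudo-labels. This is generic graph label propagation, not the paper's exact update.

```python
# One assumed epoch of graph disambiguation over instance and label graphs.
import numpy as np

def row_normalize(A):
    return A / (A.sum(axis=1, keepdims=True) + 1e-12)

def propagate_epoch(P, W_inst, W_lab, Y_cand, lam=0.9):
    """P: (n, q) current confidences; W_inst: (n, n) instance graph;
    W_lab: (q, q) label graph; Y_cand: (n, q) candidate mask."""
    P = lam * (row_normalize(W_inst) @ P) + (1 - lam) * P    # instance graph
    P = lam * (P @ row_normalize(W_lab).T) + (1 - lam) * P   # label graph
    P = P * Y_cand                                           # keep candidates only
    return row_normalize(P)                                  # pseudo-labels

# W_inst can be, e.g., a kNN similarity graph over features; a simple choice
# for W_lab is candidate co-occurrence, Y_cand.T @ Y_cand.
```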
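For the SPML entry above ("Acknowledging the Unknown..."), a sketch of an entropy-maximization term plus an asymmetric pseudo-labeling rule. The thresholds and loss composition are assumptions, not the paper's values: unannotated labels are pushed toward maximum-entropy predictions, and pseudo-labels are assigned with a stricter bar for positives than for negatives.

```python
# Assumed EM loss and asymmetric pseudo-labeling for single-positive labels.
import numpy as np

def em_loss(p, annotated):
    """p: (q,) predicted probabilities; annotated: (q,) 1 for the single
    observed positive. BCE on the positive, minus entropy of the rest
    (so minimizing the loss maximizes entropy on unannotated labels)."""
    eps = 1e-12
    pos = -np.sum(annotated * np.log(p + eps))
    ent = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return pos - np.sum((1 - annotated) * ent)

def asymmetric_pseudo_labels(p, annotated, pos_thr=0.95, neg_thr=0.1):
    """Unannotated labels become pseudo-positive only above a high threshold,
    pseudo-negative below a loose one; the rest stay ignored (returned as -1)."""
    y = np.full_like(p, -1.0)
    unann = (1 - annotated).astype(bool)
    y[unann & (p >= pos_thr)] = 1.0
    y[unann & (p <= neg_thr)] = 0.0
    y[annotated.astype(bool)] = 1.0
    return y
```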