Recovering Accurate Labeling Information from Partially Valid Data for
Effective Multi-Label Learning
- URL: http://arxiv.org/abs/2006.11488v1
- Date: Sat, 20 Jun 2020 04:13:24 GMT
- Title: Recovering Accurate Labeling Information from Partially Valid Data for
Effective Multi-Label Learning
- Authors: Ximing Li, Yang Wang
- Abstract summary: Partial Multi-label Learning (PML) aims to induce the multi-label predictor from datasets with noisy supervision.
We develop a novel two-stage PML method, namely Partial Multi-Label Learning with Label Enrichment-Recovery, where the first stage estimates the label enrichment with unconstrained label propagation.
Experimental results validate that the proposed method outperforms the state-of-the-art PML methods.
- Score: 23.665227794132566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial Multi-label Learning (PML) aims to induce the multi-label predictor
from datasets with noisy supervision, where each training instance is
associated with several candidate labels, only some of which are valid. To
address this noise, the existing PML methods basically recover the ground-truth
labels by leveraging the ground-truth confidence of each candidate label, i.e.,
the likelihood of a candidate label being a ground-truth one. However, they
neglect the information from non-candidate labels, which potentially
contributes to the ground-truth label recovery. In this paper, we propose to
recover the ground-truth labels, i.e., to estimate the ground-truth
confidences, from the label enrichment, composed of the relevance degrees of
candidate labels and the irrelevance degrees of non-candidate labels. Upon this
observation, we further develop a novel two-stage PML method, namely Partial
Multi-Label Learning with Label Enrichment-Recovery, which in the first stage
estimates the label enrichment with unconstrained label propagation, and then
jointly learns the ground-truth confidences and the multi-label predictor given
the label enrichment. Experimental results validate that the proposed method
outperforms the state-of-the-art PML methods.
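
To make the two-stage recipe concrete, below is a minimal numpy sketch of the pipeline shape the abstract describes. It is an illustration under assumptions, not the authors' algorithm: the kNN graph construction, the signed seeding that yields relevance/irrelevance degrees, and the ridge-regression second stage are all simplifications, and every hyperparameter is made up.

```python
# A minimal sketch of the two-stage idea, NOT the paper's exact algorithm:
# stage 1 runs signed ("unconstrained") label propagation to build the label
# enrichment; stage 2 turns the enrichment into ground-truth confidences and
# fits a simple linear predictor. All hyperparameters are illustrative.
import numpy as np

def knn_affinity(X, k=10, sigma=1.0):
    """Symmetric kNN affinity matrix with a Gaussian kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    idx = np.argsort(-W, axis=1)[:, :k]            # keep k nearest neighbours
    mask = np.zeros_like(W, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=1)
    return np.where(mask | mask.T, W, 0.0)          # symmetrize

def label_enrichment(X, Y_candidate, alpha=0.5):
    """Stage 1: signed label propagation. Y_candidate is n x q with 1 for
    candidate labels and 0 otherwise; the signed seed (+1 candidates,
    -1 non-candidates) lets propagation produce both relevance degrees and
    irrelevance degrees."""
    W = knn_affinity(X)
    D = np.diag(1.0 / np.sqrt(W.sum(1) + 1e-12))
    S = D @ W @ D                                   # normalized similarity
    seed = 2.0 * Y_candidate - 1.0
    n = X.shape[0]
    return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, seed)

def stage_two(X, E, Y_candidate, lam=1.0):
    """Stage 2 (simplified): squash the enrichment into confidences on the
    candidate set only, then fit a ridge-regression predictor to them."""
    conf = 1.0 / (1.0 + np.exp(-E)) * Y_candidate   # non-candidates get 0
    d = X.shape[1]
    W_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ conf)
    return conf, W_hat

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
Y = (rng.random((60, 4)) < 0.4).astype(float)       # toy candidate label sets
E = label_enrichment(X, Y)
conf, W_hat = stage_two(X, E, Y)
print(conf.shape, W_hat.shape)                      # (60, 4) (5, 4)
```

The signed seed is what makes the propagation "unconstrained": non-candidate labels contribute negative evidence instead of being clamped to zero, which is exactly the information the abstract argues existing methods discard.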
Related papers
- Leveraging Label Semantics and Meta-Label Refinement for Multi-Label Question Classification [11.19022605804112]
This paper introduces RR2QC, a novel Retrieval Reranking method for multi-label Question Classification.
It uses label semantics and meta-label refinement to enhance personalized learning and resource recommendation.
Experimental results demonstrate that RR2QC outperforms existing classification methods in Precision@k and F1 scores.
arXiv Detail & Related papers (2024-11-04T06:27:14Z)
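
RR2QC's components are only named in the summary above, so the following is just the generic retrieve-then-rerank skeleton such methods build on; both scoring functions are placeholders, not the paper's label-semantics encoder or meta-label refinement.

```python
# Generic retrieve-then-rerank sketch for multi-label classification;
# the scorers below are stand-ins, not RR2QC's actual components.
import numpy as np

def retrieve(q_vec, label_vecs, k=5):
    """Coarse stage: top-k labels by cosine similarity to the question."""
    sims = label_vecs @ q_vec
    sims /= (np.linalg.norm(label_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-12)
    return np.argsort(-sims)[:k]

def rerank(q_vec, label_vecs, candidates):
    """Fine stage: rescore only the retrieved candidates (placeholder scorer)."""
    scores = label_vecs[candidates] @ q_vec         # stand-in for a cross-encoder
    return candidates[np.argsort(-scores)]

rng = np.random.default_rng(1)
labels = rng.normal(size=(100, 16))                 # 100 toy label embeddings
q = rng.normal(size=16)
print(rerank(q, labels, retrieve(q, labels)))
```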
- Online Multi-Label Classification under Noisy and Changing Label Distribution
We propose an online multi-label classification algorithm for the Noisy and Changing Label Distribution (NCLD) setting.
The objective is to simultaneously model the label scoring and the label ranking for high accuracy, whose robustness to NCLD benefits from three novel components.
arXiv Detail & Related papers (2024-10-03T11:16:43Z)
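
The summary above says scoring and ranking are modeled jointly; one standard way to couple them (an assumption here, not the paper's actual objective) is a per-instance BCE scoring term plus a pairwise hinge ranking term:

```python
# Joint scoring + ranking loss sketch: BCE keeps the scores calibrated, the
# hinge term keeps every relevant label scored above every irrelevant one.
import numpy as np

def joint_loss(scores, y, margin=1.0, beta=0.5):
    """scores: (q,) raw scores for one instance; y: (q,) binary labels."""
    p = 1.0 / (1.0 + np.exp(-scores))
    bce = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)).mean()
    pos, neg = scores[y == 1], scores[y == 0]
    if len(pos) == 0 or len(neg) == 0:
        return bce
    gaps = margin - (pos[:, None] - neg[None, :])   # hinge on every pos/neg pair
    rank = np.maximum(gaps, 0.0).mean()
    return beta * bce + (1 - beta) * rank

print(joint_loss(np.array([2.0, -1.0, 0.5]), np.array([1.0, 0.0, 1.0])))
```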
- Partial-Label Regression [54.74984751371617]
Partial-label learning is a weakly supervised learning setting that allows each training example to be annotated with a set of candidate labels.
Previous studies on partial-label learning focused only on the classification setting, where candidate labels are all discrete.
In this paper, we provide the first attempt to investigate partial-label regression, where each training example is annotated with a set of real-valued candidate labels.
arXiv Detail & Related papers (2023-06-15T09:02:24Z)
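
For a feel of the setting, here is an identification-style baseline for partial-label regression, sketched from the problem statement alone: each example regresses toward whichever real-valued candidate currently sits closest to the model's prediction. Whether this matches any method in the paper is an assumption.

```python
# Identification-style partial-label regression sketch (illustrative only).
import numpy as np

def identification_loss(pred, candidates):
    """pred: (n,) predictions; candidates: (n, s) real-valued candidate sets."""
    nearest = candidates[np.arange(len(pred)),
                         np.abs(candidates - pred[:, None]).argmin(axis=1)]
    return ((pred - nearest) ** 2).mean()

rng = np.random.default_rng(2)
cands = rng.normal(size=(8, 3))                     # 3 candidates per example
pred = rng.normal(size=8)
print(identification_loss(pred, cands))
```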
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
During disambiguation, existing works usually do not fully exploit the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
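
Since non-candidate labels in PLL are guaranteed wrong, they can directly supervise a complementary objective. The sketch below shares one softmax between an ordinary loss (mass onto candidates) and a complementary loss (mass off non-candidates); the paper's adversarial coupling between the two classifiers is more involved and is omitted here.

```python
# Minimal ordinary + complementary loss pair over shared logits (a
# simplification, not the paper's adversarial formulation).
import numpy as np

def pll_losses(scores, candidate_mask):
    """scores: (q,) logits; candidate_mask: (q,) 1 for candidate labels."""
    p = np.exp(scores - scores.max())
    p /= p.sum()
    ordinary = -np.log((p * candidate_mask).sum() + 1e-12)
    complementary = -np.log(1 - p[candidate_mask == 0] + 1e-12).mean()
    return ordinary, complementary

print(pll_losses(np.array([1.0, 0.2, -0.5, 0.1]),
                 np.array([1.0, 1.0, 0.0, 0.0])))
```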
- Deep Partial Multi-Label Learning with Graph Disambiguation [27.908565535292723]
We propose a novel deep Partial multi-Label model with grAph-disambIguatioN (PLAIN).
Specifically, we introduce the instance-level and label-level similarities to recover label confidences.
At each training epoch, labels are propagated on the instance and label graphs to produce relatively accurate pseudo-labels.
arXiv Detail & Related papers (2023-05-10T04:02:08Z)
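
From the summary alone, the per-epoch disambiguation step can be pictured as smoothing the current confidences over an instance graph and a label graph, then clamping them back onto the candidate sets; the blending coefficients and clamping rule below are assumptions, not PLAIN's actual update.

```python
# One propagation step over instance- and label-level similarities,
# producing per-epoch pseudo-labels (illustrative, not PLAIN's exact rule).
import numpy as np

def propagate(F, S_inst, S_label, Y_candidate, alpha=0.7, beta=0.7):
    """F: (n, q) confidences; S_inst: (n, n) and S_label: (q, q) are
    row-normalized similarity matrices."""
    F = alpha * S_inst @ F + (1 - alpha) * F        # instance-level smoothing
    F = beta * F @ S_label.T + (1 - beta) * F       # label-level smoothing
    return np.clip(F * Y_candidate, 0.0, 1.0)       # pseudo-labels live on candidates

rng = np.random.default_rng(3)
n, q = 6, 4
Y = (rng.random((n, q)) < 0.5).astype(float)        # toy candidate label matrix
S_inst = rng.random((n, n)); S_inst /= S_inst.sum(1, keepdims=True)
S_label = rng.random((q, q)); S_label /= S_label.sum(1, keepdims=True)
print(propagate(Y.copy(), S_inst, S_label, Y).round(2))
```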
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
This paper proposes a label distribution perspective for PU learning.
Under this view, we pursue consistency between the predicted and the ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
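
In binary PU learning the "label distribution" collapses to the positive-class prior pi, so the consistency idea can be reduced (aggressively, as an illustration only) to penalizing the gap between the mean predicted positive probability on unlabeled data and pi:

```python
# Minimal label-distribution consistency term for PU learning.
import numpy as np

def dist_consistency(probs, prior):
    """probs: (n,) predicted P(y=1|x) on unlabeled data; prior: known pi."""
    return (probs.mean() - prior) ** 2

rng = np.random.default_rng(4)
print(dist_consistency(rng.random(100), prior=0.3))
```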
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method named SMILE, i.e., Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
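
The EM loss itself is easy to state: keep ordinary supervision on the single observed positive, and subtract the binary entropy of the predictions on unannotated labels so the model is pushed toward maximal uncertainty about them rather than toward negatives. A minimal per-instance version (loss weights and the APL pseudo-labeling stage omitted):

```python
# Sketch of an entropy-maximization loss for SPML: BCE on the observed
# positive, minus the entropy of the unannotated labels' predictions.
import numpy as np

def em_spml_loss(probs, annotated_pos):
    """probs: (q,) sigmoid outputs; annotated_pos: (q,) 1 for the single
    observed positive label, 0 for unannotated ones."""
    eps = 1e-12
    pos_bce = -np.log(probs[annotated_pos == 1] + eps).sum()
    p_u = probs[annotated_pos == 0]
    entropy = -(p_u * np.log(p_u + eps) + (1 - p_u) * np.log(1 - p_u + eps)).sum()
    return pos_bce - entropy          # maximizing entropy = subtracting it

print(em_spml_loss(np.array([0.9, 0.4, 0.6, 0.2]),
                   np.array([1.0, 0.0, 0.0, 0.0])))
```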
- Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider the instance-dependent case and assume that each example is associated with a latent label distribution constituted by a real-valued degree for each label.
arXiv Detail & Related papers (2021-10-25T12:50:26Z)
- Partial Multi-label Learning with Label and Feature Collaboration [21.294791188490056]
Partial multi-label learning (PML) models the scenario where each training instance is annotated with a set of candidate labels.
To achieve a credible predictor on PML data, we propose PML-LFC (Partial Multi-label Learning with Label and Feature Collaboration).
PML-LFC estimates the confidence values of relevant labels for each instance using the similarity from both the label and feature spaces.
arXiv Detail & Related papers (2020-03-17T08:34:45Z)
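
As a final illustration of confidence estimation from the feature side, here is a deliberately crude stand-in for PML-LFC's idea: score each candidate label by how often an instance's feature-space neighbours also list it as a candidate. The real method couples the label- and feature-space similarities in a joint optimization, which this neighbour-voting sketch does not attempt.

```python
# Neighbour-voting confidence estimation over candidate label sets
# (a rough illustration, not PML-LFC's actual model).
import numpy as np

def lfc_confidence(X, Y_candidate, k=3):
    """X: (n, d) features; Y_candidate: (n, q) binary candidate sets."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]            # feature-space neighbours
    votes = Y_candidate[nbrs].mean(axis=1)          # label agreement among them
    conf = votes * Y_candidate                      # only candidates can score
    return conf / (conf.sum(axis=1, keepdims=True) + 1e-12)

rng = np.random.default_rng(5)
X = rng.normal(size=(10, 4))
Y = (rng.random((10, 5)) < 0.5).astype(float)
print(lfc_confidence(X, Y).round(2))
```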