Label Distribution Learning via Implicit Distribution Representation
- URL: http://arxiv.org/abs/2209.13824v1
- Date: Wed, 28 Sep 2022 04:13:53 GMT
- Title: Label Distribution Learning via Implicit Distribution Representation
- Authors: Zhuoran Zheng and Xiuyi Jia
- Abstract summary: In this paper, we introduce the implicit distribution in the label distribution learning framework to characterize the uncertainty of each label value.
Specifically, we use deep implicit representation learning to construct a label distribution matrix with Gaussian prior constraints.
Each row component of the label distribution matrix is transformed into a standard label distribution form by using the self-attention algorithm.
- Score: 12.402054374952485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In contrast to multi-label learning, label distribution learning
characterizes the polysemy of examples by a label distribution to represent
richer semantics. In label distribution learning, training data is mainly
produced by manual annotation or generated by label enhancement algorithms.
Unfortunately, the complexity of the manual annotation task and the inaccuracy
of label enhancement algorithms introduce noise and uncertainty into the label
distribution training set. To alleviate this
problem, we introduce the implicit distribution in the label distribution
learning framework to characterize the uncertainty of each label value.
Specifically, we use deep implicit representation learning to construct a label
distribution matrix with Gaussian prior constraints, where each row component
corresponds to the distribution estimate of one label value, and each row
component is constrained by a Gaussian prior to mitigate the interference of
noise and uncertainty in the label distribution dataset. Finally, each
row component of the label distribution matrix is transformed into a standard
label distribution form by using the self-attention algorithm. In addition,
several techniques with a regularization effect are applied during the
training phase to improve the performance of the model.
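The abstract describes this pipeline only at a high level; below is a minimal PyTorch-style sketch of one possible reading, in which an encoder emits per-label Gaussian parameters (the rows of the label distribution matrix), a KL term to a standard Gaussian prior regularizes each row, and self-attention plus a softmax converts the rows into a standard label distribution. All layer sizes, the encoder, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitLDL(nn.Module):
    """Sketch of: features -> label distribution matrix (one Gaussian per
    label value) -> self-attention over rows -> standard label distribution.
    All sizes and layer choices here are illustrative assumptions."""

    def __init__(self, in_dim: int, num_labels: int, row_dim: int = 16):
        super().__init__()
        self.num_labels, self.row_dim = num_labels, row_dim
        # Backbone mapping an instance to per-label Gaussian parameters.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu_head = nn.Linear(128, num_labels * row_dim)
        self.logvar_head = nn.Linear(128, num_labels * row_dim)
        # Self-attention over the rows of the label distribution matrix.
        self.attn = nn.MultiheadAttention(row_dim, num_heads=2, batch_first=True)
        self.score = nn.Linear(row_dim, 1)  # row -> scalar label score

    def forward(self, x):
        h = self.encoder(x)
        mu = self.mu_head(h).view(-1, self.num_labels, self.row_dim)
        logvar = self.logvar_head(h).view(-1, self.num_labels, self.row_dim)
        # Reparameterized sample: each row is a draw from N(mu, sigma^2),
        # i.e. an implicit distribution estimate for one label value.
        row = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL(N(mu, sigma^2) || N(0, I)) keeps each row near the Gaussian prior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        # Self-attention mixes rows; softmax yields a valid label distribution.
        mixed, _ = self.attn(row, row, row)
        dist = F.softmax(self.score(mixed).squeeze(-1), dim=-1)
        return dist, kl

# Usage: KL-regularized fit to an annotated label distribution.
model = ImplicitLDL(in_dim=32, num_labels=5)
x = torch.randn(4, 32)
target = F.softmax(torch.randn(4, 5), dim=-1)  # toy ground-truth distribution
pred, kl = model(x)
loss = F.kl_div(pred.log(), target, reduction="batchmean") + 0.1 * kl
loss.backward()
```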
Related papers
- Towards Better Performance in Incomplete LDL: Addressing Data Imbalance [48.54894491724677]
We propose Incomplete and Imbalance Label Distribution Learning (I²LDL), a framework that simultaneously handles incomplete labels and imbalanced label distributions.
Our method decomposes the label distribution matrix into a low-rank component for frequent labels and a sparse component for rare labels, effectively capturing the structure of both head and tail labels (a generic sketch of this decomposition appears after this list).
arXiv Detail & Related papers (2024-10-17T14:12:57Z) - Online Multi-Label Classification under Noisy and Changing Label Distribution [9.17381554071824]
We propose an online multi-label classification algorithm under Noisy and Changing Label Distribution (NCLD).
The objective is to simultaneously model label scoring and label ranking for high accuracy, with robustness to NCLD provided by three novel components.
arXiv Detail & Related papers (2024-10-03T11:16:43Z) - Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z) - Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label
Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution
Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Accordingly, we propose to pursue consistency between the predicted and ground-truth label distributions (a minimal sketch of such a consistency term appears after this list).
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - Label distribution learning via label correlation grid [9.340734188957727]
We propose a Label Correlation Grid (LCG) to model the uncertainty of label relationships.
Our network learns the LCG to accurately estimate the label distribution for each instance.
arXiv Detail & Related papers (2022-10-15T03:58:15Z) - Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent partial label learning and assume that each example is associated with a latent label distribution constituted by the real-valued degree of each label.
arXiv Detail & Related papers (2021-10-25T12:50:26Z) - Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced
Semi-Supervised Learning [80.05441565830726]
This paper addresses imbalanced semi-supervised learning, where heavily biased pseudo-labels can harm the model performance.
Motivated by this observation, we propose a general pseudo-labeling framework to address the bias.
We term the novel pseudo-labeling framework for imbalanced SSL as Distribution-Aware Semantics-Oriented (DASO) Pseudo-label.
arXiv Detail & Related papers (2021-06-10T11:58:25Z) - Probabilistic Decoupling of Labels in Classification [4.865747672937677]
We develop a principled, probabilistic, unified approach to non-standard classification tasks.
We train a classifier on the given labels to predict the label-distribution.
We then infer the underlying class-distributions by variationally optimizing a model of label-class transitions (a toy version of this inference step appears after this list).
arXiv Detail & Related papers (2020-06-16T10:07:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.