Inaccurate Label Distribution Learning
- URL: http://arxiv.org/abs/2302.13000v2
- Date: Sat, 26 Aug 2023 06:20:33 GMT
- Title: Inaccurate Label Distribution Learning
- Authors: Zhiqiang Kou, Yuheng Jia, Jing Wang, Xin Geng
- Abstract summary: Label distribution learning (LDL) trains a model to predict the relevance of a set of labels (called label distribution (LD)) to an instance.
This paper investigates the problem of inaccurate LDL, i.e., developing an LDL model with noisy LDs.
- Score: 56.89970970094207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Label distribution learning (LDL) trains a model to predict the relevance of
a set of labels (called label distribution (LD)) to an instance. Previous
LDL methods all assume that the LDs of the training instances are accurate.
However, annotating highly accurate LDs for training instances is
time-consuming and expensive, and in practice the collected LDs are often
inaccurate and perturbed by annotation errors. For the first time, this paper
investigates the problem of inaccurate LDL, i.e., developing an LDL model with
noisy LDs. We assume that the noisy LD matrix is a linear combination of an
ideal LD matrix and a sparse noise matrix. Consequently, the problem of
inaccurate LDL becomes an inverse problem, where the objective is to recover
the ideal LD and noise matrices from the noisy LDs. We hypothesize that the
ideal LD matrix is low-rank due to the correlation of labels and utilize the
local geometric structure of instances captured by a graph to assist in
recovering the ideal LD. This is based on the premise that similar instances
are likely to share the same LD. The proposed model is finally formulated as a
graph-regularized low-rank and sparse decomposition problem and numerically
solved by the alternating direction method of multipliers. Furthermore, a
specialized objective function is utilized to induce an LD predictive model in
LDL, taking into account the recovered label distributions. Extensive
experiments conducted on multiple datasets from various real-world tasks
demonstrate the efficacy of the proposed approach.
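The low-rank and sparse decomposition at the heart of the model can be illustrated with a robust-PCA-style sketch: the noisy LD matrix D is split into a low-rank part L (the ideal LDs) and a sparse part S (annotation noise) by ADMM. This is a minimal sketch only; the paper's graph regularizer is omitted, and the parameter choices (`lam`, `mu`, iteration count) are common illustrative defaults, not the paper's settings.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Elementwise soft thresholding: proximal operator of the l1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(D, lam=None, mu=None, n_iter=200):
    """Split a noisy matrix D into low-rank L plus sparse S via ADMM
    (robust-PCA style; the paper's full model adds a graph regularizer)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard RPCA weight
    if mu is None:
        mu = m * n / (4.0 * np.abs(D).sum())  # common penalty heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # scaled dual variable
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
    return L, S

# Usage: a rank-1 "ideal LD" matrix corrupted by a few sparse spikes
rng = np.random.default_rng(0)
ideal = np.outer(rng.random(40), rng.random(8))
noise = np.zeros_like(ideal)
noise[rng.integers(0, 40, 10), rng.integers(0, 8, 10)] = 0.5
L, S = decompose(ideal + noise)
```

The decomposition is an inverse problem in the same sense as the abstract: only the corrupted sum is observed, and the low-rank and sparsity priors make the split identifiable.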
Related papers
- Towards Better Performance in Incomplete LDL: Addressing Data Imbalance [48.54894491724677]
We propose Incomplete and Imbalance Label Distribution Learning (I²LDL), a framework that simultaneously handles incomplete labels and imbalanced label distributions.
Our method decomposes the label distribution matrix into a low-rank component for frequent labels and a sparse component for rare labels, effectively capturing the structure of both head and tail labels.
arXiv Detail & Related papers (2024-10-17T14:12:57Z)
- Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z)
- Data Augmentation For Label Enhancement [45.3351754830424]
Label enhancement (LE) has emerged to recover label distributions (LDs) from logical labels.
We propose a novel supervised LE dimensionality reduction approach, which projects the original data into a lower dimensional feature space.
The results show that our method consistently outperforms the other five comparing approaches.
arXiv Detail & Related papers (2023-03-21T09:36:58Z)
- Label Distribution Learning from Logical Label [19.632157794117553]
Label distribution learning (LDL) is an effective method to predict the label description degree (a.k.a. label distribution) of a sample.
However, annotating label distributions for training samples is extremely costly.
We propose a novel method to learn an LDL model directly from the logical label, which unifies LE and LDL into a joint model.
arXiv Detail & Related papers (2023-03-13T04:31:35Z)
- Full Kullback-Leibler-Divergence Loss for Hyperparameter-free Label Distribution Learning [3.0745536448480326]
Label Distribution Learning (LDL) is a technique for stabilizing classification and regression problems.
The main idea is the joint regression of the label distribution and its expectation value.
We introduce a loss function for DLDL whose components are completely defined by Kullback-Leibler divergences.
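A loss built entirely from Kullback-Leibler divergences can be sketched minimally as below; this is a generic KL loss between a predicted and an annotated label distribution, offered as an illustration of the idea rather than the paper's exact DLDL formulation.

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions; clipping avoids log(0)
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Usage: penalize a predicted label distribution against the annotated one
target = np.array([0.7, 0.2, 0.1])
pred = np.array([0.6, 0.3, 0.1])
loss = kl_div(target, pred)
```

KL divergence is non-negative and zero only when the two distributions match, which makes it a natural fit for comparing label distributions without extra scale hyperparameters.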
arXiv Detail & Related papers (2022-09-05T17:01:46Z)
- Solving Multistage Stochastic Linear Programming via Regularized Linear Decision Rules: An Application to Hydrothermal Dispatch Planning [77.34726150561087]
We propose a novel regularization scheme for linear decision rules (LDR) based on AdaSO (the adaptive least absolute shrinkage and selection operator).
Experiments show that the overfit threat is non-negligible when using the classical non-regularized LDR to solve MSLP.
For the LHDP problem, our analysis highlights the following benefits of the proposed framework in comparison to the non-regularized benchmark.
arXiv Detail & Related papers (2021-10-07T02:36:14Z)
- Bidirectional Loss Function for Label Enhancement and Distribution Learning [23.61708127340584]
Two challenges exist in LDL: how to address the dimensional gap problem during the learning process and how to recover label distributions from logical labels.
This study considers bidirectional projections function which can be applied in LE and LDL problems simultaneously.
Experiments on several real-world datasets are carried out to demonstrate the superiority of the proposed method for both LE and LDL.
arXiv Detail & Related papers (2020-07-07T03:02:54Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.