Bidirectional Loss Function for Label Enhancement and Distribution
Learning
- URL: http://arxiv.org/abs/2007.03181v1
- Date: Tue, 7 Jul 2020 03:02:54 GMT
- Title: Bidirectional Loss Function for Label Enhancement and Distribution
Learning
- Authors: Xinyuan Liu, Jihua Zhu, Qinghai Zheng, Zhongyu Li, Ruixin Liu and Jun
Wang
- Abstract summary: Two challenges exist in LDL: how to address the dimensional gap problem during the learning process and how to recover label distributions from logical labels.
This study considers a bidirectional projection function that can be applied to LE and LDL problems simultaneously.
Experiments on several real-world datasets are carried out to demonstrate the superiority of the proposed method for both LE and LDL.
- Score: 23.61708127340584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Label distribution learning (LDL) is an interpretable and general learning
paradigm that has been applied in many real-world applications. In contrast to
the simple logical vector in single-label learning (SLL) and multi-label
learning (MLL), LDL assigns labels with a description degree to each instance.
In practice, two challenges exist in LDL, namely, how to address the
dimensional gap problem during the learning process of LDL and how to exactly
recover label distributions from existing logical labels, i.e., Label
Enhancement (LE). Most existing LDL and LE algorithms ignore the fact that the
dimension of the input matrix is much higher than that of the output one, so
the unidirectional projection acts as a dimensionality reduction: the valuable
information hidden in the feature space is lost during the mapping. To this
end, this study proposes a bidirectional projection function that can be
applied to the LE and LDL problems simultaneously. More specifically, the
novel loss function considers not only the mapping errors generated by
projecting the input space into the output space but also the reconstruction
errors generated by projecting the output space back into the input space. The
loss therefore encourages the input data to be reconstructable from the output
data, so more accurate results are expected. Finally, experiments on several
real-world datasets demonstrate the superiority of the proposed method for
both LE and LDL.
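To make the abstract's idea concrete, below is a minimal numpy sketch of a bidirectional projection loss, assuming a single linear projection W whose transpose is reused for the backward reconstruction term and squared Frobenius-norm penalties; the paper's exact objective, regularizers, and optimizer may differ.

```python
# Minimal sketch (not necessarily the authors' exact formulation):
#     L(W) = ||X W - D||_F^2 + lam * ||D W^T - X||_F^2
# X (n x d) holds features, D (n x c) holds label distributions,
# and one linear projection W (d x c) is shared by both directions.
import numpy as np


def bidirectional_loss(W, X, D, lam=1.0):
    """Forward mapping error plus backward reconstruction error."""
    forward = np.linalg.norm(X @ W - D, "fro") ** 2     # input -> output mapping error
    backward = np.linalg.norm(D @ W.T - X, "fro") ** 2  # output -> input reconstruction error
    return forward + lam * backward


def fit(X, D, lam=1.0, lr=1e-3, steps=500):
    """Plain gradient descent on the bidirectional loss (illustrative only)."""
    W = np.zeros((X.shape[1], D.shape[1]))
    for _ in range(steps):
        grad = 2 * X.T @ (X @ W - D) + 2 * lam * (D @ W.T - X).T @ D
        W -= lr * grad
    return W


# Toy usage: rows of D are normalized so each instance's description degrees sum to 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
D = np.abs(rng.normal(size=(100, 5)))
D /= D.sum(axis=1, keepdims=True)
W = fit(X, D)
print(bidirectional_loss(W, X, D))
```

With lam = 0 the objective reduces to the usual unidirectional least-squares mapping; the reconstruction term is what discourages the projection from discarding feature-space information.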
Related papers
- Towards Better Performance in Incomplete LDL: Addressing Data Imbalance [48.54894491724677]
We propose Incomplete and Imbalance Label Distribution Learning (I²LDL), a framework that simultaneously handles incomplete labels and imbalanced label distributions.
Our method decomposes the label distribution matrix into a low-rank component for frequent labels and a sparse component for rare labels, effectively capturing the structure of both head and tail labels (a generic low-rank plus sparse decomposition sketch follows the related-papers list below).
arXiv Detail & Related papers (2024-10-17T14:12:57Z)
- Tabular Transfer Learning via Prompting LLMs [52.96022335067357]
We propose a novel framework, Prompt to Transfer (P2T), that utilizes unlabeled (or heterogeneous) source data with large language models (LLMs).
P2T identifies a column feature in a source dataset that is strongly correlated with a target task feature to create examples relevant to the target task, thus creating pseudo-demonstrations for prompts.
arXiv Detail & Related papers (2024-08-09T11:30:52Z)
- Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z)
- Data Augmentation For Label Enhancement [45.3351754830424]
Label enhancement (LE) has emerged to recover Label Distribution (LD) from logical labels.
We propose a novel supervised LE dimensionality reduction approach, which projects the original data into a lower dimensional feature space.
The results show that our method consistently outperforms the other five compared approaches.
arXiv Detail & Related papers (2023-03-21T09:36:58Z)
- Label Distribution Learning from Logical Label [19.632157794117553]
Label distribution learning (LDL) is an effective method to predict the label description degree (a.k.a. label distribution) of a sample.
However, annotating label distributions for training samples is extremely costly.
We propose a novel method to learn an LDL model directly from the logical label, which unifies LE and LDL into a joint model.
arXiv Detail & Related papers (2023-03-13T04:31:35Z)
- Inaccurate Label Distribution Learning [56.89970970094207]
Label distribution learning (LDL) trains a model to predict the relevance of a set of labels (called label distribution (LD)) to an instance.
This paper investigates the problem of inaccurate LDL, i.e., developing an LDL model with noisy LDs.
arXiv Detail & Related papers (2023-02-25T06:23:45Z)
- TabMixer: Excavating Label Distribution Learning with Small-scale Features [10.498049147922258]
Label distribution learning (LDL) differs from multi-label learning in that it represents the polysemy of instances by transforming single-label values into descriptive degrees.
Unfortunately, the feature space of a label distribution dataset is affected by human factors and the inductive bias of the feature extractor, causing uncertainty in the feature space.
We model the uncertainty augmentation of the feature space to alleviate the problem in LDL tasks.
Our proposed algorithm is competitive with other LDL algorithms on several benchmarks.
arXiv Detail & Related papers (2022-10-25T09:18:15Z)
- Simple and Robust Loss Design for Multi-Label Learning with Missing Labels [14.7306301893944]
We propose two simple yet effective methods via robust loss design, based on the observation that a model can identify missing labels during training.
The first is a novel robust loss for negatives, namely the Hill loss, which re-weights negatives in the shape of a hill to alleviate the effect of false negatives.
The second is a self-paced loss correction (SPLC) method, which uses a loss derived from the maximum likelihood criterion under an approximate distribution of missing labels.
arXiv Detail & Related papers (2021-12-13T11:39:19Z)
- Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction [52.63803634033647]
Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce.
We develop a Gradient Imitation Reinforcement Learning method to encourage pseudo label data to imitate the gradient descent direction on labeled data.
We also propose a framework called GradLRE, which handles two major scenarios in low-resource relation extraction.
arXiv Detail & Related papers (2021-09-14T03:51:15Z)
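The I²LDL entry above describes splitting the label distribution matrix into a low-rank part for frequent labels and a sparse part for rare labels. The sketch below shows the generic low-rank plus sparse decomposition that such approaches typically build on, solved by alternating exact block updates (singular-value thresholding and soft-thresholding); it illustrates the general technique only, not that paper's algorithm, and the function names and parameter values are made up for the example.

```python
# Generic low-rank + sparse decomposition sketch:
#     minimize 0.5 * ||D - L - S||_F^2 + alpha * ||L||_* + beta * ||S||_1
# L captures shared (head/frequent) structure, S captures sparse (tail/rare) deviations.
# This is a generic illustration, not the I(2)LDL paper's actual method.
import numpy as np


def svt(M, tau):
    """Singular-value thresholding: exact minimizer of 0.5*||M - L||_F^2 + tau*||L||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt


def soft(M, tau):
    """Elementwise soft-thresholding: exact minimizer of 0.5*||M - S||_F^2 + tau*||S||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)


def decompose(D, alpha=0.5, beta=0.05, steps=200):
    """Alternating exact block minimization of the objective above."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(steps):
        L = svt(D - S, alpha)   # low-rank block update
        S = soft(D - L, beta)   # sparse block update
    return L, S


# Toy usage on a random "label distribution" matrix with rows summing to 1.
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(50, 10)))
D /= D.sum(axis=1, keepdims=True)
L, S = decompose(D)
print(np.linalg.matrix_rank(L, tol=1e-6), np.count_nonzero(S))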