Towards Better Performance in Incomplete LDL: Addressing Data Imbalance
- URL: http://arxiv.org/abs/2410.13579v1
- Date: Thu, 17 Oct 2024 14:12:57 GMT
- Title: Towards Better Performance in Incomplete LDL: Addressing Data Imbalance
- Authors: Zhiqiang Kou, Haoyuan Xuan, Jing Wang, Yuheng Jia, Xin Geng
- Abstract summary: We propose Incomplete and Imbalance Label Distribution Learning (I²LDL), a framework that simultaneously handles incomplete labels and imbalanced label distributions.
Our method decomposes the label distribution matrix into a low-rank component for frequent labels and a sparse component for rare labels, effectively capturing the structure of both head and tail labels.
- Score: 48.54894491724677
- Abstract: Label Distribution Learning (LDL) is a novel machine learning paradigm that addresses the problem of label ambiguity and has found widespread applications. Obtaining complete label distributions in real-world scenarios is challenging, which has led to the emergence of Incomplete Label Distribution Learning (InLDL). However, the existing InLDL methods overlook a crucial aspect of LDL data: the inherent imbalance in label distributions. To address this limitation, we propose **Incomplete and Imbalance Label Distribution Learning (I²LDL)**, a framework that simultaneously handles incomplete labels and imbalanced label distributions. Our method decomposes the label distribution matrix into a low-rank component for frequent labels and a sparse component for rare labels, effectively capturing the structure of both head and tail labels. We optimize the model using the Alternating Direction Method of Multipliers (ADMM) and derive generalization error bounds via Rademacher complexity, providing strong theoretical guarantees. Extensive experiments on 15 real-world datasets demonstrate the effectiveness and robustness of our proposed framework compared to existing InLDL methods.
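The paper's implementation is not reproduced on this page; as a hedged illustration of the low-rank-plus-sparse decomposition the abstract describes, here is a minimal robust-PCA-style ADMM sketch that splits a label distribution matrix D into a low-rank part L (head labels) and a sparse part S (tail labels). Function names, hyperparameter defaults, and the stopping rule are assumptions for illustration; the authors' algorithm additionally handles missing (incomplete) entries, which this sketch omits.

```python
import numpy as np

def svd_shrink(X, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_thresh(X, tau):
    """Elementwise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_plus_sparse(D, lam=None, mu=None, n_iter=500, tol=1e-7):
    """ADMM for  min ||L||_* + lam * ||S||_1  s.t.  D = L + S,
    splitting D into a low-rank part L (shared structure of frequent
    head labels) and a sparse part S (deviations of rare tail labels)."""
    D = np.asarray(D, dtype=float)
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L, S, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(n_iter):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)   # low-rank update
        S = soft_thresh(D - L + Y / mu, lam / mu)  # sparse update
        R = D - L - S                              # primal residual
        Y += mu * R                                # dual variable ascent
        if np.linalg.norm(R) <= tol * max(np.linalg.norm(D), 1.0):
            break
    return L, S
```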
Related papers
- Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
When the candidate labels themselves may be unreliable, the problem becomes Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL) that leverages unreliability-robust contrastive learning to help the model fortify against unreliable partial labels effectively.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Exploiting Multi-Label Correlation in Label Distribution Learning [0.0]
Label Distribution Learning (LDL) is a novel machine learning paradigm that assigns label distribution to each instance.
Recent studies disclosed that label distribution matrices are typically full-rank, posing challenges to works exploiting low-rank label correlation.
We introduce an auxiliary MLL process in LDL and capture low-rank label correlation on that MLL rather than LDL.
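As a rough sketch of the auxiliary-MLL idea: binarize the (typically full-rank) label distribution matrix into a multi-label matrix and model low-rank label correlation there instead. The thresholding rule below is an assumption for illustration, not necessarily the authors' construction.

```python
import numpy as np

def ldl_to_mll(D, thresh=None):
    """Binarize a row-stochastic label distribution matrix D (n x c)
    into an auxiliary multi-label matrix: keep each label whose
    description degree exceeds a threshold (here the uniform level
    1/c, a common heuristic -- the paper's exact rule may differ)."""
    n_labels = D.shape[1]
    thresh = 1.0 / n_labels if thresh is None else thresh
    return (D > thresh).astype(float)

# Toy usage: low-rank correlation would then be captured on Y, not D,
# since Y has far fewer distinct label patterns than the real-valued D.
rng = np.random.default_rng(0)
D = rng.dirichlet(np.full(6, 0.3), size=8)  # 8 instances, 6 labels
Y = ldl_to_mll(D)
```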
arXiv Detail & Related papers (2023-08-03T13:06:45Z)
- Contrastive Label Enhancement [13.628665406039609]
We propose Contrastive Label Enhancement (ConLE) to generate high-level features via a contrastive learning strategy.
We leverage the obtained high-level features to gain label distributions through a well-designed training strategy.
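ConLE's exact objective is not given in this summary; as a generic stand-in for a contrastive learning strategy, below is a standard InfoNCE/NT-Xent-style loss of the kind such methods typically build on (illustrative only, not the paper's loss).

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Contrastive loss for two views z1, z2 (n x d): row i of z1 is
    pulled toward row i of z2 (its positive pair) and pushed away from
    all other rows of z2 (the in-batch negatives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature               # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on diagonal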
arXiv Detail & Related papers (2023-05-16T14:53:07Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
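The summary suggests pseudo-labeling guided by the class distribution; a minimal sketch under that assumption follows, using per-class quotas rather than one global confidence cutoff. The quota rule is hypothetical, not CAP's exact procedure.

```python
import numpy as np

def class_aware_pseudo_labels(probs, class_props):
    """Assign pseudo-labels per class: for class j, mark the top
    q_j-fraction of unlabeled instances (ranked by predicted
    probability) as positive, instead of applying one global
    confidence cutoff that starves rare classes."""
    n, c = probs.shape
    Y = np.zeros((n, c), dtype=int)
    for j in range(c):
        k = max(int(round(class_props[j] * n)), 1)  # per-class quota
        top = np.argsort(-probs[:, j])[:k]          # most confident
        Y[top, j] = 1
    return Y
```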
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Label Distribution Learning from Logical Label [19.632157794117553]
Label distribution learning (LDL) is an effective method to predict the label description degree (a.k.a. label distribution) of a sample.
However, annotating label distributions for training samples is extremely costly.
We propose a novel method that learns an LDL model directly from logical labels, unifying label enhancement (LE) and LDL into a joint model.
arXiv Detail & Related papers (2023-03-13T04:31:35Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning, pursuing consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
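A minimal sketch of what label-distribution consistency can look like in PU learning, assuming the positive class prior pi is known: match the mean prediction on unlabeled data to pi. This is a simplification for illustration, not the paper's exact objective.

```python
import numpy as np

def dist_pu_loss(p_pos, p_unl, prior, w=1.0):
    """PU objective sketch: a standard log-loss on labeled positives
    plus a label-distribution consistency term that pulls the mean
    predicted positive probability on unlabeled data toward the
    known class prior pi."""
    eps = 1e-12
    supervised = -np.mean(np.log(p_pos + eps))   # labeled positives
    consistency = (np.mean(p_unl) - prior) ** 2  # match the prior pi
    return supervised + w * consistency
```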
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Label distribution learning via label correlation grid [9.340734188957727]
We propose a Label Correlation Grid (LCG) to model the uncertainty of label relationships.
Our network learns the LCG to accurately estimate the label distribution for each instance.
arXiv Detail & Related papers (2022-10-15T03:58:15Z)
- Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces [64.23172847182109]
We show that different negative sampling schemes implicitly trade off performance on dominant versus rare labels.
We provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance.
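One well-known way to remove the bias of training on a sampled subset of labels is the "logQ" correction from sampled softmax; the sketch below shows that standard technique (offered as background on sampling-bias correction, not as this paper's exact estimator).

```python
import numpy as np

def sampled_softmax_nll(logits, sample_probs, pos_index=0):
    """Sampled softmax with the logQ correction: subtracting log q_i
    from each sampled label's logit debiases training when negatives
    are drawn from a proposal distribution q, so frequently sampled
    (head) labels are not over-penalized relative to tail labels."""
    adj = logits - np.log(sample_probs)   # logQ correction
    adj -= adj.max()                      # numerical stability
    log_probs = adj - np.log(np.exp(adj).sum())
    return -log_probs[pos_index]          # NLL of the true label
```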
arXiv Detail & Related papers (2021-05-12T15:40:13Z)