Generalized Label Enhancement with Sample Correlations
- URL: http://arxiv.org/abs/2004.03104v3
- Date: Mon, 12 Apr 2021 02:47:35 GMT
- Title: Generalized Label Enhancement with Sample Correlations
- Authors: Qinghai Zheng, Jihua Zhu, Haoyu Tang, Xinyuan Liu, Zhongyu Li, and
Huimin Lu
- Abstract summary: We propose two novel label enhancement methods: Label Enhancement with Sample Correlations (LESC) and generalized Label Enhancement with Sample Correlations (gLESC).
Benefiting from the sample correlations, the proposed methods can boost the performance of label enhancement.
- Score: 24.582764493585362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, label distribution learning (LDL) has drawn much attention in
machine learning, where an LDL model is learned from labeled instances. Different
from single-label and multi-label annotations, label distributions describe an
instance with multiple labels of differing intensities and therefore suit more
general scenarios. Since most existing machine learning datasets merely provide
logical labels, label distributions are unavailable in many real-world
applications. To handle this problem, we propose two novel label enhancement
methods, i.e., Label Enhancement with Sample Correlations (LESC) and
generalized Label Enhancement with Sample Correlations (gLESC). More
specifically, LESC employs a low-rank representation of samples in the feature
space, and gLESC leverages a tensor multi-rank minimization to further
investigate the sample correlations in both the feature space and label space.
Benefiting from the sample correlations, the proposed methods can boost the
performance of label enhancement. Extensive experiments on 14 benchmark
datasets demonstrate the effectiveness and superiority of our methods.
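To make this concrete, here is a minimal NumPy sketch of the LESC idea under simplifying assumptions: it solves a reduced low-rank representation problem (dropping the sparse-noise term of standard low-rank representation and the paper's joint optimization) by proximal gradient with singular value thresholding, then uses the learned coefficients as sample-correlation weights to turn logical labels into label distributions. The function names and the propagation rule are illustrative, not the authors' exact formulation.

```python
# Sketch of LESC-style label enhancement (simplified; not the paper's solver).
import numpy as np

def low_rank_representation(X, tau=1.0, n_iter=200):
    """Proximal gradient for  min_Z 0.5*||X - X@Z||_F^2 + tau*||Z||_*.
    X: (d, n) feature matrix with samples as columns."""
    n = X.shape[1]
    Z = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - X)
        A = Z - step * grad
        # Singular value thresholding = proximal operator of the nuclear norm.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        Z = U @ np.diag(np.maximum(s - step * tau, 0.0)) @ Vt
    return Z

def enhance_labels(L, Z):
    """Propagate logical labels L (n, c) through sample correlations |Z|,
    then normalize each row into a label distribution."""
    W = np.abs(Z) + np.abs(Z).T                 # symmetrized correlations
    W /= W.sum(axis=1, keepdims=True) + 1e-12
    D = W @ L + L                               # neighbors' labels + own
    return D / D.sum(axis=1, keepdims=True)

# Toy usage: 6 samples with 3-dim features and 2 logical labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 6))
L = (rng.random((6, 2)) > 0.5).astype(float)
L[L.sum(1) == 0, 0] = 1.0                       # every sample needs a label
print(enhance_labels(L, low_rank_representation(X)).round(3))
```

gLESC goes further by stacking feature-space and label-space representations into a tensor and minimizing its multi-rank, which this sketch does not attempt.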
Related papers
- Incremental Label Distribution Learning with Scalable Graph Convolutional Networks [41.02170058889797]
We introduce Incremental Label Distribution Learning (ILDL) and analyze its key issues regarding training samples and inter-label relationships.
Specifically, we develop a New-label-aware Gradient Compensation Loss to speed up the learning of new labels, and we represent inter-label relationships as a graph (sketched below).
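As one plausible reading of the graph component (the paper's exact architecture and the gradient-compensation loss are not reproduced here), inter-label relationships can be encoded as a normalized label co-occurrence graph and mixed with a vanilla GCN layer:

```python
# Hypothetical sketch: label co-occurrence graph + one GCN propagation step.
import numpy as np

def label_graph(Y, eps=1e-12):
    """Y: (n, c) binary label matrix -> symmetrically normalized adjacency."""
    A = Y.T @ Y                                  # label co-occurrence counts
    np.fill_diagonal(A, A.diagonal() + 1.0)      # self-loops, as in vanilla GCN
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1) + eps))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One graph-convolution step: ReLU(A_hat @ H @ W)."""
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
Y = (rng.random((100, 5)) > 0.7).astype(float)   # 100 samples, 5 labels
H = rng.normal(size=(5, 16))                     # initial label embeddings
W = rng.normal(size=(16, 16)) * 0.1              # layer weights
print(gcn_layer(label_graph(Y), H, W).shape)     # (5, 16)
```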
arXiv Detail & Related papers (2024-11-20T07:49:51Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the usefulness of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Scalable Label Distribution Learning for Multi-Label Classification [43.52928088881866]
Multi-label classification (MLC) refers to the problem of tagging a given instance with a set of relevant labels.
Most existing MLC methods assume that the correlation between the two labels in each pair is symmetric; a brief counter-illustration follows below.
They also tie their learning processes to the number of labels, which makes computational complexity a bottleneck when scaling up to a large output space.
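A quick illustration of why symmetry can fail: conditional co-occurrence between two labels is directional, so P(B|A) and P(A|B) generally differ. The toy label matrix below is purely illustrative:

```python
# Asymmetric label correlation on a toy binary label matrix (columns A, B).
import numpy as np

Y = np.array([[1, 1], [1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
p_b_given_a = Y[Y[:, 0] == 1, 1].mean()   # P(B | A) = 0.5
p_a_given_b = Y[Y[:, 1] == 1, 0].mean()   # P(A | B) ~= 0.667
print(p_b_given_a, p_a_given_b)           # directional, hence asymmetric
```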
arXiv Detail & Related papers (2023-11-28T06:52:53Z)
- Channel-Wise Contrastive Learning for Learning with Noisy Labels [60.46434734808148]
We introduce channel-wise contrastive learning (CWCL) to distinguish authentic label information from noise.
Unlike conventional instance-wise contrastive learning (IWCL), CWCL tends to yield more nuanced and resilient features aligned with the authentic labels.
Our strategy is twofold: first, use CWCL to extract pertinent features and identify cleanly labeled samples; second, progressively fine-tune the model on these samples. A sketch of the channel-wise loss follows below.
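As a hedged sketch of what "channel-wise" contrast might look like (an interpretation of the summary, not the paper's released code), the InfoNCE loss below treats feature channels, rather than instances, as the contrasted units:

```python
# Hypothetical channel-wise InfoNCE: columns (channels) are the samples.
import torch
import torch.nn.functional as F

def channel_wise_infonce(f1, f2, temperature=0.5):
    """f1, f2: (batch, channels) features from two views of the same batch.
    Each channel's activation pattern across the batch is one unit."""
    z1 = F.normalize(f1.t(), dim=1)              # (channels, batch)
    z2 = F.normalize(f2.t(), dim=1)
    logits = z1 @ z2.t() / temperature           # (channels, channels)
    targets = torch.arange(z1.size(0))           # channel i matches channel i
    return F.cross_entropy(logits, targets)

f1, f2 = torch.randn(32, 64), torch.randn(32, 64)
print(channel_wise_infonce(f1, f2))              # scalar loss
```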
arXiv Detail & Related papers (2023-08-14T06:04:50Z)
- Contrastive Label Enhancement [13.628665406039609]
We propose Contrastive Label Enhancement (ConLE), which generates high-level features via a contrastive learning strategy.
We then leverage the obtained high-level features to recover label distributions through a well-designed training strategy.
arXiv Detail & Related papers (2023-05-16T14:53:07Z)
- Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation [92.4959898591397]
We analyze the implicit sharing of complementary labels on nearby instances during training.
We propose a novel technique that enhances the sharing efficiency via complementary-label augmentation.
Our results confirm that complementary-label augmentation can systematically improve empirical performance over state-of-the-art CLL models.
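One plausible, hypothetical reading of such augmentation, sketched under the assumption that an instance can inherit the complementary labels (classes ruled out) of its nearest feature-space neighbors; the paper's actual rule may differ:

```python
# Hypothetical complementary-label augmentation via nearest neighbors.
import numpy as np

def augment_complementary(X, comp, k=1):
    """X: (n, d) features; comp: list of sets of complementary labels
    (classes known NOT to be the true one). Returns augmented sets."""
    dists = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(dists, np.inf)              # ignore self-distance
    out = []
    for i in range(len(X)):
        nbrs = np.argsort(dists[i])[:k]
        shared = set().union(*(comp[j] for j in nbrs))
        # In practice only confident neighbors should share, or the true
        # class could be ruled out by mistake.
        out.append(comp[i] | shared)
    return out

X = np.array([[0.0], [0.1], [5.0], [5.1]])
comp = [{2}, {1}, {0}, {1}]
print(augment_complementary(X, comp))  # e.g. [{1, 2}, {1, 2}, {0, 1}, {0, 1}]
```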
arXiv Detail & Related papers (2023-05-15T04:43:14Z)
- Deep Partial Multi-Label Learning with Graph Disambiguation [27.908565535292723]
We propose a novel deep Partial multi-Label model with grAph-disambIguatioN (PLAIN).
Specifically, we introduce the instance-level and label-level similarities to recover label confidences.
At each training epoch, labels are propagated on the instance and label graphs to produce relatively accurate pseudo-labels.
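The propagation step itself is the classic label-propagation iteration; here is a minimal sketch on a single instance graph (PLAIN additionally uses a label-level graph and learned deep features, which are omitted):

```python
# Label propagation over an instance graph to disambiguate candidate labels.
import numpy as np

def propagate(S, Y, alpha=0.5, n_iter=50):
    """S: (n, n) row-normalized instance similarities; Y: (n, c) binary
    candidate-label mask. Iterates F <- alpha*S@F + (1-alpha)*Y, then
    renormalizes within each candidate set to get pseudo-label confidences."""
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1.0 - alpha) * Y
    F = F * Y                                    # confidence only on candidates
    return F / (F.sum(axis=1, keepdims=True) + 1e-12)

rng = np.random.default_rng(0)
S = rng.random((6, 6)); S /= S.sum(axis=1, keepdims=True)
Y = (rng.random((6, 3)) > 0.4).astype(float)
Y[Y.sum(1) == 0, 0] = 1.0                        # non-empty candidate sets
print(propagate(S, Y).round(3))                  # rows sum to 1
```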
arXiv Detail & Related papers (2023-05-10T04:02:08Z)
- Label distribution learning via label correlation grid [9.340734188957727]
We propose a Label Correlation Grid (LCG) to model the uncertainty of label relationships.
Our network learns the LCG to accurately estimate the label distribution for each instance.
arXiv Detail & Related papers (2022-10-15T03:58:15Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method named SMILE (Single-positive MultI-label learning with Label Enhancement) is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent partial label learning and assume that each example is associated with a latent label distribution, in which a real number for each label represents the degree to which it describes the instance.
arXiv Detail & Related papers (2021-10-25T12:50:26Z)
- An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels [49.036212158261215]
Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications.
Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs).
We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs.
We propose a new state-of-the-art method which combines BERT with LWANs.
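For reference, here is a compact PyTorch sketch of a generic label-wise attention head, the mechanism LWANs are built on (dimensions are placeholders and the encoder, e.g. BERT, is omitted):

```python
# Generic label-wise attention head: one attention query per label.
import torch
import torch.nn as nn

class LWANHead(nn.Module):
    """Each label attends over token representations H (batch, tokens, dim)
    and scores its own attended summary of the document."""
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.scorers = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, H):                        # H: (B, T, d)
        attn = torch.softmax(H @ self.queries.t(), dim=1)  # (B, T, L)
        ctx = torch.einsum('btl,btd->bld', attn, H)        # (B, L, d)
        return (ctx * self.scorers).sum(-1) + self.bias    # (B, L) logits

H = torch.randn(2, 128, 768)         # e.g., token outputs from an encoder
print(LWANHead(768, 10)(H).shape)    # torch.Size([2, 10])
```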
arXiv Detail & Related papers (2020-10-04T18:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.