Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation
- URL: http://arxiv.org/abs/2305.08344v1
- Date: Mon, 15 May 2023 04:43:14 GMT
- Title: Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation
- Authors: Wei-I Lin, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
- Abstract summary: We analyze the implicit sharing of complementary labels on nearby instances during training.
We propose a novel technique that enhances the sharing efficiency via complementary-label augmentation.
Our results confirm that complementary-label augmentation can systematically improve empirical performance over state-of-the-art CLL models.
- Score: 92.4959898591397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complementary-label Learning (CLL) is a form of weakly supervised learning
that trains an ordinary classifier using only complementary labels, which are
the classes that certain instances do not belong to. While existing CLL studies
typically use novel loss functions or training techniques to solve this
problem, few studies focus on how complementary labels collectively provide
information to train the ordinary classifier. In this paper, we fill the gap by
analyzing the implicit sharing of complementary labels on nearby instances
during training. Our analysis reveals that the efficiency of implicit label
sharing is closely related to the performance of existing CLL models. Based on
this analysis, we propose a novel technique that enhances the sharing
efficiency via complementary-label augmentation, which explicitly propagates
additional complementary labels to each instance. We carefully design the
augmentation process to enrich the data with new and accurate complementary
labels, which provide CLL models with fresh and valuable information to enhance
the sharing efficiency. We then verify our proposed technique by conducting
thorough experiments on both synthetic and real-world datasets. Our results
confirm that complementary-label augmentation can systematically improve
empirical performance over state-of-the-art CLL models.
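To make the setting concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of CLL training with the standard forward-correction loss under the uniform complementary-label assumption, together with a simple k-nearest-neighbour form of complementary-label augmentation in the spirit of the abstract: each instance explicitly inherits the complementary labels of nearby instances. The feature space, class count K, and neighbourhood size k are illustrative assumptions.

```python
# Minimal sketch of CLL with k-NN complementary-label augmentation.
# Hypothetical code, not the paper's implementation; assumes uniformly
# generated complementary labels and an arbitrary feature extractor.
import torch
import torch.nn.functional as F

K = 10  # number of classes (assumption)

def forward_correction_loss(logits, comp_labels):
    # Under uniform generation, p(comp = c | x) = (1 - p(y = c | x)) / (K - 1);
    # the loss is the negative log-likelihood of the observed complementary label.
    p = F.softmax(logits, dim=1)
    q = ((1.0 - p) / (K - 1)).clamp_min(1e-12)
    return F.nll_loss(q.log(), comp_labels)

def augment_comp_labels(features, comp_labels, k=5):
    # Explicitly propagate complementary labels from the k nearest
    # neighbours in feature space, yielding a multi-hot label matrix.
    dists = torch.cdist(features, features)
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]  # drop self-match
    multi = torch.zeros(len(features), K)
    multi[torch.arange(len(features)), comp_labels] = 1.0
    for i, nbrs in enumerate(knn):
        multi[i, comp_labels[nbrs]] = 1.0  # inherit neighbours' labels
    return multi
```

A loss over the augmented labels could then sum the forward-correction term over every complementary label attached to an instance, which is one way the augmentation can feed additional information into existing CLL losses.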
Related papers
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the value of VC learning for dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Channel-Wise Contrastive Learning for Learning with Noisy Labels [60.46434734808148]
We introduce channel-wise contrastive learning (CWCL) to distinguish authentic label information from noise.
Unlike conventional instance-wise contrastive learning (IWCL), CWCL tends to yield more nuanced and resilient features aligned with the authentic labels.
Our strategy is twofold: first, CWCL extracts pertinent features to identify cleanly labeled samples; second, the model is progressively fine-tuned on these samples.
arXiv Detail & Related papers (2023-08-14T06:04:50Z)
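The summary above contrasts channel-wise with instance-wise contrastive learning. As one plausible reading (hypothetical, not the paper's code), the same InfoNCE objective can be applied to the transposed feature matrix, so that each channel's response pattern across the batch, rather than each instance's embedding, becomes the contrasted unit:

```python
# Hypothetical sketch: instance-wise vs channel-wise InfoNCE.
# One plausible reading of CWCL, not the paper's implementation.
import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.1):
    # Rows of a and b are the contrasted units; matched rows are positives.
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = (a @ b.t()) / tau
    return F.cross_entropy(logits, torch.arange(len(a)))

def iwcl_loss(z1, z2):  # instance-wise: contrast N embeddings of shape [N, D]
    return info_nce(z1, z2)

def cwcl_loss(z1, z2):  # channel-wise: contrast D channels of shape [D, N]
    return info_nce(z1.t(), z2.t())
```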
- CLImage: Human-Annotated Datasets for Complementary-Label Learning [8.335164415521838]
We develop a protocol to collect complementary labels from human annotators.
These datasets represent the very first real-world CLL datasets.
We discover that the biased nature of human-annotated complementary labels and the difficulty of validating models with only complementary labels remain outstanding barriers to practical CLL.
arXiv Detail & Related papers (2023-05-15T01:48:53Z)
- Complementary Labels Learning with Augmented Classes [22.460256396941528]
Complementary Labels Learning (CLL) arises in many real-world tasks such as private question classification and online learning.
We propose a novel problem setting called Complementary Labels Learning with Augmented Classes (CLLAC).
By using unlabeled data, we derive an unbiased estimator of the classification risk for CLLAC, which is provably consistent.
arXiv Detail & Related papers (2022-11-19T13:55:27Z)
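For background on unbiased risk estimation in this family of methods: under uniformly generated complementary labels, the ordinary classification risk can be rewritten as E[sum_j loss(x, j) - (K - 1) * loss(x, comp)]. A minimal sketch of that classical estimator follows (the CLLAC estimator itself, which additionally exploits unlabeled data to handle augmented classes, is not reproduced here):

```python
# Sketch of the classical unbiased risk estimator for CLL with uniform
# complementary labels (background only; not the CLLAC estimator).
import torch
import torch.nn.functional as F

def cll_unbiased_risk(logits, comp_labels, num_classes):
    log_p = F.log_softmax(logits, dim=1)
    sum_all = (-log_p).sum(dim=1)  # cross-entropy loss against every class j
    loss_comp = F.nll_loss(log_p, comp_labels, reduction="none")
    return (sum_all - (num_classes - 1) * loss_comp).mean()
```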
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
Ensemble labels are adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels.
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
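The ensemble-label strategy mentioned above is, in spirit, an exponential moving average over the model's predictions; a minimal, hypothetical sketch (the momentum value and usage pattern are assumptions):

```python
# Hypothetical sketch of ensemble (EMA) pseudo-label updating to stabilize
# training under label noise; not the Transductive CLIP implementation.
import torch
import torch.nn.functional as F

def update_ensemble_labels(ensemble, logits, momentum=0.9):
    # Smooth the current predictions into the running ensemble distribution.
    return momentum * ensemble + (1.0 - momentum) * F.softmax(logits, dim=1)

# Per epoch: ensemble = update_ensemble_labels(ensemble, model_logits),
# and the pseudo labels for the next epoch are ensemble.argmax(dim=1).
```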
- Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL achieves significant performance improvements over state-of-the-art SSL methods on the standard CIFAR100 and STL10 datasets.
arXiv Detail & Related papers (2022-03-04T12:18:23Z)
- Learning Fair Classifiers with Partially Annotated Group Labels [22.838927494573436]
We consider a more practical scenario, dubbed Algorithmic Fairness with Partially annotated Group labels (FairPG).
We propose a simple auxiliary group label assignment method (CGL) that is readily applicable to any fairness-aware learning strategy.
We show that our method outperforms the vanilla pseudo-labeling strategy in terms of fairness criteria.
arXiv Detail & Related papers (2021-11-29T15:11:18Z)
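A minimal sketch of confidence-based pseudo group-label assignment in this spirit (hypothetical; the threshold rule and random fallback are assumptions, not necessarily the paper's exact CGL):

```python
# Hypothetical sketch: trust a group classifier's prediction only when it
# is confident; otherwise fall back to a random group so that uncertain
# samples do not systematically bias the fairness-aware objective.
import torch
import torch.nn.functional as F

def assign_group_labels(group_logits, threshold=0.8):
    probs = F.softmax(group_logits, dim=1)
    conf, pred = probs.max(dim=1)
    rand = torch.randint(group_logits.size(1), pred.shape)
    return torch.where(conf >= threshold, pred, rand)
```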
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
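A minimal, hypothetical sketch of a neighborhood contrastive objective, treating each embedding's k nearest batch mates as extra positives (illustrative of the idea, not the paper's exact loss):

```python
# Hypothetical neighborhood contrastive loss: pull each embedding toward
# its k most similar batch mates, treated as pseudo-positives.
import torch
import torch.nn.functional as F

def neighborhood_contrastive_loss(z, k=5, tau=0.1):
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau
    sim.fill_diagonal_(float("-inf"))  # never use self as a positive
    knn = sim.topk(k, dim=1).indices   # indices of the k nearest neighbours
    log_p = F.log_softmax(sim, dim=1)  # softmax over all other samples
    return -log_p.gather(1, knn).mean()
```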
- Generalized Label Enhancement with Sample Correlations [24.582764493585362]
We propose two novel label enhancement methods, i.e., Label Enhancement with Sample Correlations (LESC) and generalized Label Enhancement with Sample Correlations (gLESC).
Benefiting from the sample correlations, the proposed methods can boost the performance of label enhancement.
arXiv Detail & Related papers (2020-04-07T03:32:36Z)
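A minimal sketch of the general idea behind label enhancement via sample correlations (hypothetical; the paper derives the sample correlations differently, while this sketch uses a plain RBF similarity graph for illustration):

```python
# Hypothetical sketch: smooth logical (0/1) labels into label distributions
# by propagating them over a sample-similarity graph. Illustrative only.
import numpy as np

def enhance_labels(features, logical_labels, gamma=1.0):
    diff = features[:, None, :] - features[None, :, :]
    w = np.exp(-gamma * (diff ** 2).sum(-1))       # RBF sample similarity
    w /= w.sum(axis=1, keepdims=True)              # row-stochastic graph
    dist = w @ logical_labels                      # propagate labels
    return dist / dist.sum(axis=1, keepdims=True)  # normalized distributions
```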
- Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation [35.593312267921256]
Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum.
We propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels.
arXiv Detail & Related papers (2020-01-13T21:00:46Z)
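A minimal, hypothetical sketch of the incremental-label idea: classes beyond the current curriculum step share one placeholder label, and the number of revealed classes grows across phases (the adaptive-compensation phase is not sketched):

```python
# Hypothetical sketch of incremental label introduction in the spirit of
# LILAC; the placeholder scheme is an assumption for illustration.
import torch

def remap_labels(labels, num_revealed, placeholder_class):
    # Labels for classes not yet introduced collapse into the placeholder.
    revealed = labels < num_revealed
    return torch.where(revealed, labels, torch.full_like(labels, placeholder_class))

# Usage: train with num_revealed = 1, 2, ..., K, using placeholder_class = K
# (an extra output unit) for all classes not yet introduced.
```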
This list is automatically generated from the titles and abstracts of the papers on this site.