Two-stage Training for Learning from Label Proportions
- URL: http://arxiv.org/abs/2105.10635v1
- Date: Sat, 22 May 2021 03:55:35 GMT
- Title: Two-stage Training for Learning from Label Proportions
- Authors: Jiabin Liu, Bo Wang, Xin Shen, Zhiquan Qi, Yingjie Tian
- Abstract summary: Learning from label proportions (LLP) aims at learning an instance-level classifier with label proportions in grouped training data.
We introduce the mixup strategy and symmetric cross-entropy to further reduce the label noise.
Our framework is model-agnostic, and demonstrates compelling performance improvement in extensive experiments.
- Score: 18.78148397471913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning from label proportions (LLP) aims at learning an instance-level
classifier with label proportions in grouped training data. Existing deep
learning based LLP methods utilize end-to-end pipelines to obtain the
proportional loss with Kullback-Leibler divergence between the bag-level prior
and posterior class distributions. However, unconstrained optimization of this
objective can hardly reach a solution consistent with the given proportions.
Moreover, for a probabilistic classifier, this strategy unavoidably yields
high-entropy conditional class distributions at the instance level. Both issues
further degrade instance-level classification performance. In this paper, we
regard these problems as noisy pseudo-labeling and instead impose strict
proportion consistency on the classifier through constrained optimization,
used as a continued training stage for existing LLP classifiers. In addition,
we introduce the mixup strategy and symmetric cross-entropy to further reduce
the label noise. Our framework is model-agnostic and demonstrates compelling
performance improvements in extensive experiments when incorporated into other
deep LLP models as a post-hoc phase.
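For concreteness, the pieces named above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' code; all function and tensor names are assumptions. It shows the bag-level KL proportional loss that existing end-to-end deep LLP methods optimize, plus the symmetric cross-entropy and mixup ingredients used to suppress pseudo-label noise.

```python
import torch
import torch.nn.functional as F

def proportion_kl_loss(logits, bag_prior, eps=1e-8):
    """Bag-level proportional loss: KL(prior || posterior).

    logits:    (B, C) instance logits for one bag of B instances.
    bag_prior: (C,)   given label proportions of the bag.
    The bag-level posterior is the average of the instance softmaxes.
    """
    posterior = F.softmax(logits, dim=1).mean(dim=0)
    return (bag_prior * ((bag_prior + eps).log() - (posterior + eps).log())).sum()

def symmetric_cross_entropy(logits, pseudo, alpha=1.0, beta=1.0):
    """SCE = alpha * CE + beta * reverse CE; more robust to noisy pseudo labels."""
    ce = F.cross_entropy(logits, pseudo)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    one_hot = F.one_hot(pseudo, logits.size(1)).float().clamp(min=1e-4)
    rce = -(pred * one_hot.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce

def mixup(x, y_soft, beta_param=1.0):
    """mixup: train on convex combinations of inputs and (soft) labels."""
    lam = torch.distributions.Beta(beta_param, beta_param).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y_soft + (1 - lam) * y_soft[perm]
```

The paper's contribution is to replace unconstrained optimization of the first loss with a constrained stage that enforces the proportions strictly; the sketch only illustrates the individual loss terms involved.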
Related papers
- Forming Auxiliary High-confident Instance-level Loss to Promote Learning from Label Proportions [17.36538357653019]
Learning from label proportions (LLP) aims to train a classifier by using bags of instances and the proportions of classes within bags, rather than annotated labels for each instance.
We propose a novel LLP method, namely Learning from Label Proportions with Auxiliary High-confident Instance-level Loss (L2P-AHIL).
We show that L2P-AHIL can surpass the existing baseline methods, and the performance gain can be more significant as the bag size increases.
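The summary does not spell out the loss, but a plausible minimal sketch of an auxiliary high-confidence instance-level term looks like the following; the threshold and the uniform weighting are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def high_confidence_instance_loss(logits, threshold=0.95):
    """Trust only confident instance predictions as pseudo labels and average
    their cross-entropy; unconfident instances contribute nothing."""
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = (conf >= threshold).float()
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (mask * loss).sum() / mask.sum().clamp(min=1.0)
```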
arXiv Detail & Related papers (2024-11-15T17:14:18Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
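As a rough illustration only (the specific probabilities below are assumptions, not LORT's published values), retargeting can be sketched as assigning a deliberately small probability to the true class and spreading the remaining mass uniformly over the other classes:

```python
import torch
import torch.nn.functional as F

def retargeted_labels(labels, num_classes, true_prob=0.1):
    """Replace one-hot targets with a small true-class probability and a
    large total probability distributed over the negative classes."""
    neg = (1.0 - true_prob) / (num_classes - 1)
    t = torch.full((labels.size(0), num_classes), neg)
    t.scatter_(1, labels.unsqueeze(1), true_prob)  # set true-class entries
    return t

def soft_cross_entropy(logits, targets):
    """Cross-entropy against soft target distributions."""
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```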
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning model with cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
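A minimal sketch of a prototype-centric loss in this spirit follows; the shapes, the temperature, and the way prototypes are built from pseudo-labeled features are assumptions, and DPL's actual decoupling of prototype updates is not reproduced here.

```python
import torch
import torch.nn.functional as F

def prototype_loss(features, pseudo_labels, num_classes, tau=0.1):
    """Build class prototypes from pseudo-labeled features, then classify
    each feature against the prototypes with a cosine-similarity softmax."""
    f = F.normalize(features, dim=1)                          # (N, D)
    one_hot = F.one_hot(pseudo_labels, num_classes).float()   # (N, C)
    protos = F.normalize(one_hot.t() @ f, dim=1)              # (C, D) unit prototypes
    logits = f @ protos.t() / tau                             # cosine sims as logits
    return F.cross_entropy(logits, pseudo_labels)
```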
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning [76.00798972439004]
Collaborative Sample Selection (CSS) removes noisy samples from the identified clean set.
We introduce a co-training mechanism with a contrastive loss in semi-supervised learning.
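The contrastive component can be illustrated with a standard NT-Xent loss; this is a generic SimCLR-style formulation, not necessarily the paper's exact one.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent: each embedding's positive is its augmented counterpart;
    all other embeddings in the batch act as negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2N, D)
    sim = z @ z.t() / tau                              # (2N, 2N) similarities
    sim.fill_diagonal_(float("-inf"))                  # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```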
arXiv Detail & Related papers (2023-10-24T05:37:20Z)
- Flexible Distribution Alignment: Towards Long-tailed Semi-supervised Learning with Proper Calibration [18.376601653387315]
Long-tailed semi-supervised learning (LTSSL) represents a practical scenario for semi-supervised applications.
This problem is often aggravated by discrepancies between labeled and unlabeled class distributions.
We introduce Flexible Distribution Alignment (FlexDA), a novel adaptive logit-adjusted loss framework.
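FlexDA builds on logit adjustment. A generic logit-adjusted cross-entropy is sketched below; the adaptive, dynamically estimated distribution that FlexDA actually aligns to is replaced here by a fixed class prior, as a simplifying assumption.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, labels, class_prior, tau=1.0):
    """Shift logits by the log class prior so rare classes are not
    systematically under-predicted under class imbalance."""
    adjusted = logits + tau * torch.log(class_prior + 1e-12)
    return F.cross_entropy(adjusted, labels)
```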
arXiv Detail & Related papers (2023-06-07T17:50:59Z)
- Class-Imbalanced Complementary-Label Learning via Weighted Loss [8.934943507699131]
Complementary-label learning (CLL) is widely used in weakly supervised classification.
It faces a significant challenge in real-world datasets when confronted with class-imbalanced training samples.
We propose a novel problem setting that enables learning from class-imbalanced complementary labels for multi-class classification.
arXiv Detail & Related papers (2022-09-28T16:02:42Z)
- PercentMatch: Percentile-based Dynamic Thresholding for Multi-Label Semi-Supervised Classification [64.39761523935613]
We propose a percentile-based threshold adjusting scheme to dynamically alter the score thresholds of positive and negative pseudo-labels for each class during the training.
We achieve strong performance on Pascal VOC2007 and MS-COCO datasets when compared to recent SSL methods.
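The thresholding scheme itself is simple to sketch: pick per-class thresholds at chosen percentiles of the current predicted scores, and recompute them as training progresses. Variable names and the quantile values below are assumptions.

```python
import torch

def percentile_thresholds(scores, q_pos=0.80, q_neg=0.05):
    """scores: (N, C) sigmoid outputs on unlabeled data.
    Returns per-class thresholds for positive and negative pseudo labels."""
    pos_thr = torch.quantile(scores, q_pos, dim=0)   # (C,)
    neg_thr = torch.quantile(scores, q_neg, dim=0)   # (C,)
    return pos_thr, neg_thr

# usage: positive pseudo labels where scores >= pos_thr,
#        negative pseudo labels where scores <= neg_thr
```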
arXiv Detail & Related papers (2022-08-30T01:27:48Z)
- Learning from Label Proportions by Learning with Label Noise [30.7933303912474]
Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags.
We provide a theoretically grounded approach to LLP based on a reduction to learning with label noise.
Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures.
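The reduction can be caricatured as follows; this is a deliberately naive sketch of the intuition, and the paper's actual, theoretically grounded treatment is more careful than this sampling scheme. Each instance's label is treated as drawn from its bag's proportions, producing a noisily labeled dataset that noise-robust training can consume.

```python
import torch

def bags_to_noisy_labels(bag_sizes, bag_priors):
    """bag_sizes: list of ints; bag_priors: list of (C,) proportion tensors.
    Draw one noisy label per instance according to its bag's proportions."""
    return torch.cat([torch.multinomial(p, n, replacement=True)
                      for n, p in zip(bag_sizes, bag_priors)])
```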
arXiv Detail & Related papers (2022-03-04T18:52:21Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which utilizes the ratio between positive and negative labels for each class during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
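A minimal sketch of the masking idea follows; the per-class masking probabilities are assumed inputs here, whereas the paper derives them from the gap between the observed and a target positive-to-negative ratio.

```python
import torch
import torch.nn.functional as F

def plm_bce(logits, targets, mask_prob_pos, mask_prob_neg):
    """Randomly drop a fraction of positive/negative label terms per class so
    the effective positive-to-negative ratio moves toward a target ratio.

    logits, targets: (N, C) with float targets in {0, 1};
    mask_prob_pos, mask_prob_neg: (C,) per-class masking probabilities.
    """
    keep_pos = (torch.rand_like(targets) >= mask_prob_pos).float()
    keep_neg = (torch.rand_like(targets) >= mask_prob_neg).float()
    keep = torch.where(targets > 0.5, keep_pos, keep_neg)
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (keep * loss).sum() / keep.sum().clamp(min=1.0)
```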
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- Adaptive Adversarial Logits Pairing [65.51670200266913]
The adversarial training solution Adversarial Logits Pairing (ALP) tends to rely on fewer high-contribution features than vulnerable models do.
Motivated by these observations, we design an Adaptive Adversarial Logits Pairing (AALP) solution by modifying the training process and training target of ALP.
AALP consists of an adaptive feature optimization module with Guided Dropout to systematically pursue fewer high-contribution features.
arXiv Detail & Related papers (2020-05-25T03:12:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.