Forming Auxiliary High-confident Instance-level Loss to Promote Learning from Label Proportions
- URL: http://arxiv.org/abs/2411.10364v1
- Date: Fri, 15 Nov 2024 17:14:18 GMT
- Title: Forming Auxiliary High-confident Instance-level Loss to Promote Learning from Label Proportions
- Authors: Tianhao Ma, Han Chen, Juncheng Hu, Yungang Zhu, Ximing Li
- Abstract summary: Learning from label proportions (LLP) aims to train a classifier by using bags of instances and the proportions of classes within bags, rather than annotated labels for each instance.
We propose a novel LLP method, namely Learning from Label Proportions with Auxiliary High-confident Instance-level Loss (L2P-AHIL).
We show that L2P-AHIL can surpass the existing baseline methods, and the performance gain can be more significant as the bag size increases.
- Score: 17.36538357653019
- Abstract: Learning from label proportions (LLP), a challenging weakly-supervised learning task, aims to train a classifier using bags of instances and the proportions of classes within bags, rather than annotated labels for each instance. Beyond the traditional bag-level loss, the mainstream methodology of LLP is to incorporate an auxiliary instance-level loss with pseudo-labels formed by predictions. Unfortunately, we empirically observed that the pseudo-labels are often inaccurate due to over-smoothing, especially in scenarios with large bag sizes, hurting the classifier induction. To alleviate this problem, we suggest a novel LLP method, namely Learning from Label Proportions with Auxiliary High-confident Instance-level Loss (L^2P-AHIL). Specifically, we propose a dual entropy-based weight (DEW) method to adaptively measure the confidence of pseudo-labels. It simultaneously emphasizes accurate predictions at the bag level and avoids overly smoothed predictions. We then form a high-confident instance-level loss with DEW and jointly optimize it with the bag-level loss in a self-training manner. Experimental results on benchmark datasets show that L^2P-AHIL can surpass existing baseline methods, and the performance gain becomes more significant as the bag size increases.
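
To make the mechanism concrete, here is a minimal PyTorch sketch of how a dual entropy-based weight and the joint bag/instance objective could be combined, based only on the abstract; the exact weighting formula, the function names (`norm_entropy`, `dew_weights`, `l2p_ahil_loss`), and the form of the bag-level loss are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def norm_entropy(p, eps=1e-8):
    # Shannon entropy of each row, normalized to [0, 1] by log(K).
    h = -(p * (p + eps).log()).sum(dim=-1)
    return h / torch.log(torch.tensor(p.size(-1), dtype=p.dtype))

def dew_weights(inst_probs, bag_props):
    # Dual entropy-based weight (hypothetical form): high when the
    # instance prediction is sharp (not over-smoothed) AND the bag-level
    # mean prediction matches the given proportions.
    w_inst = 1.0 - norm_entropy(inst_probs)          # (N,) sharpness term
    bag_pred = inst_probs.mean(dim=0)                # (K,) bag estimate
    bag_err = -(bag_props * (bag_pred + 1e-8).log()).sum()
    w_bag = torch.exp(-bag_err)                      # scalar in (0, 1]
    return w_inst * w_bag

def l2p_ahil_loss(inst_logits, bag_props):
    # Bag-level proportion loss + DEW-weighted instance-level loss.
    probs = inst_logits.softmax(dim=-1)
    bag_loss = -(bag_props * (probs.mean(0) + 1e-8).log()).sum()
    pseudo = probs.argmax(dim=-1)                    # self-training targets
    w = dew_weights(probs.detach(), bag_props)       # confidence weights
    inst_loss = (w * F.cross_entropy(inst_logits, pseudo,
                                     reduction="none")).mean()
    return bag_loss + inst_loss
```

Detaching the probabilities inside `dew_weights` keeps the confidence weights out of the gradient path, which matches the self-training reading of the abstract.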
Related papers
- Theoretical Proportion Label Perturbation for Learning from Label Proportions in Large Bags [5.842419815638353]
Learning from label proportions (LLP) is a weakly supervised learning problem that trains an instance-level classifier from the label proportions of bags.
A challenge in LLP arises when the number of instances in a bag (the bag size) is large: since the proportion loss requires predictions for every instance in a bag, traditional LLP methods run into GPU memory limitations.
This study aims to develop an LLP method capable of learning from bags with large sizes.
arXiv Detail & Related papers (2024-08-26T09:24:36Z) - Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the promise of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z) - MixBag: Bag-Level Data Augmentation for Learning from Label Proportions [4.588028371034407]
Learning from label proportions (LLP) is a promising weakly supervised learning problem.
We propose a bag-level data augmentation method for LLP called MixBag.
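
A minimal sketch of the bag-mixing idea as described in the summary: a synthetic bag is formed by sampling instances from two bags, and its label proportion is the matching convex combination. The sampling scheme and the Beta prior on the mixing ratio are assumptions; the paper's exact procedure (including any confidence-based loss) may differ.

```python
import torch

def mix_bags(x1, p1, x2, p2, lam=None):
    # Build a synthetic bag from a lam-fraction of bag 1 and the rest
    # from bag 2. x1, x2: (N, ...) instances; p1, p2: (K,) proportions.
    if lam is None:
        lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    n = min(x1.size(0), x2.size(0))
    n1 = int(round(lam * n))
    idx1 = torch.randperm(x1.size(0))[:n1]
    idx2 = torch.randperm(x2.size(0))[:n - n1]
    x_mix = torch.cat([x1[idx1], x2[idx2]], dim=0)
    # Label proportions of the synthetic bag (exact in expectation).
    p_mix = (n1 / n) * p1 + ((n - n1) / n) * p2
    return x_mix, p_mix
```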
arXiv Detail & Related papers (2023-08-17T07:06:50Z) - Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn from uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
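
A rough PyTorch sketch of the shrunk-space idea, assuming competitor classes are removed in order of confidence until the renormalized top-1 probability passes a threshold `tau`; the selection rule and loop structure are guesses from the summary, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

def shrunk_consistency(logits_w, logits_s, tau=0.95):
    # For weak-aug predictions below the confidence threshold, drop the
    # strongest competitor classes until the renormalized top-1
    # confidence reaches tau, then apply cross-entropy to the strong-aug
    # logits restricted to that shrunk space.
    p_w = logits_w.softmax(-1).detach()   # weak branch supplies targets only
    conf, top1 = p_w.max(-1)
    num_classes = logits_w.size(1)
    losses = []
    for i in range(logits_w.size(0)):
        order = p_w[i].argsort(descending=True)   # order[0] == top1[i]
        keep = order                              # full space by default
        for drop in range(num_classes):
            keep = torch.cat([order[:1], order[1 + drop:]])
            if p_w[i, top1[i]] / p_w[i, keep].sum() >= tau:
                break
        # the top-1 class sits at index 0 of the shrunk space
        target = torch.zeros(1, dtype=torch.long, device=logits_s.device)
        losses.append(F.cross_entropy(logits_s[i, keep].unsqueeze(0), target))
    return torch.stack(losses).mean()
```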
arXiv Detail & Related papers (2023-08-13T14:05:24Z) - Easy Learning from Label Proportions [17.71834385754893]
EasyLLP is a flexible and simple-to-implement debiasing approach based on aggregate labels.
Our technique allows us to accurately estimate the expected loss of an arbitrary model at an individual level.
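
The summary does not spell out the estimator itself. For orientation only, the individual-level quantity being targeted, the expected loss of a model $f$ on instance $x$ under a bag's label proportions $\alpha$, can be written as the soft-label average below; EasyLLP's contribution is a debiased estimate of such quantities from aggregate labels, whose exact correction terms are in the paper.

```latex
\mathbb{E}_{y \sim \alpha}\left[\ell(f(x), y)\right] \;=\; \sum_{c=1}^{K} \alpha_c \, \ell(f(x), c)
```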
arXiv Detail & Related papers (2023-02-06T20:41:38Z) - SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
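
A short sketch of the quantity-quality idea: instead of a hard confidence threshold, every pseudo-label is kept but weighted, here by a truncated Gaussian over confidence. The Gaussian form matches my reading of SoftMatch, but treat the details (and the EMA estimation of `mu` and `sigma`, not shown) as assumptions.

```python
import torch

def softmatch_weights(probs, mu, sigma):
    # Confidence above the (EMA-estimated) mean mu gets full weight;
    # below it, the weight decays like a truncated Gaussian, so weak
    # pseudo-labels are down-weighted rather than discarded.
    conf = probs.max(dim=-1).values
    w = torch.exp(-((conf - mu) ** 2) / (2 * sigma ** 2))
    return torch.where(conf >= mu, torch.ones_like(w), w)
```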
arXiv Detail & Related papers (2023-01-26T03:53:25Z) - PercentMatch: Percentile-based Dynamic Thresholding for Multi-Label Semi-Supervised Classification [64.39761523935613]
We propose a percentile-based threshold adjusting scheme to dynamically alter the score thresholds of positive and negative pseudo-labels for each class during the training.
We achieve strong performance on Pascal VOC2007 and MS-COCO datasets when compared to recent SSL methods.
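
A minimal sketch of percentile-based thresholding, assuming thresholds are read off the per-class score distribution of a batch; the paper likely maintains running statistics and separate positive/negative percentiles, so take the details as illustrative.

```python
import torch

def percentile_thresholds(scores, q=0.9):
    # Per-class pseudo-label thresholds at the q-th percentile of the
    # predicted scores, so thresholds adapt per class during training.
    # scores: (N, K) sigmoid outputs for multi-label classification.
    return torch.quantile(scores, q, dim=0)   # (K,) per-class thresholds

# Positive pseudo-labels: score above the class threshold; negatives
# analogously, with a lower percentile on the same distribution.
```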
arXiv Detail & Related papers (2022-08-30T01:27:48Z) - Semi-supervised Object Detection via Virtual Category Learning [68.26956850996976]
This paper proposes to use confusing samples proactively without label correction.
Specifically, a virtual category (VC) is assigned to each confusing sample.
This is achieved by constraining the embedding distance between the training sample and the virtual category.
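
A hypothetical sketch of the virtual-category loss for a single confusing sample, assuming the VC's classifier weight is the sample's own detached embedding and classification is done over the augmented weight set; the construction of the VC weight and the distance/temperature details are assumptions.

```python
import torch
import torch.nn.functional as F

def vc_loss(feat, class_weights, temperature=1.0):
    # feat: (D,) embedding of one confusing sample.
    # class_weights: (K, D) real classifier weights.
    vc_weight = feat.detach().unsqueeze(0)            # (1, D) virtual class
    weights = torch.cat([class_weights, vc_weight])   # (K+1, D)
    logits = (F.normalize(feat, dim=-1).unsqueeze(0)
              @ F.normalize(weights, dim=-1).t())     # cosine similarities
    target = torch.tensor([weights.size(0) - 1],
                          device=feat.device)         # index of the VC
    return F.cross_entropy(logits / temperature, target)
```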
arXiv Detail & Related papers (2022-07-07T16:59:53Z) - Learning from Label Proportions by Learning with Label Noise [30.7933303912474]
Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags.
We provide a theoretically grounded approach to LLP based on a reduction to learning with label noise.
Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures.
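
One natural reading of the reduction, sketched below: draw each instance's label from its bag's proportions, yielding an ordinary noisily-labeled dataset whose noise rates are known from the proportions, after which noise-robust training methods apply. The sampling step is my illustration; the paper's reduction and theory are more precise.

```python
import torch

def sample_noisy_labels(bag_props, bag_size, generator=None):
    # Assign each instance in a bag a random label drawn from the bag's
    # label proportions. bag_props: (K,) proportions summing to 1.
    return torch.multinomial(bag_props, bag_size, replacement=True,
                             generator=generator)
```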
arXiv Detail & Related papers (2022-03-04T18:52:21Z) - L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method, named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples through meta-learning.
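
A simplified sketch of the bootstrapping objective: each sample's loss mixes the observed label with the model's own pseudo-label under per-sample weights. In the paper these weights come from a meta-learning inner loop (not shown); here they are plain inputs.

```python
import torch
import torch.nn.functional as F

def l2b_loss(logits, y_observed, w_real, w_pseudo):
    # w_real, w_pseudo: (N,) per-sample importance weights (meta-learned
    # in the paper, given here). y_observed: (N,) possibly noisy labels.
    pseudo = logits.detach().argmax(dim=-1)           # model's own targets
    ce_real = F.cross_entropy(logits, y_observed, reduction="none")
    ce_pseudo = F.cross_entropy(logits, pseudo, reduction="none")
    return (w_real * ce_real + w_pseudo * ce_pseudo).mean()
```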
arXiv Detail & Related papers (2022-02-09T05:57:08Z) - Two-stage Training for Learning from Label Proportions [18.78148397471913]
Learning from label proportions (LLP) aims at learning an instance-level classifier with label proportions in grouped training data.
We introduce the mixup strategy and symmetric cross-entropy to further reduce the label noise.
Our framework is model-agnostic, and demonstrates compelling performance improvement in extensive experiments.
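
For reference, a sketch of the symmetric cross-entropy term mentioned in the summary, following the standard formulation of Wang et al. (2019): a forward CE plus a reverse CE in which prediction and label swap roles. The weights `alpha`/`beta` and the clamp value are conventional defaults, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def symmetric_ce(logits, target, alpha=0.1, beta=1.0):
    # Forward CE plus reverse CE; the one-hot label is clamped so that
    # log(0) never occurs in the reverse term.
    ce = F.cross_entropy(logits, target)
    pred = logits.softmax(dim=-1)
    one_hot = F.one_hot(target, logits.size(-1)).float().clamp(min=1e-4)
    rce = -(pred * one_hot.log()).sum(dim=-1).mean()
    return alpha * ce + beta * rce
```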
arXiv Detail & Related papers (2021-05-22T03:55:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.