Weakly Supervised Classification Using Group-Level Labels
- URL: http://arxiv.org/abs/2108.07330v1
- Date: Mon, 16 Aug 2021 20:01:45 GMT
- Title: Weakly Supervised Classification Using Group-Level Labels
- Authors: Guruprasad Nayak, Rahul Ghosh, Xiaowei Jia, Vipin Kumar
- Abstract summary: We propose methods to use group-level binary labels as weak supervision to train instance-level binary classification models.
We model group-level labels as Class Conditional Noisy (CCN) labels for individual instances and use the noisy labels to regularize predictions of the model trained on the strongly-labeled instances.
- Score: 12.285265254225166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many applications, finding adequate labeled data to train predictive
models is a major challenge. In this work, we propose methods to use
group-level binary labels as weak supervision to train instance-level binary
classification models. Aggregate labels are common in several domains where
annotating on a group-level might be cheaper or might be the only way to
provide annotated data without infringing on privacy. We model group-level
labels as Class Conditional Noisy (CCN) labels for individual instances and use
the noisy labels to regularize predictions of the model trained on the
strongly-labeled instances. Our experiments on a real-world land cover
mapping application show the utility of the proposed method in leveraging
group-level labels, both in the presence and absence of class imbalance.
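The core idea lends itself to a short illustration. Below is a minimal sketch of a CCN-style regularized loss: standard cross-entropy on the strongly-labeled instances plus a noise-corrected term on instances that inherit their group's label. The function names and the noise rates (rho_pos, rho_neg) are illustrative assumptions, not the paper's exact formulation.
```python
# Minimal sketch of CCN-style regularization with group-level labels.
# rho_pos/rho_neg noise rates and lam are assumed hyperparameters.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ccn_regularized_loss(scores_strong, y_strong, scores_weak, y_group,
                         rho_pos=0.2, rho_neg=0.2, lam=0.5):
    """Cross-entropy on strongly-labeled instances plus a noise-corrected
    term on instances that inherit their group's (noisy) label."""
    eps = 1e-9
    p_s = sigmoid(scores_strong)
    strong_ce = -np.mean(y_strong * np.log(p_s + eps)
                         + (1 - y_strong) * np.log(1 - p_s + eps))

    # Forward correction: probability of observing the noisy group label
    # given the model's clean-instance prediction.
    p_w = sigmoid(scores_weak)
    p_noisy = (1 - rho_pos) * p_w + rho_neg * (1 - p_w)
    weak_ce = -np.mean(y_group * np.log(p_noisy + eps)
                       + (1 - y_group) * np.log(1 - p_noisy + eps))
    return strong_ce + lam * weak_ce
```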
Related papers
- Semi-Supervised Hierarchical Multi-Label Classifier Based on Local Information [1.6574413179773761]
The paper presents a semi-supervised hierarchical multi-label classifier based on local information (SSHMC-BLI).
SSHMC-BLI builds pseudo-labels for each unlabeled instance from the label paths of its labeled neighbors.
Experiments on 12 challenging functional genomics datasets show that using unlabeled data alongside labeled data can improve the performance of a supervised hierarchical classifier trained only on labeled data.
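A toy version of the neighbor-based pseudo-labeling described above might look as follows; the Euclidean distance and the agreement threshold are assumptions, not the authors' exact procedure.
```python
# Illustrative sketch of neighbor-based pseudo-labeling in the spirit
# of SSHMC-BLI (similarity measure and threshold are assumed).
import numpy as np

def pseudo_label(x_unlabeled, X_labeled, Y_paths, k=5, agree=0.6):
    """Y_paths is an (n, n_labels) binary matrix where each row encodes
    the label path (root to leaf) of one labeled instance."""
    d = np.linalg.norm(X_labeled - x_unlabeled, axis=1)
    nn = np.argsort(d)[:k]                 # k nearest labeled neighbors
    # A label enters the pseudo-path if enough neighbors share it.
    freq = Y_paths[nn].mean(axis=0)
    return (freq >= agree).astype(int)
```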
arXiv Detail & Related papers (2024-04-30T20:16:40Z)
- Substituting Data Annotation with Balanced Updates and Collective Loss in Multi-label Text Classification [19.592985329023733]
Multi-label text classification (MLTC) is the task of assigning multiple labels to a given text.
We study the MLTC problem in annotation-free and scarce-annotation settings, in which the magnitude of the available supervision signal is linear in the number of labels.
Our method follows three steps: (1) mapping input text to a set of preliminary label likelihoods by natural language inference using a pre-trained language model, (2) calculating a signed label dependency graph from label descriptions, and (3) updating the preliminary label likelihoods with message passing along the label dependency graph (a toy sketch of this step follows).
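As referenced in step (3) above, here is a toy sketch of refining label likelihoods by message passing over a signed dependency graph; the damped update rule is an assumption chosen for illustration.
```python
# Toy sketch of message passing over a signed label-dependency graph.
import numpy as np

def refine_likelihoods(p0, A_signed, steps=3, damping=0.5):
    """p0: (n_labels,) preliminary likelihoods in [0, 1].
    A_signed: (n_labels, n_labels) signed dependency weights, where
    positive entries mean co-occurrence and negative entries exclusion."""
    p = p0.copy()
    for _ in range(steps):
        # Each label receives signed support from every other label.
        msg = A_signed @ (2 * p - 1)        # map to [-1, 1] before mixing
        p = damping * p + (1 - damping) / (1 + np.exp(-msg))
    return p
```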
arXiv Detail & Related papers (2023-09-24T04:12:52Z)
- Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels to unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
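A minimal sketch of the instance-weighting idea: samples whose (possibly noisy) labels disagree with the model's own predictions receive smaller weights. The reliability proxy and the temperature are assumptions, not the paper's formulation.
```python
# Sketch of soft-curriculum instance weights based on label reliability.
import numpy as np

def soft_curriculum_weights(pred_probs, given_labels, temperature=0.5):
    """pred_probs: (n, n_classes) classifier softmax outputs.
    given_labels: (n,) possibly noisy integer labels."""
    # How much the model agrees with each given label.
    agreement = pred_probs[np.arange(len(given_labels)), given_labels]
    w = np.exp(agreement / temperature)
    return w / w.sum()                      # normalized instance weights
```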
arXiv Detail & Related papers (2023-07-17T08:31:59Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
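The class-aware idea can be sketched as per-class thresholds chosen so that the pseudo-label rate for each class matches an estimated class proportion; the quantile rule below is an illustrative choice, not the exact CAP procedure.
```python
# Sketch of class-aware pseudo-labeling via per-class quantile thresholds.
import numpy as np

def class_aware_pseudo_labels(probs, class_props):
    """probs: (n, n_labels) predicted probabilities on unlabeled data.
    class_props: (n_labels,) estimated positive proportion per class."""
    pseudo = np.zeros_like(probs, dtype=int)
    for c in range(probs.shape[1]):
        # Threshold at the (1 - prop) quantile so roughly prop * n
        # instances receive a positive pseudo-label for class c.
        thr = np.quantile(probs[:, c], 1.0 - class_props[c])
        pseudo[:, c] = (probs[:, c] >= thr).astype(int)
    return pseudo
```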
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z)
- An Effective Approach for Multi-label Classification with Missing Labels [8.470008570115146]
We propose a pseudo-label-based approach that reduces annotation cost without adding complexity to the classification networks.
By designing a novel loss function, we are able to relax the requirement that each instance must contain at least one positive label.
We show that our method can handle the imbalance between positive labels and negative labels, while still outperforming existing missing-label learning approaches.
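A toy version of such a relaxed loss is a masked binary cross-entropy in which missing entries are simply ignored, so an instance with no confirmed positive label still contributes through its known negatives. This is an illustration only; the paper's actual loss additionally addresses label imbalance.
```python
# Sketch of a masked multi-label loss: entries marked -1 (missing) are
# excluded, relaxing the at-least-one-positive requirement per instance.
import numpy as np

def masked_bce(probs, labels):
    """probs: (n, n_labels) predictions; labels in {1, 0, -1}, where
    -1 marks a missing annotation."""
    eps = 1e-9
    mask = labels != -1
    y = np.where(mask, labels, 0)
    ce = -(y * np.log(probs + eps) + (1 - y) * np.log(1 - probs + eps))
    return (ce * mask).sum() / mask.sum()   # average over observed entries
```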
arXiv Detail & Related papers (2022-10-24T23:13:57Z)
- Binary Classification with Positive Labeling Sources [71.37692084951355]
We propose WEAPO, a simple yet competitive weak supervision (WS) method for producing training labels without negative labeling sources.
We show WEAPO achieves the highest averaged performance on 10 benchmark datasets.
arXiv Detail & Related papers (2022-08-02T19:32:08Z)
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble of labels is adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels.
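One simple way to realize such ensemble labels is an exponential moving average of the network's predictions across epochs, which damps the noise of any single epoch; the decay value below is an assumption.
```python
# Sketch of ensemble pseudo-label updating via an EMA of predictions.
import numpy as np

def update_ensemble_labels(ensemble, current_probs, decay=0.9):
    """ensemble, current_probs: (n, n_classes) probability matrices."""
    ensemble = decay * ensemble + (1 - decay) * current_probs
    # Renormalize so each row remains a distribution.
    return ensemble / ensemble.sum(axis=1, keepdims=True)
```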
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
- Semi-Supervised Cascaded Clustering for Classification of Noisy Label Data [0.3441021278275805]
The performance of supervised classification techniques often deteriorates when the data has noisy labels.
Most of the approaches addressing the noisy label data rely on deep neural networks (DNN) that require huge datasets for classification tasks.
We propose a semi-supervised cascaded clustering algorithm to extract patterns and generate a cascaded tree of classes in such datasets.
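A rough sketch of one cascaded (recursive) clustering pass follows, assuming hypothetical stopping rules (a purity threshold and a maximum depth) that stand in for the paper's actual criteria.
```python
# Sketch of cascaded clustering: split, check label purity, recurse.
import numpy as np
from sklearn.cluster import KMeans

def cascade(X, y, depth=0, max_depth=3, purity=0.9):
    """y holds class labels with -1 for unlabeled points.
    Returns a list of (points, majority_label) leaves."""
    labeled = y != -1
    if depth == max_depth or len(X) < 4 or labeled.sum() == 0:
        return [(X, None if labeled.sum() == 0
                 else np.bincount(y[labeled]).argmax())]
    vals, counts = np.unique(y[labeled], return_counts=True)
    if counts.max() / counts.sum() >= purity:
        return [(X, vals[counts.argmax()])]     # pure enough: a leaf
    parts = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    leaves = []
    for c in (0, 1):
        leaves += cascade(X[parts == c], y[parts == c],
                          depth + 1, max_depth, purity)
    return leaves
```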
arXiv Detail & Related papers (2022-05-04T17:42:22Z) - Learning with Noisy Labels by Targeted Relabeling [52.0329205268734]
Crowdsourcing platforms are often used to collect datasets for training deep neural networks.
We propose an approach which reserves a fraction of annotations to explicitly relabel highly probable labeling errors.
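The selection step can be sketched as spending the reserved budget on the items whose current labels the model most strongly contradicts; the disagreement score here is an illustrative proxy.
```python
# Sketch of targeted relabeling: pick the most probable labeling errors.
import numpy as np

def pick_for_relabeling(probs, labels, budget):
    """probs: (n,) predicted P(y=1); labels: (n,) current binary labels.
    Returns indices of the `budget` most suspicious items."""
    disagreement = np.abs(probs - labels)   # near 1 => probable error
    return np.argsort(-disagreement)[:budget]
```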
arXiv Detail & Related papers (2021-10-15T20:37:29Z) - Evaluating Multi-label Classifiers with Noisy Labels [0.7868449549351487]
In the real world, it is more common to deal with noisy datasets than clean datasets.
We present a Context-Based Multi-Label-Classifier (CbMLC) that effectively handles noisy labels.
We show CbMLC yields substantial improvements over the previous methods in most cases.
arXiv Detail & Related papers (2021-02-16T19:50:52Z)