Active Refinement for Multi-Label Learning: A Pseudo-Label Approach
- URL: http://arxiv.org/abs/2109.14676v1
- Date: Wed, 29 Sep 2021 19:17:05 GMT
- Title: Active Refinement for Multi-Label Learning: A Pseudo-Label Approach
- Authors: Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi
Sugiyama
- Abstract summary: Multi-label learning (MLL) aims to associate a given instance with its relevant labels from a set of concepts.
Previous work on MLL has mainly focused on the setting where the concept set is assumed to be fixed.
Many real-world applications require introducing new concepts into the set to meet new demands.
- Score: 84.52793080276048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of multi-label learning (MLL) is to associate a given instance with
its relevant labels from a set of concepts. Previous work on MLL has mainly
focused on the setting where the concept set is assumed to be fixed, while many
real-world applications require introducing new concepts into the set to meet
new demands. One common need is to refine the original coarse concepts and
split them into finer-grained ones, where the refinement process typically
begins with limited labeled data for the finer-grained concepts. To address this
need, we formalize the problem into a special weakly supervised MLL problem to
not only learn the fine-grained concepts efficiently but also allow interactive
queries to strategically collect more informative annotations to further
improve the classifier. The key idea within our approach is to learn to assign
pseudo-labels to the unlabeled entries, and in turn leverage the pseudo-labels
to train the underlying classifier and to inform a better query strategy.
Experimental results demonstrate that our pseudo-label approach is able to
accurately recover the missing ground truth, boosting the prediction
performance significantly over the baseline methods and facilitating a
competitive active learning strategy.
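The key idea can be sketched as follows. This is a minimal, hypothetical illustration with numpy arrays standing in for the classifier's probabilities; the thresholds, budget, and uncertainty criterion are illustrative choices, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 100 instances, 4 fine-grained labels.
# Y holds 1 (relevant), 0 (irrelevant), or -1 (unlabeled entry).
n, k = 100, 4
scores = rng.random((n, k))          # stand-in for classifier probabilities
Y = np.full((n, k), -1)
labeled = rng.random((n, k)) < 0.2   # ~20% of entries start labeled
Y[labeled] = (scores[labeled] > 0.5).astype(int)

# Step 1: assign pseudo-labels to confident unlabeled entries only.
unlabeled = Y == -1
pseudo = np.where(scores > 0.8, 1, np.where(scores < 0.2, 0, -1))
Y_train = np.where(unlabeled, pseudo, Y)   # labeled entries stay fixed

# Step 2: query strategy -- ask the annotator about the entries the
# pseudo-labeler is least certain of (probability closest to 0.5).
uncertainty = -np.abs(scores - 0.5)
uncertainty[~unlabeled] = -np.inf          # never re-query known entries
budget = 5
flat = np.argsort(uncertainty, axis=None)[::-1][:budget]
queries = np.unravel_index(flat, Y.shape)  # (row, label) pairs to annotate
```

The refined `Y_train` would then retrain the classifier, and the loop repeats with the newly annotated entries.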
Related papers
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
This variant, known as Unreliable Partial Label Learning (UPLL), introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to help the model effectively withstand unreliable partial labels.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Semantic Contrastive Bootstrapping for Single-positive Multi-label Recognition [36.3636416735057]
We present a semantic contrastive bootstrapping (Scob) approach to gradually recover the cross-object relationships.
We then propose a recurrent semantic masked transformer to extract iconic object-level representations.
Extensive experimental results demonstrate that the proposed joint learning framework surpasses the state-of-the-art models.
arXiv Detail & Related papers (2023-07-15T01:59:53Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
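The summary does not spell out CAP's mechanism; one plausible sketch of class-aware thresholding (illustrative only, not the paper's exact algorithm) picks a separate score cutoff per class so that the pseudo-positive rate matches each class's estimated prior, rather than using one global threshold that favors frequent classes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: model scores for 200 unlabeled instances, 3 classes.
scores = rng.random((200, 3))
# Per-class positive proportions, e.g. estimated from the labeled set.
class_prior = np.array([0.5, 0.2, 0.1])

# Class-aware thresholding: cut each class at the (1 - prior) quantile
# of its own score distribution, so rare classes still get pseudo-positives.
thresholds = np.array([
    np.quantile(scores[:, c], 1.0 - class_prior[c])
    for c in range(scores.shape[1])
])
pseudo_labels = (scores >= thresholds).astype(int)
```

With a single global threshold, class 2 (prior 0.1) would receive almost no pseudo-positives; the per-class quantile keeps its pseudo-label rate near its prior.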
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Exploring Structured Semantic Prior for Multi Label Recognition with Incomplete Labels [60.675714333081466]
Multi-label recognition (MLR) with incomplete labels is very challenging.
Recent works strive to explore the image-to-label correspondence in vision-language models, i.e., CLIP, to compensate for insufficient annotations.
We advocate remedying the deficiency of label supervision for the MLR with incomplete labels by deriving a structured semantic prior.
arXiv Detail & Related papers (2023-03-23T12:39:20Z)
- Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning [72.3506897990639]
We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo) for few-shot classification.
PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks.
arXiv Detail & Related papers (2023-03-02T06:10:13Z)
- Beyond Semantic to Instance Segmentation: Weakly-Supervised Instance Segmentation via Semantic Knowledge Transfer and Self-Refinement [31.42799434158569]
Weakly-supervised instance segmentation (WSIS) is a more challenging task because instance-wise localization using only image-level labels is difficult.
We propose a novel approach that consists of two innovative components.
First, we design a semantic knowledge transfer to obtain pseudo instance labels by transferring the knowledge of weakly-supervised semantic segmentation (WSSS) to WSIS.
Second, we propose a self-refinement method that refines the pseudo instance labels in a self-supervised scheme and employs them for training in an online manner.
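One generic way to realize such online self-refinement is to blend the current pseudo-labels with the model's own (improving) predictions via an exponential moving average. This is a stand-in sketch with synthetic scalars; the paper's scheme operates on instance masks and is more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy initial pseudo-labels for 50 toy instances (values in [0, 1]).
n = 50
true = rng.integers(0, 2, n).astype(float)
pseudo = np.clip(true + rng.normal(0, 0.4, n), 0, 1)
init_error = np.abs(pseudo - true).mean()

alpha = 0.9  # EMA momentum: how much of the old pseudo-label to keep
for step in range(20):
    # Stand-in for model predictions that sharpen as training progresses
    # (a real model would be retrained on `pseudo` each round).
    preds = np.clip(true + rng.normal(0, 0.4 / (step + 1), n), 0, 1)
    # Online refinement: pseudo-labels drift toward the model's predictions.
    pseudo = alpha * pseudo + (1 - alpha) * preds

refined_error = np.abs(pseudo - true).mean()
```

As the model improves, the EMA gradually washes out the initial labeling noise while never letting a single bad prediction overwrite a pseudo-label outright.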
arXiv Detail & Related papers (2021-09-20T12:31:44Z)
- Towards Cross-Granularity Few-Shot Learning: Coarse-to-Fine Pseudo-Labeling with Visual-Semantic Meta-Embedding [13.063136901934865]
Few-shot learning aims at rapidly adapting to novel categories with only a handful of samples at test time.
In this paper, we advance the few-shot classification paradigm towards a more challenging scenario, i.e., cross-granularity few-shot classification.
We approximate the fine-grained data distribution by greedy clustering of each coarse-class into pseudo-fine-classes according to the similarity of image embeddings.
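The greedy clustering step could look roughly like this. This is an illustrative reading with synthetic embeddings; the paper's exact similarity measure, cluster sizes, and stopping rule may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

def greedy_cluster(embeddings, n_fine):
    """Greedily split one coarse class into pseudo-fine-classes:
    repeatedly seed a new cluster with an unassigned embedding and
    absorb its nearest unassigned neighbors (hypothetical criterion)."""
    n = len(embeddings)
    size = n // n_fine
    labels = np.full(n, -1)
    for c in range(n_fine):
        unassigned = np.flatnonzero(labels == -1)
        seed = unassigned[0]
        # Euclidean distance from the seed to every unassigned embedding.
        d = np.linalg.norm(embeddings[unassigned] - embeddings[seed], axis=1)
        take = unassigned[np.argsort(d)[:size]] if c < n_fine - 1 else unassigned
        labels[take] = c
    return labels

# Toy coarse class: 60 image embeddings drawn from 3 latent fine classes.
centers = rng.normal(0, 5, (3, 8))
emb = np.vstack([centers[i] + rng.normal(0, 0.5, (20, 8)) for i in range(3)])
pseudo_fine = greedy_cluster(emb, n_fine=3)
```

When the latent fine classes are well separated in embedding space, this greedy pass recovers them as pseudo-fine-classes, which can then serve as training targets for the fine-grained classifier.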
arXiv Detail & Related papers (2020-07-11T03:44:21Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.