General Partial Label Learning via Dual Bipartite Graph Autoencoder
- URL: http://arxiv.org/abs/2001.01290v2
- Date: Thu, 9 Sep 2021 14:40:19 GMT
- Title: General Partial Label Learning via Dual Bipartite Graph Autoencoder
- Authors: Brian Chen, Bo Wu, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang
- Abstract summary: We formulate a practical yet challenging problem: General Partial Label Learning (GPLL).
We propose a novel graph autoencoder, the Dual Bipartite Graph Autoencoder (DB-GAE).
- Score: 81.78871072599607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We formulate a practical yet challenging problem: General Partial Label
Learning (GPLL). Compared to the traditional Partial Label Learning (PLL)
problem, GPLL relaxes the supervision assumption from instance-level -- a label
set partially labels an instance -- to group-level: 1) a label set partially
labels a group of instances, where the within-group instance-label link
annotations are missing, and 2) cross-group links are allowed -- instances in a
group may be partially linked to the label set from another group. Such
ambiguous group-level supervision is more practical in real-world scenarios as
additional annotation on the instance-level is no longer required, e.g.,
face-naming in videos where the group consists of faces in a frame, labeled by
a name set in the corresponding caption. In this paper, we propose a novel
graph convolutional network (GCN) called Dual Bipartite Graph Autoencoder
(DB-GAE) to tackle the label ambiguity challenge of GPLL. First, we exploit the
cross-group correlations to represent the instance groups as dual bipartite
graphs: within-group and cross-group, which reciprocally complement each other
to resolve the linking ambiguities. Second, we design a GCN autoencoder to
encode and decode them, where the decodings are considered as the refined
results. It is worth noting that DB-GAE is self-supervised and transductive, as
it only uses the group-level supervision without a separate offline training
stage. Extensive experiments on two real-world datasets demonstrate that DB-GAE
significantly outperforms the best baseline by an absolute 0.159 F1-score and
24.8% accuracy. We further offer analysis on various levels of label
ambiguities.
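The encode-and-decode idea in the abstract (embed a bipartite instance-label graph with a GCN, then read the decoded link scores as refined instance-label assignments) can be illustrated with a minimal sketch. This is not the authors' DB-GAE, which uses dual within-group and cross-group graphs and self-supervised training; the single fixed GCN layer, inner-product decoder, and all names here are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_autoencode(B, X, W):
    """One-layer GCN encoder + inner-product decoder on a bipartite graph.

    B: (n_inst, n_label) biadjacency matrix of ambiguous instance-label links.
    X: (n_inst + n_label, d_in) node features (instances stacked above labels).
    W: (d_in, d_hid) GCN weight matrix.
    Returns an (n_inst, n_label) matrix of refined link scores in (0, 1).
    """
    n_inst, n_label = B.shape
    # Lift the biadjacency into a full symmetric adjacency over both node sets.
    A = np.zeros((n_inst + n_label, n_inst + n_label))
    A[:n_inst, n_inst:] = B
    A[n_inst:, :n_inst] = B.T
    H = np.maximum(normalize_adj(A) @ X @ W, 0.0)  # ReLU(A_norm X W)
    Z_inst, Z_label = H[:n_inst], H[n_inst:]
    scores = Z_inst @ Z_label.T                    # inner-product decoder
    return 1.0 / (1.0 + np.exp(-scores))           # sigmoid -> link probabilities

# Tiny example: 3 instances, 2 candidate labels, ambiguous group-level links.
rng = np.random.default_rng(0)
B = np.array([[1, 1], [1, 0], [0, 1]], dtype=float)
X = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 8))
S = gcn_autoencode(B, X, W)
print(S.shape)  # (3, 2): decoded instance-label link scores
```

In the sketch the decoded scores would be thresholded or argmaxed per instance to disambiguate the group-level label set; in DB-GAE the weights are learned and two complementary bipartite graphs are decoded jointly.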
Related papers
- Multi-Level Label Correction by Distilling Proximate Patterns for Semi-supervised Semantic Segmentation [16.75278876840937]
We propose an algorithm called Multi-Level Label Correction (MLLC) to rectify erroneous pseudo-labels.
MLLC can significantly improve supervised baselines and outperforms state-of-the-art approaches in different scenarios on Cityscapes and PASCAL VOC 2012 datasets.
arXiv Detail & Related papers (2024-04-02T16:06:20Z)
- Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z)
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
- G2NetPL: Generic Game-Theoretic Network for Partial-Label Image Classification [14.82038002764209]
Multi-label image classification aims to predict all possible labels in an image.
Existing works on partial-label learning focus on the case where each training image is labeled with only a subset of its positive/negative labels.
This paper proposes an end-to-end Generic Game-theoretic Network (G2NetPL) for partial-label learning.
arXiv Detail & Related papers (2022-10-20T17:59:21Z)
- Weakly Supervised Classification Using Group-Level Labels [12.285265254225166]
We propose methods to use group-level binary labels as weak supervision to train instance-level binary classification models.
We model group-level labels as Class Conditional Noisy (CCN) labels for individual instances and use the noisy labels to regularize predictions of the model trained on the strongly-labeled instances.
arXiv Detail & Related papers (2021-08-16T20:01:45Z)
- Group-aware Label Transfer for Domain Adaptive Person Re-identification [179.816105255584]
Unsupervised Domain Adaptation (UDA) person re-identification (ReID) aims at adapting the model trained on a labeled source-domain dataset to a target-domain dataset without any further annotations.
Most successful UDA-ReID approaches combine clustering-based pseudo-label prediction with representation learning and perform the two steps in an alternating fashion.
We propose a Group-aware Label Transfer (GLT) algorithm, which enables the online interaction and mutual promotion of pseudo-label prediction and representation learning.
arXiv Detail & Related papers (2021-03-23T07:57:39Z)
- SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation [88.22349093672975]
We design a weakly supervised point cloud segmentation algorithm that only requires clicking on one point per instance to indicate its location for annotation.
With over-segmentation for pre-processing, we extend these location annotations into segments as seg-level labels.
We show that our seg-level supervised method (SegGroup) achieves comparable results with the fully annotated point-level supervised methods.
arXiv Detail & Related papers (2020-12-18T13:23:34Z)
- An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels [49.036212158261215]
Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications.
Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs)
We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs.
We propose a new state-of-the-art method which combines BERT with LWANs.
arXiv Detail & Related papers (2020-10-04T18:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.