Network Cooperation with Progressive Disambiguation for Partial Label
Learning
- URL: http://arxiv.org/abs/2002.11919v1
- Date: Sat, 22 Feb 2020 09:50:39 GMT
- Title: Network Cooperation with Progressive Disambiguation for Partial Label
Learning
- Authors: Yao Yao, Chen Gong, Jiehui Deng, Jian Yang
- Abstract summary: Partial Label Learning (PLL) aims to train a classifier when each training instance is associated with a set of candidate labels, among which only one is correct but is not accessible during the training phase.
Existing methods ignore the disambiguation difficulty of instances and adopt a single-trend training mechanism.
This paper proposes a novel approach, "Network Cooperation with Progressive Disambiguation" (NCPD).
By employing artificial neural networks as the backbone, we utilize a network cooperation mechanism which trains two networks collaboratively by letting them interact with each other.
- Score: 37.05637357091572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial Label Learning (PLL) aims to train a classifier when each
training instance is associated with a set of candidate labels, among which
only one is correct but is not accessible during the training phase. The
common strategy for dealing with such ambiguous labeling information is to
disambiguate the candidate label sets. Nonetheless, existing methods ignore
the disambiguation difficulty of instances and adopt a single-trend training
mechanism. The former makes models vulnerable to false positive labels, and
the latter may give rise to the error accumulation problem. To remedy these
two drawbacks, this paper proposes a novel approach termed "Network
Cooperation with Progressive Disambiguation" (NCPD) for PLL. Specifically, we
devise a progressive disambiguation strategy in which disambiguation
operations are performed first on simple instances and then gradually on more
complicated ones. Therefore, the negative impact brought by the false
positive labels of complicated instances can be effectively mitigated, as the
model's disambiguation ability has already been strengthened by learning from
the simple instances. Moreover, by employing artificial neural networks as
the backbone, we utilize a network cooperation mechanism that trains two
networks collaboratively by letting them interact with each other. As the two
networks have different disambiguation abilities, such interaction helps both
networks reduce their respective disambiguation errors, and thus works much
better than existing algorithms with a single-trend training process.
Extensive experimental results on various benchmark and practical datasets
demonstrate the superiority of NCPD over other state-of-the-art PLL methods.
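The two mechanisms described in the abstract (easy-to-hard disambiguation on a growing schedule, and two networks that disambiguate each other's targets) can be sketched as follows. This is an illustrative toy reconstruction, not the authors' code: the linear "networks", the margin-based easiness measure, the schedule, and all hyperparameters and data are assumptions.

```python
# Illustrative toy reconstruction of the NCPD idea, NOT the authors' code:
# two classifiers cooperate, and candidate label sets are disambiguated from
# easy (large prediction margin) to hard on an increasing schedule.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class TinyNet:
    """Linear softmax classifier standing in for one of the two networks."""
    def __init__(self, d, c):
        self.W = rng.normal(scale=0.01, size=(d, c))
    def predict(self, X):
        return softmax(X @ self.W)
    def step(self, X, T, lr=0.2):
        # one gradient step on cross-entropy against soft targets T
        P = self.predict(X)
        self.W -= lr * X.T @ (P - T) / len(X)

def train_ncpd_sketch(X, cand, epochs=80):
    n, c = cand.shape
    nets = [TinyNet(X.shape[1], c) for _ in range(2)]
    # start from uniform soft labels over each instance's candidate set
    targets = [cand / cand.sum(axis=1, keepdims=True) for _ in range(2)]
    for t in range(epochs):
        frac = min(1.0, 1.5 * (t + 1) / epochs)   # easy-to-hard schedule
        for i, net in enumerate(nets):
            # cooperation: each network's targets are disambiguated by its
            # peer's predictions, restricted to the candidate set
            P = nets[1 - i].predict(X) * cand
            P = P / P.sum(axis=1, keepdims=True)
            top2 = np.sort(P, axis=1)[:, -2:]
            margin = top2[:, 1] - top2[:, 0]      # big margin = "easy"
            easy = margin >= np.quantile(margin, 1.0 - frac)
            targets[i][easy] = np.eye(c)[P[easy].argmax(axis=1)]
            net.step(X, targets[i])
    return nets

# toy PLL data: 3 Gaussian blobs; each candidate set = true label + 1 distractor
centers = np.array([[4.0, 0.0], [0.0, 4.0], [-3.0, -3.0]])
y = rng.integers(0, 3, size=300)
X = centers[y] + rng.normal(scale=0.7, size=(300, 2))
cand = np.eye(3)[y]
cand[np.arange(300), (y + rng.integers(1, 3, size=300)) % 3] = 1.0
acc = (train_ncpd_sketch(X, cand)[0].predict(X).argmax(axis=1) == y).mean()
```

On this separable toy data the cooperating pair recovers the hidden true labels from the ambiguous candidate sets; the point of the sketch is only the control flow (peer-driven targets, margin-ranked easiness, growing disambiguated fraction), not the model class.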
Related papers
- AsyCo: An Asymmetric Dual-task Co-training Model for Partial-label Learning [53.97072488455662]
Self-training models achieve state-of-the-art performance but suffer from the error accumulation problem caused by mistakenly disambiguated instances.
We propose an asymmetric dual-task co-training model called AsyCo, which forces its two networks, i.e., a disambiguation network and an auxiliary network, to learn from different views explicitly.
Experiments on both uniform and instance-dependent partially labeled datasets demonstrate the effectiveness of AsyCo.
arXiv Detail & Related papers (2024-07-21T02:08:51Z)
- Unsupervised Visible-Infrared Person ReID by Collaborative Learning with Neighbor-Guided Label Refinement [53.044703127757295]
Unsupervised visible-infrared person re-identification (USL-VI-ReID) aims to learn modality-invariant features from an unlabeled cross-modality dataset.
We propose a Dual Optimal Transport Label Assignment (DOTLA) framework to simultaneously assign the generated labels from one modality to its counterpart modality.
The proposed DOTLA mechanism formulates a mutually reinforcing and efficient solution to cross-modality data association, which effectively reduces the side effects of insufficient and noisy label associations.
arXiv Detail & Related papers (2023-05-22T04:40:30Z)
- Dual Clustering Co-teaching with Consistent Sample Mining for Unsupervised Person Re-Identification [13.65131691012468]
In unsupervised person Re-ID, the peer-teaching strategy, which leverages two networks to facilitate training, has proven effective in dealing with pseudo-label noise.
This paper proposes a novel Dual Clustering Co-teaching (DCCT) approach to handle this issue.
DCCT mainly exploits the features extracted by two networks to generate two sets of pseudo labels separately by clustering with different parameters.
arXiv Detail & Related papers (2022-10-07T06:04:04Z)
- Meta Objective Guided Disambiguation for Partial Label Learning [44.05801303440139]
We propose a novel framework for partial label learning with meta objective guided disambiguation (MoGD).
MoGD aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set.
The proposed method can be easily implemented using various deep networks with ordinary SGD.
arXiv Detail & Related papers (2022-08-26T06:48:01Z)
- Learning from Data with Noisy Labels Using Temporal Self-Ensemble [11.245833546360386]
Deep neural networks (DNNs) have an enormous capacity to memorize noisy labels.
Current state-of-the-art methods present a co-training scheme that trains dual networks using samples associated with small losses.
We propose a simple yet effective robust training scheme that operates by training only a single network.
arXiv Detail & Related papers (2022-07-21T08:16:31Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the absence of labels for the target-domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z)
- Two-phase Pseudo Label Densification for Self-training based Domain Adaptation [93.03265290594278]
We propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD.
In the first phase, we use sliding window voting to propagate the confident predictions, utilizing intrinsic spatial-correlations in the images.
In the second phase, we perform a confidence-based easy-hard classification.
To ease the training process and avoid noisy predictions, we introduce the bootstrapping mechanism to the original self-training loss.
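The first-phase voting described above can be illustrated with a toy sketch. This is not the TPLD authors' implementation; the function name, window size, and confidence threshold are assumptions, and it shows only the core idea of filling low-confidence pixels from confident spatial neighbors.

```python
# Illustrative sketch (not the TPLD authors' code) of sliding-window voting:
# low-confidence pixels in a pseudo-label map take the majority label of the
# confident pixels inside a local window, exploiting spatial correlation.
import numpy as np

def window_vote(labels, conf, thresh=0.8, k=1, n_classes=None):
    """labels: (H, W) int pseudo-labels; conf: (H, W) confidences in [0, 1].
    Returns a densified copy; inputs are left unmodified."""
    H, W = labels.shape
    if n_classes is None:
        n_classes = int(labels.max()) + 1
    out = labels.copy()
    for i in range(H):
        for j in range(W):
            if conf[i, j] >= thresh:
                continue  # already confident: keep the prediction as-is
            votes = np.zeros(n_classes)
            for di in range(-k, k + 1):     # scan the (2k+1) x (2k+1) window
                for dj in range(-k, k + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W and conf[ii, jj] >= thresh:
                        votes[labels[ii, jj]] += 1
            if votes.sum() > 0:
                out[i, j] = votes.argmax()  # propagate the local consensus
    return out
```

For example, a pixel predicted as class 0 with low confidence but surrounded by confidently predicted class-1 pixels is relabeled as class 1 by the window vote.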
arXiv Detail & Related papers (2020-12-09T02:35:25Z)
- Combating noisy labels by agreement: A joint training method with co-regularization [27.578738673827658]
We propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training.
We show that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
arXiv Detail & Related papers (2020-03-05T16:42:41Z)
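A JoCoR-style selection rule (joint training with co-regularization, as in the entry above) can be sketched as follows. This is a hedged illustration, not the paper's code: the per-example loss combines both networks' cross-entropy with a symmetric-KL agreement term, and only the smallest-loss fraction is kept for the joint update. The weight `lam` and `keep_ratio` are illustrative values, not the authors' hyperparameters.

```python
# Hedged sketch of JoCoR-style small-loss selection, NOT the authors' code.
import numpy as np

def jocor_select(p1, p2, y, lam=0.7, keep_ratio=0.7, eps=1e-12):
    """p1, p2: (n, c) softmax outputs of the two networks; y: (n,) labels.
    Returns indices of the small-loss examples used to update both networks."""
    n = len(y)
    ce1 = -np.log(p1[np.arange(n), y] + eps)   # supervised loss, network 1
    ce2 = -np.log(p2[np.arange(n), y] + eps)   # supervised loss, network 2
    # symmetric KL divergence: penalizes disagreement between the networks,
    # which reduces their diversity during training
    kl = np.sum(p1 * (np.log(p1 + eps) - np.log(p2 + eps)), axis=1) \
       + np.sum(p2 * (np.log(p2 + eps) - np.log(p1 + eps)), axis=1)
    loss = (1 - lam) * (ce1 + ce2) + lam * kl
    keep = int(keep_ratio * n)
    return np.argsort(loss)[:keep]  # small-loss examples are likely clean
```

Examples on which both networks are confident, correct, and in agreement get the smallest joint loss and are selected; confidently wrong or conflicting examples are filtered out of the update.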
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.