Group-aware Label Transfer for Domain Adaptive Person Re-identification
- URL: http://arxiv.org/abs/2103.12366v1
- Date: Tue, 23 Mar 2021 07:57:39 GMT
- Title: Group-aware Label Transfer for Domain Adaptive Person Re-identification
- Authors: Kecheng Zheng, Wu Liu, Lingxiao He, Tao Mei, Jiebo Luo, Zheng-Jun Zha
- Abstract summary: Unsupervised Domain Adaptive (UDA) person re-identification (ReID) aims at adapting the model trained on a labeled source-domain dataset to a target-domain dataset without any further annotations.
Most successful UDA-ReID approaches combine clustering-based pseudo-label prediction with representation learning and perform the two steps in an alternating fashion.
We propose a Group-aware Label Transfer (GLT) algorithm, which enables the online interaction and mutual promotion of pseudo-label prediction and representation learning.
- Score: 179.816105255584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Domain Adaptive (UDA) person re-identification (ReID) aims at
adapting the model trained on a labeled source-domain dataset to a
target-domain dataset without any further annotations. Most successful UDA-ReID
approaches combine clustering-based pseudo-label prediction with representation
learning and perform the two steps in an alternating fashion. However, offline
interaction between these two steps may allow noisy pseudo labels to
substantially hinder the capability of the model. In this paper, we propose a
Group-aware Label Transfer (GLT) algorithm, which enables the online
interaction and mutual promotion of pseudo-label prediction and representation
learning. Specifically, a label transfer algorithm simultaneously uses pseudo
labels to train the model while refining the pseudo labels as an online
clustering algorithm. It treats the online label refinery problem as an optimal
transport problem, which explores the minimum cost for assigning M samples to N
pseudo labels. More importantly, we introduce a group-aware strategy to assign
implicit attribute group IDs to samples. The combination of the online label
refining algorithm and the group-aware strategy can better correct noisy
pseudo labels in an online fashion and narrow down the search space of the
target identity. The effectiveness of the proposed GLT is demonstrated by the
experimental results (Rank-1 accuracy) for Market1501$\to$DukeMTMC (82.0\%) and
DukeMTMC$\to$Market1501 (92.2\%), remarkably closing the gap between
unsupervised and supervised performance on person re-identification.
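As a rough illustration of the optimal transport formulation described above, the sketch below refines cluster assignments with entropic Sinkhorn iterations. It is not the authors' GLT implementation: the feature and centroid inputs, the uniform marginals, and the hyper-parameters `eps` and `n_iters` are assumptions made for the example.

```python
# Minimal sketch (not the GLT code): pseudo-label refinement cast as entropic
# optimal transport, assigning M samples to N pseudo labels via Sinkhorn-Knopp
# scaling. Assumes L2-normalized sample features (M x d) and centroids (N x d).
import numpy as np

def sinkhorn_assign(feats, centroids, eps=0.05, n_iters=50):
    M, N = feats.shape[0], centroids.shape[0]
    cost = 1.0 - feats @ centroids.T          # cosine distance as transport cost
    K = np.exp(-cost / eps)                   # Gibbs kernel for entropic OT
    r = np.full(M, 1.0 / M)                   # row marginal: every sample is assigned
    c = np.full(N, 1.0 / N)                   # column marginal: balanced pseudo labels
    u = np.ones(M)
    for _ in range(n_iters):                  # alternate row/column scaling
        v = c / (K.T @ u)
        u = r / (K @ v)
    P = u[:, None] * K * v[None, :]           # soft transport plan (M x N)
    return P.argmax(axis=1)                   # hard refined pseudo labels

# Toy usage with random, normalized features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 128)); feats /= np.linalg.norm(feats, axis=1, keepdims=True)
cents = rng.normal(size=(50, 128));   cents /= np.linalg.norm(cents, axis=1, keepdims=True)
labels = sinkhorn_assign(feats, cents)
```

The group-aware strategy could be imitated on top of such an assignment by first predicting a coarse attribute group ID per sample and restricting the transport to centroids of the same group, which narrows down the search space of the target identity.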
Related papers
- Plug-and-Play Pseudo Label Correction Network for Unsupervised Person Re-identification [36.3733132520186]
We propose a graph-based pseudo label correction network (GLC) to refine the pseudo labels in the manner of supervised clustering.
GLC learns to rectify the initial noisy labels by means of the relationship constraints between samples on the k Nearest Neighbor graph.
Our method is widely compatible with various clustering-based methods and promotes the state-of-the-art performance consistently.
arXiv Detail & Related papers (2022-06-14T05:59:37Z)
- Refining Pseudo Labels with Clustering Consensus over Generations for Unsupervised Object Re-identification [84.72303377833732]
Unsupervised object re-identification targets at learning discriminative representations for object retrieval without any annotations.
We propose to estimate pseudo label similarities between consecutive training generations with clustering consensus and refine pseudo labels with temporally propagated and ensembled pseudo labels.
The proposed pseudo label refinery strategy is simple yet effective and can be seamlessly integrated into existing clustering-based unsupervised re-identification methods.
arXiv Detail & Related papers (2021-06-11T02:42:42Z)
- Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the lack of labels for the target-domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency in the combination of a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)
- Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost the ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
arXiv Detail & Related papers (2020-04-20T12:13:43Z)
- Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification [56.97651712118167]
Person re-identification (re-ID) aims at identifying the same person's images across different cameras.
Domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one.
We propose an unsupervised framework, Mutual Mean-Teaching (MMT), to learn better features from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels; a generic mean-teacher sketch follows this list.
arXiv Detail & Related papers (2020-01-06T12:42:58Z)
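As referenced in the Mutual Mean-Teaching entry above, on-line refined soft pseudo labels are typically produced by a temporally averaged (mean-teacher) network. The sketch below illustrates that general idea only; it is not the MMT authors' code, and the `teacher`/`student` models, `momentum`, and `temperature` values are assumed placeholders.

```python
# Hedged sketch: a mean-teacher style source of soft pseudo labels.
# The teacher is an exponential moving average (EMA) of the student, and its
# softened predictions supervise the student alongside noisy hard cluster labels.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # teacher <- momentum * teacher + (1 - momentum) * student, parameter-wise
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def soft_pseudo_label_loss(student_logits, teacher_logits, temperature=2.0):
    # cross entropy between student predictions and the teacher's soft pseudo labels
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```

In MMT itself, two such student/teacher pairs are trained jointly, with each student supervised by the soft predictions of the other pair's mean teacher.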