Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification
- URL: http://arxiv.org/abs/2007.10854v1
- Date: Tue, 21 Jul 2020 14:31:27 GMT
- Title: Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification
- Authors: Jianing Li, Shiliang Zhang
- Abstract summary: This paper jointly enforces visual and temporal consistency in the combination of a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both unsupervised ReID and unsupervised domain adaptive ReID tasks.
- Score: 64.37745443119942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptive person Re-IDentification (ReID) is challenging
because of the large domain gap between source and target domains, as well as
the lack of labeled data in the target domain. This paper tackles this
challenge through jointly enforcing visual and temporal consistency in the
combination of a local one-hot classification and a global multi-class
classification. The local one-hot classification assigns each image in a training
batch a distinct person ID, then adopts a Self-Adaptive Classification
(SAC) model to classify them. The global multi-class classification is achieved
by predicting labels on the entire unlabeled training set with the Memory-based
Temporal-guided Cluster (MTC). MTC predicts multi-class labels by considering
both visual similarity and temporal consistency to ensure the quality of label
prediction. The two classification models are combined in a unified framework,
which effectively leverages the unlabeled data for discriminative feature
learning. Experimental results on three large-scale ReID datasets demonstrate
the superiority of the proposed method in both unsupervised ReID and unsupervised
domain adaptive ReID tasks. For example, under the unsupervised setting, our method
outperforms recent unsupervised domain adaptive methods, which leverage more
labels for training.
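The MTC component above rests on fusing visual similarity with temporal consistency before pseudo labels are assigned. The snippet below is a minimal sketch of that general idea, assuming per-image feature vectors plus camera IDs and frame timestamps are available; the Gaussian time-gap model, the fusion weight `lam`, and the DBSCAN clustering step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def joint_distance(features, cam_ids, timestamps, lam=0.5, sigma=5000.0):
    # Visual cue: cosine distance between L2-normalized features.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    visual_dist = np.clip(1.0 - f @ f.T, 0.0, None)

    # Temporal cue (illustrative): cross-camera pairs observed within a
    # plausible time gap get a smaller distance; same-camera pairs keep a
    # neutral value so the visual cue dominates for them.
    dt = np.abs(timestamps[:, None] - timestamps[None, :])
    temporal_score = np.exp(-(dt ** 2) / (2.0 * sigma ** 2))
    same_cam = cam_ids[:, None] == cam_ids[None, :]
    temporal_dist = np.where(same_cam, 0.5, 1.0 - temporal_score)

    # Fuse the two cues; `lam` balances visual vs. temporal evidence.
    return (1.0 - lam) * visual_dist + lam * temporal_dist

def predict_pseudo_labels(features, cam_ids, timestamps, eps=0.6):
    # Cluster the whole unlabeled training set on the fused distance;
    # each cluster id then acts as a global multi-class pseudo label.
    dist = joint_distance(features, cam_ids, timestamps)
    np.fill_diagonal(dist, 0.0)
    return DBSCAN(eps=eps, min_samples=4, metric="precomputed").fit_predict(dist)
```

In the full framework, such cluster ids would supply the global multi-class targets, while the SAC branch keeps assigning batch-local one-hot labels.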
Related papers
- Fast One-Stage Unsupervised Domain Adaptive Person Search [17.164485293539833]
Unsupervised person search aims to localize a particular target person from a gallery set of scene images without annotations.
We propose a Fast One-stage Unsupervised person Search (FOUS) method which integrates complementary domain adaptation with label adaptation.
FOUS can achieve the state-of-the-art (SOTA) performance on two benchmark datasets, CUHK-SYSU and PRW.
arXiv Detail & Related papers (2024-05-05T07:15:47Z) - Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, i.e., Cycle Label-Consistent Network (CLCN), by exploiting the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Dual-Refinement: Joint Label and Feature Refinement for Unsupervised
Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the lack of labels for the target domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z) - Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost the ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
arXiv Detail & Related papers (2020-04-20T12:13:43Z) - Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain
Adaptation on Person Re-identification [56.97651712118167]
Person re-identification (re-ID) aims at identifying the same person's images across different cameras.
Domain diversities between different datasets pose an evident challenge for adapting a re-ID model trained on one dataset to another.
We propose an unsupervised framework, Mutual Mean-Teaching (MMT), to learn better features from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels.
arXiv Detail & Related papers (2020-01-06T12:42:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.