Unsupervised Domain Adaptation in Person re-ID via k-Reciprocal
Clustering and Large-Scale Heterogeneous Environment Synthesis
- URL: http://arxiv.org/abs/2001.04928v1
- Date: Tue, 14 Jan 2020 17:43:52 GMT
- Title: Unsupervised Domain Adaptation in Person re-ID via k-Reciprocal
Clustering and Large-Scale Heterogeneous Environment Synthesis
- Authors: Devinder Kumar, Parthipan Siva, Paul Marchwica and Alexander Wong
- Abstract summary: We introduce an unsupervised domain adaptation approach for person re-identification.
Experimental results show that the proposed ktCUDA and SHRED approach achieves an average improvement of +5.7 mAP in re-identification performance.
- Score: 76.46004354572956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An ongoing major challenge in computer vision is the task of person
re-identification, where the goal is to match individuals across different,
non-overlapping camera views. While recent success has been achieved via
supervised learning using deep neural networks, such methods have limited
widespread adoption due to the need for large-scale, customized data
annotation. As such, there has been a recent focus on unsupervised learning
approaches to mitigate the data annotation issue; however, current approaches
in the literature have limited performance compared to supervised learning
approaches, as well as limited applicability for adoption in new environments.
In this paper, we address the aforementioned challenges faced in person
re-identification for real-world, practical scenarios by introducing a novel,
unsupervised domain adaptation approach for person re-identification. This is
accomplished through the introduction of: i) k-reciprocal tracklet Clustering
for Unsupervised Domain Adaptation (ktCUDA) (for pseudo-label generation on the
target domain), and ii) Synthesized Heterogeneous RE-id Domain (SHRED) composed
of large-scale heterogeneous independent source environments (for improving
robustness and adaptability to a wide diversity of target environments).
Experimental results across four different image and video benchmark datasets
show that the proposed ktCUDA and SHRED approach achieves an average
improvement of +5.7 mAP in re-identification performance when compared to
existing state-of-the-art methods, and demonstrates better adaptability
to different types of environments.
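The abstract does not spell out the clustering step, but the general idea behind k-reciprocal clustering for pseudo-label generation can be sketched as follows. This is a minimal illustration rather than the paper's implementation: it assumes precomputed, L2-normalized target-domain tracklet embeddings, and the function name `k_reciprocal_pseudo_labels` and the neighborhood size `k` are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components


def k_reciprocal_pseudo_labels(features, k=20):
    """Assign pseudo-labels to unlabeled target-domain samples by clustering
    their mutual (k-reciprocal) nearest-neighbor graph.

    features : (N, D) array of L2-normalized embeddings, one per tracklet.
    k        : neighborhood size (illustrative default, not from the paper).
    """
    n = features.shape[0]
    # Pairwise Euclidean distances between all embeddings (fine for small N).
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    # Indices of the k nearest neighbors of each sample, excluding itself.
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]

    # Keep edge (i, j) only if the relation is reciprocal:
    # j is among i's k nearest neighbors AND i is among j's.
    neighbor_sets = [set(row) for row in knn]
    rows, cols = [], []
    for i in range(n):
        for j in knn[i]:
            if i in neighbor_sets[j]:
                rows.append(i)
                cols.append(j)

    # Connected components of the reciprocal graph become pseudo-identities.
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(graph, directed=False)
    return labels


if __name__ == "__main__":
    # Usage sketch: features would come from a source-pretrained re-ID backbone.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 128)).astype(np.float32)
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    pseudo = k_reciprocal_pseudo_labels(feats, k=10)
    print("number of pseudo-identities:", len(set(pseudo)))
```

In the paper's pipeline, pseudo-labels of this kind would then be used to adapt a source-trained re-ID model to the unlabeled target domain; the sketch above only covers how mutual nearest neighbors can be turned into cluster assignments.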
Related papers
- Unified Domain Generalization and Adaptation for Multi-View 3D Object Detection [14.837853049121687]
3D object detection leveraging multi-view cameras has demonstrated its practical and economic value in challenging vision tasks.
Typical supervised learning approaches face challenges in achieving satisfactory adaptation toward unseen and unlabeled target datasets.
We propose Unified Domain Generalization and Adaptation (UDGA), a practical solution to mitigate those drawbacks.
arXiv Detail & Related papers (2024-10-29T18:51:49Z)
- Robust Unsupervised Domain Adaptation by Retaining Confident Entropy via Edge Concatenation [7.953644697658355]
Unsupervised domain adaptation can mitigate the need for extensive pixel-level annotations to train semantic segmentation networks.
We introduce a novel approach to domain adaptation, leveraging the synergy of internal and external information within entropy-based adversarial networks.
We devised a probability-sharing network that integrates diverse information for more effective segmentation.
arXiv Detail & Related papers (2023-10-11T02:50:16Z)
- Unsupervised domain-adaptive person re-identification with multi-camera constraints [0.0]
We propose an environment-constrained adaptive network for reducing the domain gap.
The proposed method incorporates person-pair information obtained from the environment, without person identity labels, into model training.
We develop a method that appropriately selects, from each pair, the person that contributes to the performance improvement.
arXiv Detail & Related papers (2022-10-25T13:12:28Z)
- Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for the face recognition task, in which the source and target domains do not share any classes.
Our method effectively learns the discriminative target feature by aligning the feature domain globally and, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z)
- Feature Diversity Learning with Sample Dropout for Unsupervised Domain Adaptive Person Re-identification [0.0]
This paper proposes a new approach to learn the feature representation with better generalization ability through limiting noisy pseudo labels.
We put forward a new method, referred to as Feature Diversity Learning (FDL), under the classic mutual-teaching architecture.
Experimental results show that our proposed FDL-SD achieves the state-of-the-art performance on multiple benchmark datasets.
arXiv Detail & Related papers (2022-01-25T10:10:48Z)
- Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z)
- Unsupervised and self-adaptative techniques for cross-domain person re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it performs feature-learning adaptation from a model trained on a source to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.