Synthetic-To-Real Video Person Re-ID
- URL: http://arxiv.org/abs/2402.02108v3
- Date: Tue, 04 Feb 2025 06:00:12 GMT
- Title: Synthetic-To-Real Video Person Re-ID
- Authors: Xiangqun Zhang, Wei Feng, Ruize Han, Likai Wang, Linqi Song, Junhui Hou
- Abstract summary: Person re-identification (Re-ID) is an important task and has significant applications for public security and information forensics.
We investigate a novel and challenging setting of Re-ID, i.e., cross-domain video-based person Re-ID.
We utilize synthetic video datasets as the source domain for training and real-world videos for testing.
- Score: 57.937189569211505
- Abstract: Person re-identification (Re-ID) is an important task and has significant applications for public security and information forensics, which has progressed rapidly with the development of deep learning. In this work, we investigate a novel and challenging setting of Re-ID, i.e., cross-domain video-based person Re-ID. Specifically, we utilize synthetic video datasets as the source domain for training and real-world videos for testing, notably reducing the reliance on expensive real data acquisition and annotation. To harness the potential of synthetic data, we first propose a self-supervised domain-invariant feature learning strategy for both static and dynamic (temporal) features. Additionally, to enhance person identification accuracy in the target domain, we propose a mean-teacher scheme incorporating a self-supervised ID consistency loss. Experimental results across five real datasets validate the rationale behind cross-synthetic-real domain adaptation and demonstrate the efficacy of our method. Notably, we find, perhaps surprisingly, that synthetic data can outperform real data in this cross-domain scenario. The code and data are publicly available at https://github.com/XiangqunZhang/UDA_Video_ReID.
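The mean-teacher scheme mentioned in the abstract can be illustrated with a short sketch: the teacher model's weights are an exponential moving average (EMA) of the student's weights, and a consistency loss penalizes disagreement between their predictions. The function names, the plain-list weight representation, and the momentum value below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a mean-teacher update. Weights are represented as
# flat lists of floats for illustration; a real implementation would
# operate on model parameter tensors.

def ema_update(teacher_weights, student_weights, momentum=0.999):
    """Return teacher weights updated as an EMA of student weights."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_weights, student_weights)]

def consistency_loss(teacher_logits, student_logits):
    """Mean squared error between teacher and student predictions."""
    n = len(teacher_logits)
    return sum((t - s) ** 2 for t, s in zip(teacher_logits, student_logits)) / n

# Toy example: after each student optimizer step, refresh the teacher.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
teacher = ema_update(teacher, student, momentum=0.9)
# The teacher lags the student, smoothing out noisy per-step updates.
```

Because the teacher changes slowly, its predictions on unlabeled target-domain videos serve as stable targets for the student, which is the usual motivation for combining EMA teachers with a self-supervised consistency loss.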
Related papers
- Alice Benchmarks: Connecting Real World Re-Identification with the Synthetic [92.02220105679713]
We introduce the Alice benchmarks, large-scale datasets providing benchmarks and evaluation protocols to the research community.
Within the Alice benchmarks, two object re-ID tasks are offered: person and vehicle re-ID.
As an important feature of our real-world target, the clusterability of its training set is not manually guaranteed, making the benchmark closer to a real domain adaptation test scenario.
arXiv Detail & Related papers (2023-10-06T17:58:26Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances [76.34037366117234]
We introduce a new dataset called Robot Control Gestures (RoCoG-v2).
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation [42.2398858786125]
Deep learning in computer vision has achieved great success at the price of large-scale labeled training data.
The uncontrollable data collection process produces non-IID training and test data, where undesired duplication may exist.
To circumvent these issues, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z)
- Less is More: Learning from Synthetic Data with Fine-grained Attributes for Person Re-Identification [16.107661617441327]
Person re-identification (re-ID) plays an important role in applications such as public security and video surveillance.
Recently, learning from synthetic data has attracted attention from both academia and the public.
We construct and label a large-scale synthetic person dataset named FineGPR with fine-grained attribute distribution.
arXiv Detail & Related papers (2021-09-22T03:12:32Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- Semi-Supervised Domain Generalizable Person Re-Identification [74.75528879336576]
Existing person re-identification (re-id) methods struggle when deployed to a new, unseen scenario.
Recent efforts have been devoted to domain adaptive person re-id where extensive unlabeled data in the new scenario are utilized in a transductive learning manner.
We aim to explore multiple labeled datasets to learn generalized domain-invariant representations for person re-id.
arXiv Detail & Related papers (2021-08-11T06:08:25Z)
- Attention-based Adversarial Appearance Learning of Augmented Pedestrians [49.25430012369125]
We propose a method to synthesize realistic data for the pedestrian recognition task.
Our approach utilizes an attention mechanism driven by an adversarial loss to learn domain discrepancies.
Our experiments confirm that the proposed adaptation method is robust to such discrepancies and reveals both visual realism and semantic consistency.
arXiv Detail & Related papers (2021-07-06T15:27:00Z)
- Taking A Closer Look at Synthesis: Fine-grained Attribute Analysis for Person Re-Identification [15.388939933009668]
Person re-identification (re-ID) plays an important role in applications such as public security and video surveillance.
Recently, learning from synthetic data, which benefits from the popularity of synthetic data engines, has achieved remarkable performance.
This research helps us have a deeper understanding of the fundamental problems in person re-ID, which also provides useful insights for dataset building and future practical usage.
arXiv Detail & Related papers (2020-10-15T02:47:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.