CCUP: A Controllable Synthetic Data Generation Pipeline for Pretraining Cloth-Changing Person Re-Identification Models
- URL: http://arxiv.org/abs/2410.13567v1
- Date: Thu, 17 Oct 2024 14:04:02 GMT
- Title: CCUP: A Controllable Synthetic Data Generation Pipeline for Pretraining Cloth-Changing Person Re-Identification Models
- Authors: Yujian Zhao, Chengru Wu, Yinong Xu, Xuanzheng Du, Ruiyu Li, Guanglin Niu
- Abstract summary: Cloth-changing person re-identification (CC-ReID) is a critical and challenging research topic in computer vision.
Due to the high cost of constructing CC-ReID data, existing data-driven models are hard to train efficiently on the limited data available.
We propose a low-cost and efficient pipeline for generating controllable and high-quality synthetic data.
- Score: 6.892813084970311
- Abstract: Cloth-changing person re-identification (CC-ReID), also known as Long-Term Person Re-Identification (LT-ReID), is a critical and challenging research topic in computer vision that has recently garnered significant attention. However, due to the high cost of constructing CC-ReID data, existing data-driven models are hard to train efficiently on limited data, causing overfitting issues. To address this challenge, we propose a low-cost and efficient pipeline for generating controllable and high-quality synthetic data that simulates the surveillance of real scenarios specific to the CC-ReID task. In particular, we construct a new self-annotated CC-ReID dataset named Cloth-Changing Unreal Person (CCUP), containing 6,000 IDs, 1,179,976 images, 100 cameras, and 26.5 outfits per individual. Based on this large-scale dataset, we introduce an effective and scalable pretrain-finetune framework for enhancing the generalization capabilities of traditional CC-ReID models. Extensive experiments demonstrate that two typical models, namely TransReID and FIRe^2, when integrated into our framework, outperform other state-of-the-art models after pretraining on CCUP and finetuning on benchmarks such as PRCC, VC-Clothes, and NKUP. CCUP is available at: https://github.com/yjzhao1019/CCUP.
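The pretrain-finetune framework described in the abstract amounts to two standard stages: large-scale supervised pretraining on the synthetic CCUP data, followed by finetuning on a real CC-ReID benchmark. Below is a minimal, hedged sketch of that workflow in PyTorch; the dataset classes (`CCUPDataset`, `PRCCDataset`), hyperparameters, and the plain ID-classification loss are illustrative assumptions rather than the authors' released training code.

```python
# A minimal sketch of the two-stage pretrain-finetune workflow (not the
# authors' released code). Dataset classes and hyperparameters are
# hypothetical placeholders.
from torch import nn, optim
from torch.utils.data import DataLoader


def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
    """One supervised training stage with a plain ID-classification loss."""
    criterion = nn.CrossEntropyLoss()               # real CC-ReID pipelines usually add triplet/metric losses
    optimizer = optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, pids in loader:                 # pids = person identity labels
            loss = criterion(model(images), pids)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


# Stage 1: pretrain on the large synthetic CCUP dataset (hypothetical loader).
# ccup_loader = DataLoader(CCUPDataset(root="CCUP/"), batch_size=64, shuffle=True)
# train(reid_model, ccup_loader, epochs=60, lr=3e-4)

# Stage 2: finetune on a real benchmark such as PRCC, typically with a smaller
# learning rate and a re-initialized classifier head for the new identity set.
# prcc_loader = DataLoader(PRCCDataset(root="PRCC/"), batch_size=64, shuffle=True)
# train(reid_model, prcc_loader, epochs=30, lr=3e-5)
```

In practice, the classifier head would be replaced for the finetuning identity set, and CC-ReID models such as TransReID typically combine the ID loss with metric losses; the sketch only illustrates the two-stage structure.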
Related papers
- DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID [69.70281727931048]
We propose a novel data expansion framework to generate diverse images of individuals in varied attire.
We generate additional data for five benchmark CC-ReID datasets.
We obtain a large top-1 accuracy improvement of 11.3% by training CAL, a previous state-of-the-art (SOTA) method, with DLCR-generated data.
arXiv Detail & Related papers (2024-11-11T18:28:33Z) - On Feature Decorrelation in Cloth-Changing Person Re-identification [32.27835236681253]
Cloth-changing person re-identification (CC-ReID) poses a significant challenge in computer vision.
Traditional methods to achieve this involve integrating multi-modality data or employing manually annotated clothing labels.
We introduce a novel regularization technique based on density ratio estimation.
arXiv Detail & Related papers (2024-10-07T22:25:37Z) - CCDM: Continuous Conditional Diffusion Models for Image Generation [22.70942688582302]
Continuous Conditional Generative Modeling (CCGM) aims to estimate the distribution of high-dimensional data, typically images, conditioned on scalar continuous variables.
Although existing Continuous Conditional Generative Adversarial Networks (CcGANs) were initially designed for this task, their adversarial training mechanism remains vulnerable to extremely sparse or imbalanced data.
To enhance the quality of generated images, a promising alternative is to replace CcGANs with Conditional Diffusion Models (CDMs).
arXiv Detail & Related papers (2024-05-06T15:10:19Z) - OC4-ReID: Occluded Cloth-Changing Person Re-Identification [8.054546048450414]
Occluded Cloth-Changing Person Re-Identification (OC4-ReID) is a new method for retrieving specific pedestrians when their clothing has changed.
OC4-ReID simultaneously addresses two challenges of clothing changes and occlusion.
Comprehensive experiments on the proposed datasets, as well as on two CC-ReID benchmark datasets, demonstrate the superior performance of the proposed method against other state-of-the-art methods.
arXiv Detail & Related papers (2024-03-13T14:08:45Z) - Contrastive Multiple Instance Learning for Weakly Supervised Person ReID [50.04900262181093]
We introduce Contrastive Multiple Instance Learning (CMIL), a novel framework tailored for more effective weakly supervised ReID.
CMIL distinguishes itself by requiring only a single model and no pseudo labels while leveraging contrastive losses.
We release the WL-MUDD dataset, an extension of the MUDD dataset featuring naturally occurring weak labels from the real-world application at PerformancePhoto.co.
arXiv Detail & Related papers (2024-02-12T14:48:31Z) - How Realistic Is Your Synthetic Data? Constraining Deep Generative
Models for Tabular Data [57.97035325253996]
We show how Constrained Deep Generative Models (C-DGMs) can be transformed into realistic synthetic data models.
C-DGMs are able to exploit the background knowledge expressed by the constraints to outperform their standard counterparts.
arXiv Detail & Related papers (2024-02-07T13:22:05Z) - Retrieval-Enhanced Contrastive Vision-Text Models [61.783728119255365]
We propose to equip vision-text models with the ability to refine their embedding with cross-modal retrieved information from a memory at inference time.
Remarkably, we show that this can be done with a lightweight, single-layer fusion transformer on top of a frozen CLIP (see the illustrative sketch after this related-papers list).
Our experiments validate that our retrieval-enhanced contrastive (RECO) training improves CLIP performance substantially on several challenging fine-grained tasks.
arXiv Detail & Related papers (2023-06-12T15:52:02Z) - The CLEAR Benchmark: Continual LEArning on Real-World Imagery [77.98377088698984]
Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI.
We introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts.
We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms.
arXiv Detail & Related papers (2022-01-17T09:09:09Z) - Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present a large-scale unlabeled person re-identification (Re-ID) dataset, "LUPerson".
We make the first attempt of performing unsupervised pre-training for improving the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z)