CCPA: Long-term Person Re-Identification via Contrastive Clothing and Pose Augmentation
- URL: http://arxiv.org/abs/2402.14454v1
- Date: Thu, 22 Feb 2024 11:16:34 GMT
- Title: CCPA: Long-term Person Re-Identification via Contrastive Clothing and Pose Augmentation
- Authors: Vuong D. Nguyen and Shishir K. Shah
- Abstract summary: Long-term Person Re-Identification aims at matching an individual across cameras after a long period of time.
We propose CCPA, a Contrastive Clothing and Pose Augmentation framework for LRe-ID.
- Score: 2.1756081703276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-term Person Re-Identification (LRe-ID) aims at matching an individual
across cameras after a long period of time, presenting variations in clothing,
pose, and viewpoint. In this work, we propose CCPA: Contrastive Clothing and
Pose Augmentation framework for LRe-ID. Beyond appearance, CCPA captures
cloth-invariant body shape information using a Relation Graph Attention Network.
Training a robust LRe-ID model requires a wide range of clothing variations and
expensive clothing labels, which current LRe-ID datasets lack. To address this,
we perform clothing and pose transfer across identities to generate images with
more clothing variations and of different persons wearing similar clothing. The
augmented batch of images serves as input to our proposed Fine-grained
Contrastive Losses, which not only supervise the
Re-ID model to learn discriminative person embeddings under long-term scenarios
but also ensure in-distribution data generation. Results on LRe-ID datasets
demonstrate the effectiveness of our CCPA framework.
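As a concrete, simplified picture of what a clothing-aware contrastive objective over such an augmented batch could look like, here is a minimal PyTorch sketch. The function name, the two loss terms, and the weighting are assumptions for illustration only, not CCPA's actual Fine-grained Contrastive Losses.

```python
import torch
import torch.nn.functional as F


def clothing_aware_contrastive_loss(emb, pid, cid, tau=0.07, lam=0.5):
    """Sketch of a clothing-aware contrastive objective (not CCPA's exact losses).

    emb: (B, D) embeddings of the clothing/pose-augmented batch
    pid: (B,)  person identity labels
    cid: (B,)  clothing labels (one identity may appear in several outfits)
    """
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t() / tau                       # temperature-scaled cosine similarity
    b = emb.size(0)
    eye = torch.eye(b, dtype=torch.bool, device=emb.device)

    same_id = pid.unsqueeze(0) == pid.unsqueeze(1)
    same_cloth = cid.unsqueeze(0) == cid.unsqueeze(1)

    # Term 1: supervised InfoNCE -- pull together same-identity pairs across
    # *different* clothing, against every other sample in the batch.
    pos = (same_id & ~same_cloth & ~eye).float()
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, -1e9), dim=1, keepdim=True)
    id_loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)

    # Term 2: push apart different identities wearing the same clothing --
    # exactly the confusion that clothing transfer deliberately creates.
    hard_neg = (~same_id & same_cloth).float()
    push_loss = (torch.relu(sim * tau) * hard_neg).sum(1) / hard_neg.sum(1).clamp(min=1)

    return (id_loss + lam * push_loss).mean()


# Toy usage: 8 augmented images, 2 identities, each seen in 2 outfits.
emb = torch.randn(8, 128, requires_grad=True)
pid = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
cid = torch.tensor([0, 1, 0, 1, 2, 3, 2, 3])
clothing_aware_contrastive_loss(emb, pid, cid).backward()
```

The point of restricting positives to cross-clothing pairs of the same identity is that this is the matching case long-term Re-ID actually needs to solve.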
Related papers
- DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID [69.70281727931048]
We propose a novel data expansion framework to generate diverse images of individuals in varied attire.
We generate additional data for five benchmark CC-ReID datasets.
We obtain a large top-1 accuracy improvement of 11.3% by training CAL, a previous state-of-the-art (SOTA) method, with DLCR-generated data.
arXiv Detail & Related papers (2024-11-11T18:28:33Z)
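DLCR's actual pipeline (prompt construction, masking, filtering) is described in that paper; purely as a sketch of the general recipe, i.e. text-conditioned inpainting of the clothing region with an off-the-shelf diffusion model, something like the following could turn one Re-ID image plus a clothing mask into several varied-attire versions. The model id, file names, and prompts below are placeholders, not values from DLCR.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Off-the-shelf inpainting model (placeholder choice, not the one used by DLCR).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

person = Image.open("person.jpg").convert("RGB").resize((512, 512))        # hypothetical input image
cloth_mask = Image.open("cloth_mask.png").convert("L").resize((512, 512))  # white = clothing region to repaint

prompts = [
    "a person wearing a red hoodie",
    "a person wearing a blue denim jacket",
    "a person wearing a white t-shirt",
]

for i, prompt in enumerate(prompts):
    # Pixels outside the mask (face, body, background) are kept; only clothing is regenerated.
    out = pipe(prompt=prompt, image=person, mask_image=cloth_mask).images[0]
    out.save(f"augmented_{i}.jpg")
```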
- Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present a novel paradigm Diffusion-ReID to efficiently augment and generate diverse images based on known identities.
Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset Diff-Person, which consists of over 777K images from 5,183 identities.
arXiv Detail & Related papers (2024-06-10T06:26:03Z)
- Semantic-aware Consistency Network for Cloth-changing Person Re-Identification [8.885551377703944]
We present a Semantic-aware Consistency Network (SCNet) to learn identity-related semantic features.
We generate the black-clothing image by erasing pixels in the clothing area.
We further design a semantic consistency loss to facilitate the learning of high-level identity-related semantic features.
arXiv Detail & Related papers (2023-08-27T14:07:57Z)
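The clothing-erasure step above is easy to picture: given a human-parsing label map, zero out the pixels labelled as clothing. A minimal sketch follows; the label ids and file names are assumptions, not SCNet's actual preprocessing.

```python
import numpy as np
from PIL import Image

# Hypothetical label ids for clothing classes in a single-channel human-parsing map
# (the real ids depend on the parser, e.g. the LIP or ATR label sets).
CLOTH_LABELS = [5, 6, 7, 9, 10, 12]


def make_black_clothing_image(image_path, parsing_path):
    """Erase (zero out) clothing pixels to produce the 'black-clothing' image."""
    img = np.asarray(Image.open(image_path).convert("RGB")).copy()   # (H, W, 3)
    parsing = np.asarray(Image.open(parsing_path))                   # (H, W) label map
    cloth_mask = np.isin(parsing, CLOTH_LABELS)                      # True where clothing
    img[cloth_mask] = 0                                              # black out the clothing area
    return Image.fromarray(img)


black_img = make_black_clothing_image("person.jpg", "person_parsing.png")
black_img.save("person_black_clothing.jpg")
```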
- Unsupervised Long-Term Person Re-Identification with Clothes Change [46.54514001691254]
We investigate unsupervised person re-identification (Re-ID) with clothes change.
Most existing re-id methods artificially assume the clothes of every single person to be stationary across space and time.
We introduce a novel Curriculum Person Clustering (CPC) method that can adaptively regulate the unsupervised clustering criterion.
arXiv Detail & Related papers (2022-02-07T11:55:23Z)
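The abstract does not say how CPC regulates the clustering criterion, so the following is only an illustration of the general idea of a clustering curriculum: start with a strict DBSCAN radius so early pseudo-labels are high-precision, then relax it as the features improve. The schedule, parameters, and function name are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def curriculum_cluster(features, epoch, total_epochs, eps_start=0.4, eps_end=0.7):
    """Illustrative curriculum: relax the DBSCAN radius as training progresses,
    so early epochs form small, high-precision clusters and later epochs merge more."""
    t = epoch / max(total_epochs - 1, 1)
    eps = eps_start + t * (eps_end - eps_start)
    labels = DBSCAN(eps=eps, min_samples=4, metric="cosine").fit_predict(features)
    return labels, eps


# Toy usage: random unit-norm features standing in for Re-ID embeddings.
feats = np.random.randn(1000, 128).astype(np.float32)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
for epoch in (0, 10, 19):
    labels, eps = curriculum_cluster(feats, epoch, total_epochs=20)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"epoch {epoch}: eps={eps:.2f}, clusters={n_clusters}")
```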
- Cloth-Changing Person Re-identification from A Single Image with Gait Prediction and Regularization [65.50321170655225]
We introduce Gait recognition as an auxiliary task to drive the Image ReID model to learn cloth-agnostic representations.
Experiments on image-based Cloth-Changing ReID benchmarks, e.g., LTCC, PRCC, Real28, and VC-Clothes, demonstrate that GI-ReID performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2021-03-29T12:10:50Z)
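GI-ReID's gait stream is more elaborate than this, but the basic pattern of "an auxiliary task drives cloth-agnostic features" can be sketched as a shared backbone with an identity head plus an auxiliary gait-regression head trained jointly. The module names, target dimensionality, and loss weight below are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
from torchvision import models


class ReIDWithAuxiliaryGait(nn.Module):
    """Shared backbone; one head classifies identity, one auxiliary head predicts a
    gait-derived target, pushing the backbone toward cloth-agnostic cues."""

    def __init__(self, num_ids, gait_dim=64):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()                 # expose the 2048-d global feature
        self.backbone = backbone
        self.id_head = nn.Linear(2048, num_ids)     # identity classification
        self.gait_head = nn.Linear(2048, gait_dim)  # regress an assumed gait descriptor

    def forward(self, x):
        f = self.backbone(x)
        return self.id_head(f), self.gait_head(f), f


model = ReIDWithAuxiliaryGait(num_ids=150)
imgs = torch.randn(4, 3, 256, 128)
pids = torch.randint(0, 150, (4,))
gait_target = torch.randn(4, 64)                    # e.g. produced by a separate gait model (assumption)

id_logits, gait_pred, _ = model(imgs)
loss = nn.functional.cross_entropy(id_logits, pids) + 0.5 * nn.functional.mse_loss(gait_pred, gait_target)
loss.backward()
```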
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation in cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long-duration, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
- Learning Shape Representations for Clothing Variations in Person Re-Identification [34.559050607889816]
Person re-identification (re-ID) aims to recognize instances of the same person contained in multiple images taken across different cameras.
We propose a novel representation learning model which is able to generate a body shape feature representation without being affected by clothing color or patterns.
Case-Net learns a representation of identity that depends only on body shape via adversarial learning and feature disentanglement.
arXiv Detail & Related papers (2020-03-16T17:23:50Z)
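Case-Net's exact architecture is not given here; one common way to make a feature depend on body shape rather than clothing color, in the spirit of adversarial learning and feature disentanglement, is a gradient-reversal layer in front of a clothing-attribute classifier. The sketch below illustrates that generic pattern with assumed layer sizes and label counts.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)


# The identity branch trains normally on the shared feature, while a clothing-color
# classifier sees the gradient-reversed feature, pushing the shared feature to carry
# no clothing-color information.
feature_net = nn.Sequential(nn.Linear(2048, 512), nn.ReLU())
id_classifier = nn.Linear(512, 100)        # 100 identities (assumed)
color_classifier = nn.Linear(512, 10)      # 10 clothing-color classes (assumed)

x = torch.randn(8, 2048)                   # backbone features
pid = torch.randint(0, 100, (8,))
color = torch.randint(0, 10, (8,))

feat = feature_net(x)
loss = nn.functional.cross_entropy(id_classifier(feat), pid) \
     + nn.functional.cross_entropy(color_classifier(grad_reverse(feat)), color)
loss.backward()
```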
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.