DIFFER: Disentangling Identity Features via Semantic Cues for Clothes-Changing Person Re-ID
- URL: http://arxiv.org/abs/2503.22912v1
- Date: Fri, 28 Mar 2025 23:40:59 GMT
- Title: DIFFER: Disentangling Identity Features via Semantic Cues for Clothes-Changing Person Re-ID
- Authors: Xin Liang, Yogesh S Rawat
- Abstract summary: Clothes-changing person re-identification (CC-ReID) aims to recognize individuals under different clothing scenarios. Current CC-ReID approaches either model body shape with additional modalities such as silhouette, pose, and body mesh, or incorporate supervision through extra labels such as clothing or personal attributes. We propose DIFFER: Disentangle Identity Features From Entangled Representations, a novel adversarial learning method that leverages textual descriptions to disentangle identity features.
- Score: 15.204652332980672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Clothes-changing person re-identification (CC-ReID) aims to recognize individuals under different clothing scenarios. Current CC-ReID approaches either concentrate on modeling body shape using additional modalities such as silhouette, pose, and body mesh, which can cause the model to overlook other critical biometric traits such as gender, age, and style, or they incorporate supervision through additional labels that the model should disregard or emphasize, such as clothing or personal attributes. However, these annotations are discrete in nature and do not capture comprehensive descriptions. In this work, we propose DIFFER: Disentangle Identity Features From Entangled Representations, a novel adversarial learning method that leverages textual descriptions to disentangle identity features. Recognizing that image features inherently mix inseparable information, DIFFER introduces NBDetach, a mechanism designed for feature disentanglement that leverages the separable nature of text descriptions as supervision. It partitions the feature space into distinct subspaces and, through gradient reversal layers, effectively separates identity-related features from non-biometric features. We evaluate DIFFER on four benchmark datasets (LTCC, PRCC, CelebReID-Light, and CCVID), achieving state-of-the-art performance across all of them. DIFFER consistently outperforms the baseline method, with top-1 accuracy improvements of 3.6% on LTCC, 3.4% on PRCC, 2.5% on CelebReID-Light, and 1% on CCVID. Our code can be found here.
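The gradient-reversal pattern at the heart of NBDetach is easiest to see in code. Below is a minimal, hypothetical PyTorch sketch of that pattern: shared backbone features are projected into an identity subspace and a non-biometric subspace, with a gradient reversal layer in front of the non-biometric head so that its (e.g., text-supervised) training signal pushes the shared representation to shed non-biometric cues. Module names, dimensions, and head structure here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the shared features.
        return -ctx.lambd * grad_output, None


class DisentangleHeads(nn.Module):
    """Hypothetical sketch: project shared features into an identity subspace
    and a non-biometric subspace. The non-biometric head trains normally on
    its targets, but the reversal layer flips the gradient it sends to the
    backbone, pushing the shared features to discard that cue."""

    def __init__(self, feat_dim=768, id_dim=512, nb_dim=256, lambd=1.0):
        super().__init__()
        self.id_head = nn.Linear(feat_dim, id_dim)  # identity-related subspace
        self.nb_head = nn.Linear(feat_dim, nb_dim)  # non-biometric subspace
        self.lambd = lambd

    def forward(self, feats):
        id_feat = self.id_head(feats)
        nb_feat = self.nb_head(GradReverse.apply(feats, self.lambd))
        return id_feat, nb_feat


# Usage: an identity loss on id_feat trains the backbone normally, while a
# non-biometric loss on nb_feat reaches the backbone with its sign flipped.
heads = DisentangleHeads()
id_feat, nb_feat = heads(torch.randn(8, 768))
```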
Related papers
- See What You Seek: Semantic Contextual Integration for Cloth-Changing Person Re-Identification [16.845045499676793]
Cloth-changing person re-identification (CC-ReID) aims to match individuals across multiple surveillance cameras despite variations in clothing. Existing methods typically focus on mitigating the effects of clothing changes or enhancing ID-relevant features. We propose a novel prompt learning framework, Semantic Contextual Integration (SCI), for CC-ReID.
arXiv Detail & Related papers (2024-12-02T10:11:16Z)
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN).
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z)
- Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification [74.10897798660314]
Cloth-changing person re-identification aims at recognizing the same person with clothing changes across non-overlapping cameras. We propose a unified Semantics Mining and Refinement (SMR) module to extract robust identity-related content and salient semantics, mitigating interference from clothing appearances effectively. Our proposed method achieves state-of-the-art performance on three cloth-changing benchmarks, demonstrating its superiority over advanced competitors.
arXiv Detail & Related papers (2024-05-26T15:17:28Z)
- Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification [13.709863134725335]
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing.
Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features.
We propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task.
arXiv Detail & Related papers (2024-03-13T05:46:36Z)
- Masked Attribute Description Embedding for Cloth-Changing Person Re-identification [66.53045140286987]
Cloth-changing person re-identification (CC-ReID) aims to match persons who change clothes over long periods.
The key challenge in CC-ReID is to extract clothing-independent features, such as face, hairstyle, body shape, and gait.
We propose a Masked Attribute Description Embedding (MADE) method that unifies personal visual appearance and attribute description for CC-ReID.
arXiv Detail & Related papers (2024-01-11T03:47:13Z)
- Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification [78.52704557647438]
We propose a novel FIne-grained Representation and Recomposition (FIRe$^{2}$) framework to tackle both limitations without any auxiliary annotation or data.
Experiments demonstrate that FIRe$^{2}$ can achieve state-of-the-art performance on five widely-used cloth-changing person Re-ID benchmarks.
arXiv Detail & Related papers (2023-08-21T12:59:48Z)
- Identity-Aware Semi-Supervised Learning for Comic Character Re-Identification [2.4624325014867763]
We introduce a robust framework that combines metric learning with a novel 'Identity-Aware' self-supervision method.
Our approach involves processing both facial and bodily features within a unified network architecture.
By extensively validating our method using in-series and inter-series evaluation metrics, we demonstrate its effectiveness in consistently re-identifying comic characters.
arXiv Detail & Related papers (2023-08-17T16:48:41Z)
- Clothes-Invariant Feature Learning by Causal Intervention for Clothes-Changing Person Re-identification [118.23912884472794]
Clothes-invariant feature extraction is critical to clothes-changing person re-identification (CC-ReID).
We argue that there exists a strong spurious correlation between clothes and human identity, which prevents the common likelihood-based ReID objective P(Y|X) from extracting clothes-irrelevant features (see the formulation after this entry).
We propose a new Causal Clothes-Invariant Learning (CCIL) method to achieve clothes-invariant feature learning.
arXiv Detail & Related papers (2023-05-10T13:48:24Z)
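To unpack the causal framing in the entry above: fitting the likelihood P(Y|X) absorbs the spurious clothes-identity correlation, whereas intervention-based methods instead target P(Y|do(X)). The standard route is backdoor adjustment over a confounding clothes variable Z; the generic formulation below is what such methods typically instantiate, and is an assumption here rather than the exact estimator CCIL uses.

```latex
% Backdoor adjustment: stratify over the clothes confounder Z so that
% identity prediction no longer rides on the clothes-identity correlation.
\[
  P(Y \mid \mathrm{do}(X)) \;=\; \sum_{z} P(Y \mid X,\, Z = z)\, P(Z = z)
\]
```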
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We make two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z)
- Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) method to integrate the underlying information present in facial properties.
Our DCML method consistently outperforms state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z)