Discriminative Pedestrian Features and Gated Channel Attention for Clothes-Changing Person Re-Identification
- URL: http://arxiv.org/abs/2410.21663v1
- Date: Tue, 29 Oct 2024 02:12:46 GMT
- Title: Discriminative Pedestrian Features and Gated Channel Attention for Clothes-Changing Person Re-Identification
- Authors: Yongkang Ding, Rui Mao, Hanyue Zhu, Anqi Wang, Liyan Zhang
- Abstract summary: Clothes-Changing Person Re-Identification (CC-ReID) has become increasingly significant.
This paper proposes an innovative method for disentangled feature extraction, effectively extracting discriminative features from pedestrian images.
Experiments conducted on two standard CC-ReID datasets validate the effectiveness of the proposed approach.
- Score: 8.289726210177532
- License:
- Abstract: In public safety and social life, the task of Clothes-Changing Person Re-Identification (CC-ReID) has become increasingly significant. However, this task faces considerable challenges due to appearance changes caused by clothing alterations. Addressing this issue, this paper proposes an innovative method for disentangled feature extraction, effectively extracting discriminative features from pedestrian images that are invariant to clothing. This method leverages pedestrian parsing techniques to identify and retain features closely associated with individual identity while disregarding the variable nature of clothing attributes. Furthermore, this study introduces a gated channel attention mechanism, which, by adjusting the network's focus, aids the model in more effectively learning and emphasizing features critical for pedestrian identity recognition. Extensive experiments conducted on two standard CC-ReID datasets validate the effectiveness of the proposed approach, with performance surpassing current leading solutions. The Top-1 accuracy under clothing change scenarios on the PRCC and VC-Clothes datasets reached 64.8% and 83.7%, respectively.
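The abstract's gated channel attention can be pictured as a squeeze-excitation-style gate: a global channel descriptor is scored by a small bottleneck network, and a sigmoid gate re-weights each channel so identity-relevant features are emphasised. The sketch below is illustrative only (the abstract does not give the exact gate design); the bottleneck ratio and weight shapes are assumptions.

```python
import numpy as np

def gated_channel_attention(features, w1, w2):
    """Re-weight the channels of a (C, H, W) feature map.

    Squeeze: global average pooling yields one descriptor per channel.
    Excite: a two-layer bottleneck (ReLU then sigmoid) scores each channel.
    Gate: the map is rescaled channel-wise by the resulting (0, 1) gate.
    """
    c, h, w = features.shape
    squeeze = features.reshape(c, -1).mean(axis=1)       # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeeze, 0.0)               # ReLU bottleneck, (C/r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))          # sigmoid gate in (0, 1)
    return features * gate[:, None, None]                # channel re-weighting

# Tiny usage example with random weights (bottleneck ratio r = 2, assumed)
rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = gated_channel_attention(feat, w1, w2)
```

Because the gate is strictly between 0 and 1, each channel is attenuated rather than amplified; in training, the gate weights would be learned jointly with the backbone.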
Related papers
- Multiple Information Prompt Learning for Cloth-Changing Person Re-Identification [31.1934675493469]
We propose a novel multiple information prompt learning (MIPL) scheme for cloth-changing person ReID.
The CIS module is designed to decouple clothing information from the original RGB image features.
The Bio-guided attention (BGA) module is proposed to increase the learning intensity of the model for key information.
arXiv Detail & Related papers (2024-11-01T03:08:10Z)
- PartFormer: Awakening Latent Diverse Representation from Vision Transformer for Object Re-Identification [73.64560354556498]
Vision Transformer (ViT) tends to overfit on most distinct regions of training data, limiting its generalizability and attention to holistic object features.
We present PartFormer, an innovative adaptation of ViT designed to overcome the limitations in object Re-ID tasks.
Our framework significantly outperforms the state of the art by 2.4% mAP on the most challenging MSMT17 dataset.
arXiv Detail & Related papers (2024-08-29T16:31:05Z)
- Features Reconstruction Disentanglement Cloth-Changing Person Re-Identification [1.5703073293718952]
Cloth-changing person re-identification (CC-ReID) aims to retrieve specific pedestrians in a cloth-changing scenario.
The main challenge is to disentangle the clothing-related and clothing-unrelated features.
We propose features reconstruction disentanglement ReID (FRD-ReID), which can controllably decouple the clothing-unrelated and clothing-related features.
arXiv Detail & Related papers (2024-07-15T13:08:42Z)
- Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification [74.10897798660314]
Cloth-changing person Re-IDentification aims at recognizing the same person with clothing changes across non-overlapping cameras.
We propose the Content and Salient Semantics Collaboration framework, facilitating cross-parallel semantics interaction and refinement.
Our framework is simple yet effective, and the vital design is the Semantics Mining and Refinement (SMR) module.
arXiv Detail & Related papers (2024-05-26T15:17:28Z)
- Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification [13.709863134725335]
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing.
Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features.
We propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task.
arXiv Detail & Related papers (2024-03-13T05:46:36Z)
- Clothes-Invariant Feature Learning by Causal Intervention for Clothes-Changing Person Re-identification [118.23912884472794]
Clothes-invariant feature extraction is critical to the clothes-changing person re-identification (CC-ReID)
We argue that there exists a strong spurious correlation between clothes and human identity, which prevents the common likelihood-based ReID method P(Y|X) from extracting clothes-irrelevant features.
We propose a new Causal Clothes-Invariant Learning (CCIL) method to achieve clothes-invariant feature learning.
arXiv Detail & Related papers (2023-05-10T13:48:24Z)
- On Exploring Pose Estimation as an Auxiliary Learning Task for Visible-Infrared Person Re-identification [66.58450185833479]
In this paper, we exploit Pose Estimation as an auxiliary learning task to assist the VI-ReID task in an end-to-end framework.
By jointly training these two tasks in a mutually beneficial manner, our model learns higher quality modality-shared and ID-related features.
Experimental results on two benchmark VI-ReID datasets show that the proposed method consistently improves state-of-the-art methods by significant margins.
arXiv Detail & Related papers (2022-01-11T09:44:00Z)
- Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties.
Our DCML method consistently outperforms state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z)
- Semantic-guided Pixel Sampling for Cloth-Changing Person Re-identification [80.70419698308064]
This paper proposes a semantic-guided pixel sampling approach for the cloth-changing person re-ID task.
We first recognize the pedestrian's upper clothes and pants, then randomly change them by sampling pixels from other pedestrians.
Our method achieved 65.8% Rank-1 accuracy, outperforming previous methods by a large margin.
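The sampling idea summarised above can be sketched as follows: given a human-parsing mask, the upper-clothes and pants regions of an image are overwritten with pixels randomly sampled from the same regions of another pedestrian, so the network cannot rely on clothing appearance. The parsing label values and the uniform sampling scheme here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

UPPER, PANTS = 1, 2  # hypothetical parsing labels for clothing regions

def swap_clothing_pixels(img, mask, donor_img, donor_mask, rng):
    """Replace a pedestrian's clothing pixels with pixels sampled from a donor.

    img, donor_img: (H, W, 3) images; mask, donor_mask: (H, W) parsing maps.
    For each clothing label, every target pixel is overwritten with a pixel
    drawn uniformly at random from the donor's matching region.
    """
    out = img.copy()
    for label in (UPPER, PANTS):
        target = mask == label                       # pixels to overwrite
        donor_pixels = donor_img[donor_mask == label]  # (N, 3) candidate pixels
        if target.any() and len(donor_pixels) > 0:
            idx = rng.integers(0, len(donor_pixels), size=target.sum())
            out[target] = donor_pixels[idx]
    return out

# Usage example on toy 6x4 images with matching parsing maps
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(6, 4, 3))
donor_img = rng.integers(0, 256, size=(6, 4, 3))
mask = np.zeros((6, 4), dtype=int)
mask[1:3] = UPPER
mask[3:5] = PANTS
donor_mask = mask.copy()
aug = swap_clothing_pixels(img, mask, donor_img, donor_mask, rng)
```

Non-clothing pixels (background, head, skin) are left untouched, which is what preserves the identity cues the model is meant to learn from.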
arXiv Detail & Related papers (2021-07-24T03:41:00Z)
- Person Re-identification based on Robust Features in Open-world [0.0]
We propose a low-cost, high-efficiency method to address shortcomings of existing re-ID research.
Our approach is based on a pose estimation model, improved with group convolution, to obtain continuous pedestrian key points.
Our method achieves Rank-1: 60.9%, Rank-5: 78.1%, and mAP: 49.2% on this dataset, exceeding most existing state-of-the-art re-ID models.
arXiv Detail & Related papers (2021-02-22T06:49:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.