Features Reconstruction Disentanglement Cloth-Changing Person Re-Identification
- URL: http://arxiv.org/abs/2407.10694v1
- Date: Mon, 15 Jul 2024 13:08:42 GMT
- Title: Features Reconstruction Disentanglement Cloth-Changing Person Re-Identification
- Authors: Zhihao Chen, Yiyuan Ge, Qing Yue,
- Abstract summary: Cloth-changing person re-identification (CC-ReID) aims to retrieve specific pedestrians in a cloth-changing scenario.
The main challenge is to disentangle clothing-related and clothing-unrelated features.
We propose features reconstruction disentanglement ReID (FRD-ReID), which can controllably decouple the clothing-unrelated and clothing-related features.
- Score: 1.5703073293718952
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Cloth-changing person re-identification (CC-ReID) aims to retrieve specific pedestrians in a cloth-changing scenario. Its main challenge is to disentangle the clothing-related and clothing-unrelated features. Most existing approaches force the model to learn clothing-unrelated features by changing the color of the clothes. However, due to the lack of ground truth, these methods inevitably introduce noise, which destroys the discriminative features and leads to an uncontrollable disentanglement process. In this paper, we propose a new person re-identification network called features reconstruction disentanglement ReID (FRD-ReID), which can controllably decouple the clothing-unrelated and clothing-related features. Specifically, we first introduce the human parsing mask as the ground truth of the reconstruction process. At the same time, we propose the far away attention (FAA) mechanism and the person contour attention (PCA) mechanism for clothing-unrelated features and pedestrian contour features, respectively, to improve feature reconstruction efficiency. In the testing phase, we directly discard the clothing-related features for inference, which leads to a controllable disentanglement process. We conducted extensive experiments on the PRCC, LTCC, and VC-Clothes datasets and demonstrated that our method outperforms existing state-of-the-art methods.
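To make the controllable-disentanglement idea concrete, below is a minimal PyTorch-style sketch under assumed names and shapes: a backbone feature map is split into a clothing-unrelated branch and a clothing-related branch, both branches feed a decoder supervised by the human parsing mask during training, and only the clothing-unrelated embedding is kept for retrieval at test time. The FAA and PCA mechanisms are omitted, and every module here is illustrative rather than the authors' implementation.

```python
# Illustrative sketch only; module names and dimensions are assumptions,
# not the FRD-ReID reference implementation.
import torch
import torch.nn as nn

class DisentangledReID(nn.Module):
    def __init__(self, feat_dim=2048, num_ids=751, num_parsing_classes=7):
        super().__init__()
        # Placeholder backbone; the paper would use a much stronger feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Parallel projections: clothing-unrelated vs. clothing-related features.
        self.unrelated_head = nn.Conv2d(feat_dim, feat_dim // 2, 1)
        self.related_head = nn.Conv2d(feat_dim, feat_dim // 2, 1)
        # Decoder whose target is the human parsing mask (the "ground truth of
        # the reconstruction process" mentioned in the abstract).
        self.parsing_decoder = nn.Conv2d(feat_dim, num_parsing_classes, 1)
        self.classifier = nn.Linear(feat_dim // 2, num_ids)

    def forward(self, x):
        f = self.backbone(x)
        f_unrel = self.unrelated_head(f)   # identity-bearing features
        f_rel = self.related_head(f)       # clothing features
        parsing_logits = self.parsing_decoder(torch.cat([f_unrel, f_rel], dim=1))
        emb = f_unrel.mean(dim=(2, 3))     # global-average-pooled ID embedding
        return emb, self.classifier(emb), parsing_logits

model = DisentangledReID()
images = torch.randn(4, 3, 256, 128)       # a common ReID input resolution
emb, id_logits, parsing_logits = model(images)
# Training would combine an ID loss on `id_logits` with a reconstruction loss
# between `parsing_logits` and the parsing mask; at test time only `emb`
# (the clothing-unrelated branch) is used, and the rest is discarded.
print(emb.shape, id_logits.shape, parsing_logits.shape)
```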
Related papers
- Discriminative Pedestrian Features and Gated Channel Attention for Clothes-Changing Person Re-Identification [8.289726210177532]
Clothes-Changing Person Re-Identification (CC-ReID) has become increasingly significant.
This paper proposes an innovative method for disentangled feature extraction, effectively extracting discriminative features from pedestrian images.
Experiments conducted on two standard CC-ReID datasets validate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-10-29T02:12:46Z)
- Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification [74.10897798660314]
Cloth-changing person Re-IDentification aims at recognizing the same person with clothing changes across non-overlapping cameras.
We propose the Content and Salient Semantics Collaboration framework, facilitating cross-parallel semantics interaction and refinement.
Our framework is simple yet effective, and the vital design is the Semantics Mining and Refinement (SMR) module.
arXiv Detail & Related papers (2024-05-26T15:17:28Z)
- Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification [13.709863134725335]
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing.
Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features.
We propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task.
arXiv Detail & Related papers (2024-03-13T05:46:36Z)
- Semantic-aware Consistency Network for Cloth-changing Person Re-Identification [8.885551377703944]
We present a Semantic-aware Consistency Network (SCNet) to learn identity-related semantic features.
We generate the black-clothing image by erasing pixels in the clothing area.
We further design a semantic consistency loss to facilitate the learning of high-level identity-related semantic features.
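As a rough illustration of the black-clothing idea, here is a minimal sketch assuming a per-pixel human-parsing mask in which certain label IDs mark clothing; the label set, the dummy encoder, and the cosine-based consistency term are placeholders, not SCNet's actual components.

```python
# Minimal sketch of a black-clothing augmentation and a consistency-style loss.
import torch
import torch.nn.functional as F

CLOTHING_LABELS = {2, 3, 4}  # hypothetical parsing-label IDs for clothing regions

def erase_clothing(image, parsing_mask):
    """Set clothing pixels to zero (black) using the parsing mask.

    image:        (B, 3, H, W) float tensor
    parsing_mask: (B, H, W) integer tensor of per-pixel semantic labels
    """
    clothing = torch.zeros_like(parsing_mask, dtype=torch.bool)
    for label in CLOTHING_LABELS:
        clothing |= parsing_mask == label
    return image * (~clothing).unsqueeze(1).float()

def semantic_consistency_loss(feat_original, feat_black):
    """Pull features of the original and black-clothing views together
    (a simple cosine-distance stand-in for a consistency objective)."""
    return 1.0 - F.cosine_similarity(feat_original, feat_black, dim=1).mean()

# Toy usage with random data and a dummy encoder.
images = torch.randn(2, 3, 256, 128)
masks = torch.randint(0, 8, (2, 256, 128))
black = erase_clothing(images, masks)
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 256 * 128, 128))
loss = semantic_consistency_loss(encoder(images), encoder(black))
print(black.shape, loss.item())
```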
arXiv Detail & Related papers (2023-08-27T14:07:57Z)
- Clothes-Invariant Feature Learning by Causal Intervention for Clothes-Changing Person Re-identification [118.23912884472794]
Clothes-invariant feature extraction is critical to clothes-changing person re-identification (CC-ReID).
We argue that there exists a strong spurious correlation between clothes and human identity, which prevents the common likelihood-based ReID objective P(Y|X) from extracting clothes-irrelevant features.
We propose a new Causal Clothes-Invariant Learning (CCIL) method to achieve clothes-invariant feature learning.
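For context, causal-intervention approaches of this kind typically replace the observational likelihood P(Y|X) with an interventional quantity; the standard backdoor adjustment over a clothing variable C is sketched below (a generic form, not necessarily the exact formulation used by CCIL).

```latex
% Generic backdoor adjustment over a clothing confounder C (illustrative form).
P\bigl(Y \mid \mathrm{do}(X)\bigr) \;=\; \sum_{c} P\bigl(Y \mid X,\, C = c\bigr)\, P\bigl(C = c\bigr)
```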
arXiv Detail & Related papers (2023-05-10T13:48:24Z)
- DIG: Draping Implicit Garment over the Human Body [56.68349332089129]
We propose an end-to-end differentiable pipeline that represents garments using implicit surfaces and learns a skinning field conditioned on shape and pose parameters of an articulated body model.
We show that our method, thanks to its end-to-end differentiability, allows body and garment parameters to be recovered jointly from image observations.
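As a loose sketch of what a pose- and shape-conditioned skinning field can look like, the snippet below maps a 3D query point plus SMPL-style shape/pose vectors to per-joint skinning weights; the architecture and dimensions are assumptions for illustration, not DIG's actual network.

```python
# Illustrative skinning-field sketch; dimensions follow SMPL conventions
# (10 shape, 72 pose, 24 joints) but are assumptions, not DIG's implementation.
import torch
import torch.nn as nn

class SkinningField(nn.Module):
    def __init__(self, num_joints=24, shape_dim=10, pose_dim=72):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, num_joints),
        )

    def forward(self, points, beta, theta):
        # points: (N, 3) query locations; beta: (shape_dim,); theta: (pose_dim,)
        cond = torch.cat([beta, theta]).expand(points.shape[0], -1)
        weights = self.mlp(torch.cat([points, cond], dim=1))
        return weights.softmax(dim=1)   # per-joint skinning weights, summing to 1

field = SkinningField()
w = field(torch.rand(100, 3), torch.zeros(10), torch.zeros(72))
print(w.shape)  # torch.Size([100, 24])
```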
arXiv Detail & Related papers (2022-09-22T08:13:59Z)
- A Semantic-aware Attention and Visual Shielding Network for Cloth-changing Person Re-identification [29.026249268566303]
Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes have changed.
Since a person's appearance varies greatly across different clothes, it is very difficult for existing approaches to extract discriminative and robust feature representations.
This work proposes a novel semantic-aware attention and visual shielding network for cloth-changing person ReID.
arXiv Detail & Related papers (2022-07-18T05:38:37Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is therefore critical to learn an apparel-invariant person representation for cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
- Person Re-identification by Contour Sketch under Moderate Clothing Change [95.83034113646657]
Person re-id, the process of matching pedestrian images across different camera views, is an important task in visual surveillance.
In this work, we refer to person re-id under clothing change as "cross-clothes person re-id".
Due to the lack of a large-scale dataset for cross-clothes person re-id, we contribute a new dataset that consists of 33698 images from 221 identities.
arXiv Detail & Related papers (2020-02-06T15:13:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.