A Semantic-aware Attention and Visual Shielding Network for
Cloth-changing Person Re-identification
- URL: http://arxiv.org/abs/2207.08387v2
- Date: Fri, 17 Nov 2023 08:50:15 GMT
- Title: A Semantic-aware Attention and Visual Shielding Network for
Cloth-changing Person Re-identification
- Authors: Zan Gao, Hongwei Wei, Weili Guan, Jie Nie, Meng Wang, Shenyong Chen
- Abstract summary: Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians who have changed their clothes.
Since a person's appearance varies greatly across different clothes, it is very difficult for existing approaches to extract discriminative and robust feature representations.
This work proposes a novel semantic-aware attention and visual shielding network for cloth-changing person ReID.
- Score: 29.026249268566303
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cloth-changing person re-identification (ReID) is a newly emerging research
topic that aims to retrieve pedestrians who have changed their clothes. Since a
person's appearance varies greatly across different clothes, it is very
difficult for existing approaches to extract discriminative and robust feature
representations. Current works mainly focus on body shape or contour sketches,
but the human semantic information and the potential consistency of pedestrian
features before and after changing clothes are not fully explored or are
ignored. To solve these issues, this work proposes a novel semantic-aware attention
and visual shielding network for cloth-changing person ReID (abbreviated as
SAVS); the key idea is to shield clues related to the appearance of clothes and
focus only on visual semantic information that is not sensitive to view/posture
changes. Specifically, a visual semantic encoder is
first employed to locate the human body and clothing regions based on human
semantic segmentation information. Then, a human semantic attention module
(HSA) is proposed to highlight the human semantic information and reweight the
visual feature map. In addition, a visual clothes shielding module (VCS) is
also designed to extract a more robust feature representation for the
cloth-changing task by covering the clothing regions and focusing the model on
the visual semantic information unrelated to the clothes. Most importantly,
these two modules are jointly explored in an end-to-end unified framework.
Extensive experiments demonstrate that the proposed method can significantly
outperform state-of-the-art methods, and more robust features can be extracted
for cloth-changing persons. Compared with FSAM (published in CVPR 2021), this
method can achieve improvements of 32.7% (16.5%) and 14.9% (-) on the LTCC and
PRCC datasets in terms of mAP (rank-1), respectively.
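To make the pipeline described in the abstract concrete, the sketch below gives one plausible PyTorch-style reading of the HSA and VCS modules. It is not the authors' implementation: the tensor shapes, the assumption that a human-parsing model supplies separate body and clothing masks, and the module internals (a sigmoid attention map for HSA, hard masking of clothing features for VCS) are illustrative choices.

```python
# Minimal sketch of the HSA / VCS ideas from the abstract (not the authors' code).
# Assumptions: a backbone yields a feature map of shape (B, C, H, W), and a
# human-parsing model provides body and clothing masks resized to (B, 1, H, W).
import torch
import torch.nn as nn


class HumanSemanticAttention(nn.Module):
    """HSA (assumed form): turn the body mask into a channel-wise attention map
    and reweight the visual feature map so human regions are highlighted."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Conv2d(1, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, feat: torch.Tensor, body_mask: torch.Tensor) -> torch.Tensor:
        attn = self.proj(body_mask)         # (B, C, H, W) weights in [0, 1]
        return feat * attn + feat           # reweight while keeping a residual path


class VisualClothesShielding(nn.Module):
    """VCS (assumed form): shield clothing regions by zeroing features inside the
    clothing mask, pushing the model toward clothes-irrelevant cues."""

    def forward(self, feat: torch.Tensor, clothes_mask: torch.Tensor) -> torch.Tensor:
        return feat * (1.0 - clothes_mask)  # mask broadcasts over channels


class SAVSHead(nn.Module):
    """The two modules used jointly, trained end to end with an ID classifier."""

    def __init__(self, channels: int, num_ids: int):
        super().__init__()
        self.hsa = HumanSemanticAttention(channels)
        self.vcs = VisualClothesShielding()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(channels, num_ids)

    def forward(self, feat, body_mask, clothes_mask):
        feat = self.hsa(feat, body_mask)
        feat = self.vcs(feat, clothes_mask)
        emb = self.pool(feat).flatten(1)    # (B, C) re-ID embedding
        return emb, self.classifier(emb)


# Usage with random tensors standing in for backbone features and parsing masks:
feat = torch.randn(2, 256, 24, 12)
body_mask = torch.rand(2, 1, 24, 12)
clothes_mask = (torch.rand(2, 1, 24, 12) > 0.5).float()
embedding, logits = SAVSHead(channels=256, num_ids=100)(feat, body_mask, clothes_mask)
```

The intended division of labor in this sketch is that HSA softly emphasizes human regions while VCS hard-masks clothing ones, so the pooled embedding that feeds the ID classifier is dominated by clothes-irrelevant information.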
Related papers
- CLIP-Driven Cloth-Agnostic Feature Learning for Cloth-Changing Person Re-Identification [47.948622774810296]
We propose a novel framework called CLIP-Driven Cloth-Agnostic Feature Learning (CCAF) for Cloth-Changing Person Re-Identification (CC-ReID).
Two custom-designed modules are introduced: Invariant Feature Prompting (IFP) and Clothes Feature Minimization (CFM).
Experiments have demonstrated the effectiveness of the proposed CCAF, achieving new state-of-the-art performance on several popular CC-ReID benchmarks without any additional inference time.
arXiv Detail & Related papers (2024-06-13T14:56:07Z)
- Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification [74.10897798660314]
Cloth-changing person re-identification aims at recognizing the same person with clothing changes across non-overlapping cameras.
We propose the Content and Salient Semantics Collaboration framework, facilitating cross-parallel semantics interaction and refinement.
Our framework is simple yet effective, and the vital design is the Semantics Mining and Refinement (SMR) module.
arXiv Detail & Related papers (2024-05-26T15:17:28Z)
- Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification [13.709863134725335]
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing.
Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features.
We propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task.
arXiv Detail & Related papers (2024-03-13T05:46:36Z)
- Semantic-aware Consistency Network for Cloth-changing Person Re-Identification [8.885551377703944]
We present a Semantic-aware Consistency Network (SCNet) to learn identity-related semantic features.
We generate the black-clothing image by erasing pixels in the clothing area.
We further design a semantic consistency loss to facilitate the learning of high-level identity-related semantic features.
arXiv Detail & Related papers (2023-08-27T14:07:57Z)
- Identity-Guided Collaborative Learning for Cloth-Changing Person Reidentification [29.200286257496714]
We propose a novel identity-guided collaborative learning scheme (IGCL) for cloth-changing person ReID.
First, we design a novel clothing attention stream to reasonably reduce the interference caused by clothing information.
Second, we propose a human semantic attention and body jigsaw stream to highlight the human semantic information and simulate different poses of the same identity.
Third, a pedestrian identity enhancement stream is further proposed to strengthen the importance of identity cues and extract more favorable identity-robust features.
arXiv Detail & Related papers (2023-04-10T06:05:54Z)
- Body Part-Based Representation Learning for Occluded Person Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model for solving the above issues.
arXiv Detail & Related papers (2022-11-07T16:48:41Z)
- Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification [38.7806002518266]
This work proposes a novel visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID.
To fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed, and then a cloth desensitization network (CDN) is designed.
A partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
arXiv Detail & Related papers (2021-08-10T09:14:44Z)
- Semantic-guided Pixel Sampling for Cloth-Changing Person Re-identification [80.70419698308064]
This paper proposes a semantic-guided pixel sampling approach for the cloth-changing person re-ID task.
We first recognize the pedestrian's upper clothes and pants, then randomly change them by sampling pixels from other pedestrians (a hedged sketch of this idea appears after this list).
Our method achieved 65.8% Rank-1 accuracy, outperforming previous methods by a large margin.
arXiv Detail & Related papers (2021-07-24T03:41:00Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation under cases like cloth changing or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long-duration, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
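As a rough illustration of the semantic-guided pixel sampling entry above (recognize a pedestrian's upper clothes and pants, then overwrite them with pixels drawn from another pedestrian), the sketch below shows one possible form of the augmentation. The parsing label ids, tensor shapes, and sampling-with-replacement strategy are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of semantic-guided pixel sampling as a clothing-swap augmentation
# (one plausible reading of the summary above, not the paper's implementation).
# Assumptions: images are (3, H, W) float tensors and a human-parsing model gives
# per-pixel integer labels; the ids below are hypothetical markers for upper
# clothes and pants.
import torch

UPPER, PANTS = 1, 2  # hypothetical parsing label ids


def swap_clothing_pixels(img_a: torch.Tensor, parse_a: torch.Tensor,
                         img_b: torch.Tensor, parse_b: torch.Tensor) -> torch.Tensor:
    """Return a copy of img_a whose clothing pixels are resampled from img_b,
    so identity cues (face, body shape) stay while clothing appearance changes."""
    out = img_a.clone()
    for label in (UPPER, PANTS):
        dst = parse_a == label                 # (H, W) bool: where to write in img_a
        src = parse_b == label                 # (H, W) bool: where to sample in img_b
        n_dst, n_src = int(dst.sum()), int(src.sum())
        if n_dst == 0 or n_src == 0:
            continue                           # nothing to swap for this body part
        candidates = img_b[:, src]             # (3, n_src) clothing pixels of img_b
        idx = torch.randint(n_src, (n_dst,))   # sample with replacement
        out[:, dst] = candidates[:, idx]
    return out


# Usage with random stand-ins for two pedestrians and their parsing maps:
img_a, img_b = torch.rand(3, 64, 32), torch.rand(3, 64, 32)
parse_a = torch.randint(0, 3, (64, 32))
parse_b = torch.randint(0, 3, (64, 32))
augmented = swap_clothing_pixels(img_a, parse_a, img_b, parse_b)
```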