Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification
- URL: http://arxiv.org/abs/2405.16597v1
- Date: Sun, 26 May 2024 15:17:28 GMT
- Title: Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification
- Authors: Qizao Wang, Xuelin Qian, Bin Li, Lifeng Chen, Yanwei Fu, Xiangyang Xue
- Abstract summary: Cloth-changing person Re-IDentification aims at recognizing the same person with clothing changes across non-overlapping cameras.
We propose the Content and Salient Semantics Collaboration framework, facilitating cross-parallel semantics interaction and refinement.
Our framework is simple yet effective, and the vital design is the Semantics Mining and Refinement (SMR) module.
- Score: 74.10897798660314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloth-changing person Re-IDentification (Re-ID) aims at recognizing the same person with clothing changes across non-overlapping cameras. Conventional person Re-ID methods usually bias the model's focus toward cloth-related appearance features rather than identity-sensitive features associated with biological traits. Recently, advanced cloth-changing person Re-ID methods either resort to identity-related auxiliary modalities (e.g., sketches, silhouettes, keypoints and 3D shapes) or clothing labels to mitigate the impact of clothes. However, relying on impractical and inflexible auxiliary modalities or annotations limits their real-world applicability. In this paper, we promote cloth-changing person Re-ID by effectively leveraging abundant semantics present within pedestrian images without the need for any auxiliaries. Specifically, we propose the Content and Salient Semantics Collaboration (CSSC) framework, facilitating cross-parallel semantics interaction and refinement. Our framework is simple yet effective, and the vital design is the Semantics Mining and Refinement (SMR) module. It extracts robust identity features about content and salient semantics, while effectively mitigating interference from clothing appearances. By capitalizing on the mined abundant semantic features, our proposed approach achieves state-of-the-art performance on three cloth-changing benchmarks as well as conventional benchmarks, demonstrating its superiority over advanced competitors.
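The abstract describes two parallel semantic branches (content and salient) whose features interact and are refined into one identity representation. The paper does not give implementation details here, so the following is only a minimal toy sketch of that idea under assumed shapes: a content branch via global average pooling, a salient branch via spatial attention pooling, and a learned projection standing in for the SMR-style refinement. All function and variable names are hypothetical, not the authors' code.

```python
import numpy as np

def semantics_collaboration(feat_map, w_refine):
    """Toy sketch: fuse content and salient semantics into one identity feature.

    feat_map : (C, H, W) feature map from a backbone (assumed shape).
    w_refine : (2*C, C) projection standing in for the refinement step.
    """
    C, H, W = feat_map.shape
    flat = feat_map.reshape(C, H * W)

    # Content branch: global average pooling keeps holistic cues.
    content = flat.mean(axis=1)                           # (C,)

    # Salient branch: spatial attention from activation energy,
    # emphasizing discriminative (ideally cloth-irrelevant) regions.
    energy = (flat ** 2).sum(axis=0)                      # (H*W,)
    attn = np.exp(energy - energy.max())
    attn /= attn.sum()
    salient = flat @ attn                                 # (C,)

    # Cross-branch interaction: concatenate and project back to C dims.
    fused = np.concatenate([content, salient]) @ w_refine # (C,)
    return fused

# Usage with random inputs (in practice the projection would be learned).
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((16, 8)) * 0.1
identity_feat = semantics_collaboration(fmap, w)
print(identity_feat.shape)
```

The point of the sketch is only the two-branch structure with a shared refinement step; the actual CSSC/SMR design in the paper is richer than this.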
Related papers
- CLIP-Driven Cloth-Agnostic Feature Learning for Cloth-Changing Person Re-Identification [47.948622774810296]
We propose a novel framework called CLIP-Driven Cloth-Agnostic Feature Learning (CCAF) for Cloth-Changing Person Re-Identification (CC-ReID)
Two modules were custom-designed: the Invariant Feature Prompting (IFP) and the Clothes Feature Minimization (CFM)
Experiments have demonstrated the effectiveness of the proposed CCAF, achieving new state-of-the-art performance on several popular CC-ReID benchmarks without any additional inference time.
arXiv Detail & Related papers (2024-06-13T14:56:07Z) - Clothes-Changing Person Re-Identification with Feasibility-Aware Intermediary Matching [86.04494755636613]
Current clothes-changing person re-identification (re-id) approaches usually perform retrieval based on clothes-irrelevant features.
We propose a Feasibility-Aware Intermediary Matching (FAIM) framework to additionally utilize clothes-relevant features for retrieval.
Our method outperforms state-of-the-art methods on several widely-used clothes-changing re-id benchmarks.
arXiv Detail & Related papers (2024-04-15T06:58:09Z) - Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification [13.709863134725335]
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing.
Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features.
We propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task.
arXiv Detail & Related papers (2024-03-13T05:46:36Z) - When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z) - Semantic-aware Consistency Network for Cloth-changing Person Re-Identification [8.885551377703944]
We present a Semantic-aware Consistency Network (SCNet) to learn identity-related semantic features.
We generate the black-clothing image by erasing pixels in the clothing area.
We further design a semantic consistency loss to facilitate the learning of high-level identity-related semantic features.
arXiv Detail & Related papers (2023-08-27T14:07:57Z) - Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification [78.52704557647438]
We propose a novel FIne-grained Representation and Recomposition (FIRe$^2$) framework to tackle both limitations without any auxiliary annotation or data.
Experiments demonstrate that FIRe$^2$ can achieve state-of-the-art performance on five widely-used cloth-changing person Re-ID benchmarks.
arXiv Detail & Related papers (2023-08-21T12:59:48Z) - Identity-Guided Collaborative Learning for Cloth-Changing Person Reidentification [29.200286257496714]
We propose a novel identity-guided collaborative learning scheme (IGCL) for cloth-changing person ReID.
First, we design a novel clothing attention stream to reasonably reduce the interference caused by clothing information.
Second, we propose a human semantic attention and body jigsaw stream to highlight the human semantic information and simulate different poses of the same identity.
Third, a pedestrian identity enhancement stream is further proposed to strengthen the importance of identity cues and extract more robust identity features.
arXiv Detail & Related papers (2023-04-10T06:05:54Z) - FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR)
arXiv Detail & Related papers (2022-10-19T11:31:38Z) - A Semantic-aware Attention and Visual Shielding Network for Cloth-changing Person Re-identification [29.026249268566303]
Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes have changed.
Since the human appearance with different clothes exhibits large variations, it is very difficult for existing approaches to extract discriminative and robust feature representations.
This work proposes a novel semantic-aware attention and visual shielding network for cloth-changing person ReID.
arXiv Detail & Related papers (2022-07-18T05:38:37Z) - Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification [38.7806002518266]
This work proposes a novel visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID.
To fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed, and then a cloth desensitization network (CDN) is designed.
A partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
arXiv Detail & Related papers (2021-08-10T09:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.