Semantic-aware Consistency Network for Cloth-changing Person
Re-Identification
- URL: http://arxiv.org/abs/2308.14113v3
- Date: Fri, 17 Nov 2023 02:37:13 GMT
- Title: Semantic-aware Consistency Network for Cloth-changing Person
Re-Identification
- Authors: Peini Guo, Hong Liu, Jianbing Wu, Guoquan Wang and Tao Wang
- Abstract summary: We present a Semantic-aware Consistency Network (SCNet) to learn identity-related semantic features.
We generate the black-clothing image by erasing pixels in the clothing area.
We further design a semantic consistency loss to facilitate the learning of high-level identity-related semantic features.
- Score: 8.885551377703944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloth-changing Person Re-Identification (CC-ReID) is a challenging task that
aims to retrieve the target person across multiple surveillance cameras when
clothing changes might happen. Despite recent progress in CC-ReID, existing
approaches are still hindered by the interference of clothing variations since
they lack effective constraints to keep the model consistently focused on
clothing-irrelevant regions. To address this issue, we present a Semantic-aware
Consistency Network (SCNet) to learn identity-related semantic features by
proposing effective consistency constraints. Specifically, we generate the
black-clothing image by erasing pixels in the clothing area, which explicitly
mitigates the interference from clothing variations. In addition, to fully
exploit the fine-grained identity information, a head-enhanced attention module
is introduced, which learns soft attention maps by utilizing the proposed
part-based matching loss to highlight head information. We further design a
semantic consistency loss to facilitate the learning of high-level
identity-related semantic features, forcing the model to focus on semantically
consistent cloth-irrelevant regions. By using the consistency constraint, our
model does not require any extra auxiliary segmentation module to generate the
black-clothing image or locate the head region during the inference stage.
Extensive experiments on four cloth-changing person Re-ID datasets (LTCC, PRCC,
VC-Clothes, and DeepChange) demonstrate that our proposed SCNet makes
significant improvements over prior state-of-the-art approaches. Our code is
available at: https://github.com/Gpn-star/SCNet.
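
The abstract describes two training-time ideas that lend themselves to a short illustration: erasing clothing pixels with a human-parsing map to form the black-clothing image, and a consistency term that ties the features of the original and erased images together. The PyTorch sketch below is only an illustration under assumptions, not the released SCNet code: the parsing label set, the shared backbone, and the KL-based form of the consistency loss are placeholders for the example.

```python
import torch
import torch.nn.functional as F

# Hypothetical clothing labels in the human-parsing map; SCNet relies on an
# offline parsing model, but the exact label set here is assumed.
CLOTHING_LABELS = (5, 6, 7, 9, 12)

def erase_clothing(image, parsing):
    """Return the black-clothing counterpart of `image`.

    image:   (B, 3, H, W) float tensor
    parsing: (B, H, W) long tensor of per-pixel part labels
    """
    clothes = torch.zeros_like(parsing, dtype=torch.bool)
    for label in CLOTHING_LABELS:
        clothes |= parsing == label
    # Zero out clothing pixels so clothing appearance cannot be used as a cue.
    return image * (~clothes).unsqueeze(1).float()

def semantic_consistency_loss(feat_orig, feat_black, tau=1.0):
    """Pull features of the original image toward those of the erased image.

    A temperature-scaled KL divergence is one plausible form; the paper's
    exact definition may differ.
    """
    p = F.log_softmax(feat_orig / tau, dim=1)
    q = F.softmax(feat_black.detach() / tau, dim=1)
    return F.kl_div(p, q, reduction="batchmean")

# Training-time usage sketch (backbone, id_loss, and the head-enhanced
# attention module are omitted / assumed):
#   feat_orig  = backbone(images)
#   feat_black = backbone(erase_clothing(images, parsing))
#   loss = id_loss(feat_orig, labels) + semantic_consistency_loss(feat_orig, feat_black)
```

Because the consistency constraint is applied only during training, the parsing map and the black-clothing branch can be dropped at inference, which matches the abstract's claim that no auxiliary segmentation module is needed at test time.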
Related papers
- Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification [74.10897798660314]
Cloth-changing person Re-IDentification aims at recognizing the same person with clothing changes across non-overlapping cameras.
We propose the Content and Salient Semantics Collaboration framework, facilitating cross-parallel semantics interaction and refinement.
Our framework is simple yet effective, and the vital design is the Semantics Mining and Refinement (SMR) module.
arXiv Detail & Related papers (2024-05-26T15:17:28Z)
- Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification [13.709863134725335]
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing.
Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features.
We propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task.
arXiv Detail & Related papers (2024-03-13T05:46:36Z)
- CCPA: Long-term Person Re-Identification via Contrastive Clothing and Pose Augmentation [2.1756081703276]
Long-term Person Re-Identification aims at matching an individual across cameras after a long period of time.
We propose CCPA, a Contrastive Clothing and Pose Augmentation framework for long-term Re-ID (LRe-ID).
arXiv Detail & Related papers (2024-02-22T11:16:34Z)
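
The CCPA entry above names a contrastive objective over clothing- and pose-augmented samples but gives no formulation. As a rough, hypothetical illustration of how such an objective could be written, here is an NT-Xent-style loss over two augmented views of the same identities; the pairing scheme and loss form are assumptions, not CCPA's actual method.

```python
import torch
import torch.nn.functional as F

def nt_xent(z_a, z_b, temperature=0.1):
    """NT-Xent contrastive loss over two augmented views.

    z_a, z_b: (B, D) embeddings of the same identities under different
    (e.g. clothing- or pose-altered) augmentations; row i of z_a and
    row i of z_b are treated as a positive pair.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetrized cross-entropy: each view must pick out its own counterpart.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```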
- HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
However, existing facial de-identification methods still exhibit several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to deal with these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z)
- Clothes-Invariant Feature Learning by Causal Intervention for Clothes-Changing Person Re-identification [118.23912884472794]
Clothes-invariant feature extraction is critical to clothes-changing person re-identification (CC-ReID).
We argue that there exists a strong spurious correlation between clothes and human identity, which prevents the common likelihood-based ReID formulation P(Y|X) from extracting clothes-irrelevant features.
We propose a new Causal Clothes-Invariant Learning (CCIL) method to achieve clothes-invariant feature learning.
arXiv Detail & Related papers (2023-05-10T13:48:24Z)
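
The causal-intervention idea behind CCIL, replacing the likelihood P(Y|X) with the intervened distribution P(Y|do(X)) so that identity prediction is no longer confounded by clothes, is commonly realized with backdoor adjustment, P(Y|do(X)) = Σ_c P(Y|X, c) P(c). The toy sketch below illustrates that adjustment only; the clothes prototypes, prior, and conditioning head are assumptions for the example, not CCIL's actual architecture.

```python
import torch
import torch.nn.functional as F

def backdoor_adjusted_probs(img_feat, clothes_protos, clothes_prior, id_head):
    """Toy backdoor adjustment: P(Y|do(X)) = sum_c P(Y|X, c) * P(c).

    img_feat:       (B, D) image features
    clothes_protos: (C, D) one representative feature per clothes stratum
    clothes_prior:  (C,)   empirical prior P(c), summing to 1
    id_head:        module mapping a (B, 2D) concatenation to identity logits
    """
    B = img_feat.size(0)
    adjusted = 0.0
    for c in range(clothes_protos.size(0)):
        proto = clothes_protos[c].expand(B, -1)                # (B, D)
        logits = id_head(torch.cat([img_feat, proto], dim=1))  # models P(Y|X, c)
        adjusted = adjusted + clothes_prior[c] * F.softmax(logits, dim=1)
    return adjusted                                            # (B, num_identities)
```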
- A Semantic-aware Attention and Visual Shielding Network for Cloth-changing Person Re-identification [29.026249268566303]
Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes have changed.
Since a person's appearance varies greatly across different clothes, it is very difficult for existing approaches to extract discriminative and robust feature representations.
This work proposes a novel semantic-aware attention and visual shielding network for cloth-changing person ReID.
arXiv Detail & Related papers (2022-07-18T05:38:37Z)
- Grasp-Oriented Fine-grained Cloth Segmentation without Real Supervision [66.56535902642085]
This paper tackles the problem of fine-grained region detection in deformed clothes using only a depth image.
We define up to 6 semantic regions of varying extent, including edges on the neckline, sleeve cuffs, and hem, plus top and bottom grasping points.
We introduce a U-net based network to segment and label these parts.
We show that training our network solely with synthetic data and the proposed DA yields results competitive with models trained on real data.
arXiv Detail & Related papers (2021-10-06T16:31:20Z)
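
The grasp-oriented segmentation entry above mentions a U-Net-based network that labels fine-grained cloth regions from a single depth image. A minimal encoder-decoder of that family, assuming one depth channel in and the six part labels plus a background class out, might look like the sketch below; the channel widths and network depth are illustrative guesses rather than the authors' architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with BatchNorm and ReLU, the basic U-Net unit."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Minimal U-Net: depth image in, per-pixel part logits out."""
    def __init__(self, n_classes=7):           # 6 cloth regions + background (assumed)
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)         # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, depth):                   # depth: (B, 1, H, W), H and W divisible by 4
        e1 = self.enc1(depth)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                    # (B, n_classes, H, W) logits

# logits = TinyUNet()(torch.randn(2, 1, 64, 64))  # trained with per-pixel cross-entropy
```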
- Cloth-Changing Person Re-identification from A Single Image with Gait Prediction and Regularization [65.50321170655225]
We introduce gait recognition as an auxiliary task to drive the image ReID model to learn cloth-agnostic representations.
Experiments on image-based cloth-changing ReID benchmarks, e.g., LTCC, PRCC, Real28, and VC-Clothes, demonstrate that GI-ReID performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2021-03-29T12:10:50Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)