Identity-Sensitive Knowledge Propagation for Cloth-Changing Person
Re-identification
- URL: http://arxiv.org/abs/2208.12023v1
- Date: Thu, 25 Aug 2022 12:01:49 GMT
- Title: Identity-Sensitive Knowledge Propagation for Cloth-Changing Person
Re-identification
- Authors: Jianbing Wu, Hong Liu, Wei Shi, Hao Tang, Jingwen Guo
- Abstract summary: Cloth-changing person re-identification (CC-ReID) aims to match person identities under clothing changes.
Typical biometrics-based CC-ReID methods require cumbersome pose or body part estimators to learn cloth-irrelevant features from human biometric traits.
We propose an effective Identity-Sensitive Knowledge Propagation framework (DeSKPro) for CC-ReID.
Our framework outperforms state-of-the-art methods by a large margin.
- Score: 17.588668735411783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloth-changing person re-identification (CC-ReID), which aims to match person
identities under clothing changes, has emerged as a research topic in recent
years. However, typical biometrics-based CC-ReID methods often require
cumbersome pose or body part estimators to learn cloth-irrelevant features from
human biometric traits, which comes with high computational costs. Besides, the
performance is significantly limited due to the resolution degradation of
surveillance images. To address the above limitations, we propose an effective
Identity-Sensitive Knowledge Propagation framework (DeSKPro) for CC-ReID.
Specifically, a Cloth-irrelevant Spatial Attention module is introduced to
eliminate the distraction of clothing appearance by acquiring knowledge from
the human parsing module. To mitigate the resolution degradation issue and mine
identity-sensitive cues from human faces, we propose to restore the missing
facial details using prior facial knowledge, which is then propagated to a
smaller network. After training, the extra computations for human parsing or
face restoration are no longer required. Extensive experiments show that our
framework outperforms state-of-the-art methods by a large margin. Our code is
available at https://github.com/KimbingNg/DeskPro.
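To make the abstract concrete, the sketch below illustrates the two ideas it describes: a parsing-guided spatial attention that suppresses clothing regions, and a propagation (distillation) term through which a smaller student network mimics the teacher's cloth-irrelevant, face-enhanced features so that human parsing and face restoration can be dropped after training. This is a minimal, hypothetical sketch, not the authors' implementation; PyTorch and all module and variable names here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClothIrrelevantAttention(nn.Module):
    """Reweights backbone features with a spatial map derived from a
    human-parsing mask so that clothing regions are down-weighted (sketch)."""

    def __init__(self):
        super().__init__()
        # A 1x1 conv turns the (inverted) clothing mask into a soft attention map.
        self.to_attn = nn.Conv2d(1, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor, cloth_mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone features; cloth_mask: (B, 1, h, w), 1 = clothing.
        mask = F.interpolate(cloth_mask, size=feat.shape[-2:], mode="nearest")
        attn = torch.sigmoid(self.to_attn(1.0 - mask))  # favor body/face regions
        return feat * attn


def propagation_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Knowledge-propagation term: the student matches the teacher's features;
    only the student (no parsing or face restoration) is used at test time."""
    return F.mse_loss(student_feat, teacher_feat.detach())
```

In such a scheme, a teacher branch would combine the parsing-guided attention with restored face features during training and supervise the student through the propagation loss; at inference only the student forward pass remains, which is consistent with the abstract's statement that the extra computations are no longer required.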
Related papers
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN).
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z) - Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification [74.10897798660314]
Cloth-changing person Re-IDentification aims at recognizing the same person with clothing changes across non-overlapping cameras.
We propose the Content and Salient Semantics Collaboration framework, facilitating cross-parallel semantics interaction and refinement.
Our framework is simple yet effective, and its key component is the Semantics Mining and Refinement (SMR) module.
arXiv Detail & Related papers (2024-05-26T15:17:28Z) - Identity-aware Dual-constraint Network for Cloth-Changing Person Re-identification [13.709863134725335]
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify the target person in more realistic surveillance scenarios, where pedestrians usually change their clothing.
Despite great progress, limited cloth-changing training samples in existing CC-ReID datasets still prevent the model from adequately learning cloth-irrelevant features.
We propose an Identity-aware Dual-constraint Network (IDNet) for the CC-ReID task.
arXiv Detail & Related papers (2024-03-13T05:46:36Z) - HFORD: High-Fidelity and Occlusion-Robust De-identification for Face
Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
However, existing face de-identification methods suffer from several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to deal with these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z) - Semantic-aware Consistency Network for Cloth-changing Person
Re-Identification [8.885551377703944]
We present a Semantic-aware Consistency Network (SCNet) to learn identity-related semantic features.
We generate the black-clothing image by erasing pixels in the clothing area.
We further design a semantic consistency loss to facilitate the learning of high-level identity-related semantic features (a minimal illustrative sketch of this clothing-erasure idea appears after this list).
arXiv Detail & Related papers (2023-08-27T14:07:57Z) - Clothes-Invariant Feature Learning by Causal Intervention for
Clothes-Changing Person Re-identification [118.23912884472794]
Clothes-invariant feature extraction is critical to the clothes-changing person re-identification (CC-ReID)
We argue that there exists a strong spurious correlation between clothes and human identity, which restricts common likelihood-based ReID methods P(Y|X) from extracting clothes-irrelevant features.
We propose a new Causal Clothes-Invariant Learning (CCIL) method to achieve clothes-invariant feature learning.
arXiv Detail & Related papers (2023-05-10T13:48:24Z) - Identity-Guided Collaborative Learning for Cloth-Changing Person
Reidentification [29.200286257496714]
We propose a novel identity-guided collaborative learning scheme (IGCL) for cloth-changing person ReID.
First, we design a novel clothing attention stream to reasonably reduce the interference caused by clothing information.
Second, we propose a human semantic attention and body jigsaw stream to highlight the human semantic information and simulate different poses of the same identity.
Third, a pedestrian identity enhancement stream is further proposed to emphasize identity information and extract more robust identity-related features.
arXiv Detail & Related papers (2023-04-10T06:05:54Z) - Attribute-preserving Face Dataset Anonymization via Latent Code
Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - Multigranular Visual-Semantic Embedding for Cloth-Changing Person
Re-identification [38.7806002518266]
This work proposes a novel visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID.
To fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed, and then a cloth desensitization network (CDN) is designed.
A partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
arXiv Detail & Related papers (2021-08-10T09:14:44Z) - Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z) - Intra-Camera Supervised Person Re-Identification [87.88852321309433]
We propose a novel person re-identification paradigm based on an idea of independent per-camera identity annotation.
This eliminates the most time-consuming and tedious inter-camera identity labelling process.
We formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method for Intra-Camera Supervised (ICS) person re-id.
arXiv Detail & Related papers (2020-02-12T15:26:33Z)
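As an illustration of the clothing-erasure idea summarized in the SCNet entry above, the sketch below blacks out clothing pixels with a human-parsing mask and ties the identity features of the original and erased images together with a simple consistency term. It is a hypothetical reconstruction under assumptions (the encoder, mask format, and the cosine-based loss are illustrative), not the SCNet implementation.

```python
import torch
import torch.nn.functional as F


def erase_clothing(image: torch.Tensor, cloth_mask: torch.Tensor) -> torch.Tensor:
    """Produce a 'black-clothing' image by zeroing clothing pixels.

    image:      (B, 3, H, W), values in [0, 1]
    cloth_mask: (B, 1, H, W), 1 where a parsing model labels clothing
    """
    return image * (1.0 - cloth_mask)


def semantic_consistency_loss(encoder, image: torch.Tensor, cloth_mask: torch.Tensor) -> torch.Tensor:
    """Encourage identity features of the original and clothing-erased images
    to agree, pushing the encoder toward cloth-irrelevant cues (sketch)."""
    f_orig = encoder(image)                               # (B, D) identity features
    f_erased = encoder(erase_clothing(image, cloth_mask))
    return 1.0 - F.cosine_similarity(f_orig, f_erased, dim=1).mean()
```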