Pose-guided Visible Part Matching for Occluded Person ReID
- URL: http://arxiv.org/abs/2004.00230v1
- Date: Wed, 1 Apr 2020 04:36:51 GMT
- Title: Pose-guided Visible Part Matching for Occluded Person ReID
- Authors: Shang Gao, Jingya Wang, Huchuan Lu, Zimo Liu
- Abstract summary: We propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns the discriminative features with pose-guided attention and self-mines the part visibility.
Experimental results on three occluded-ReID benchmarks show that the proposed method achieves performance competitive with state-of-the-art methods.
- Score: 80.81748252960843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occluded person re-identification is a challenging task as the appearance
varies substantially with various obstacles, especially in the crowd scenario.
To address this issue, we propose a Pose-guided Visible Part Matching (PVPM)
method that jointly learns the discriminative features with pose-guided
attention and self-mines the part visibility in an end-to-end framework.
Specifically, the proposed PVPM includes two key components: 1) pose-guided
attention (PGA) method for part feature pooling that exploits more
discriminative local features; 2) pose-guided visibility predictor (PVP) that
estimates whether a part suffers from occlusion or not. As there are no
ground-truth training annotations for occluded parts, we exploit the part
correspondence in positive pairs and self-mine the correspondence scores via
graph matching. The generated correspondence scores are then used as
pseudo-labels for the visibility predictor (PVP). Experimental results on three
occluded-ReID benchmarks show that the proposed method achieves performance
competitive with state-of-the-art methods. The source code is available at
https://github.com/hh23333/PVPM
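To make the two components concrete, the sketch below shows one way a pose-guided attention (PGA) pooling head and a pose-guided visibility predictor (PVP) could be wired together in PyTorch, together with a visibility-weighted distance for matching. This is a minimal illustrative sketch, not the authors' released implementation (see the GitHub link above for that): the 1x1-conv pose encoder, the linear visibility head, the tensor shapes, and the cosine-distance helper are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PVPMHead(nn.Module):
    """Hypothetical head combining pose-guided attention (PGA) pooling
    with a pose-guided visibility predictor (PVP)."""

    def __init__(self, num_keypoints=17, num_parts=6):
        super().__init__()
        # PGA: map keypoint heatmaps to one spatial attention map per part.
        self.pose_to_attn = nn.Conv2d(num_keypoints, num_parts, kernel_size=1)
        # PVP: predict a visibility score in [0, 1] for each part.
        self.visibility = nn.Sequential(
            nn.Linear(num_keypoints, num_parts),
            nn.Sigmoid(),
        )

    def forward(self, feat_map, pose_heatmaps):
        # feat_map:      (B, C, H, W) backbone feature map
        # pose_heatmaps: (B, K, H, W) heatmaps from an off-the-shelf pose estimator
        attn = self.pose_to_attn(pose_heatmaps)                  # (B, P, H, W)
        attn = F.softmax(attn.flatten(2), dim=-1)                # normalise over H*W
        feats = feat_map.flatten(2)                              # (B, C, H*W)
        part_feats = torch.einsum('bps,bcs->bpc', attn, feats)   # (B, P, C)
        pose_evidence = pose_heatmaps.flatten(2).mean(dim=-1)    # (B, K)
        vis = self.visibility(pose_evidence)                     # (B, P)
        return part_feats, vis


def visibility_weighted_distance(parts_q, vis_q, parts_g, vis_g):
    """Per-part cosine distances weighted by the product of the two
    visibility scores, so occluded parts contribute little to the match."""
    # parts_*: (P, C) part descriptors, vis_*: (P,) visibility scores
    d = 1.0 - F.cosine_similarity(parts_q, parts_g, dim=-1)     # (P,)
    w = vis_q * vis_g
    return (w * d).sum() / w.sum().clamp(min=1e-6)
```

In the paper itself, supervision for the visibility predictor comes from correspondence scores self-mined via graph matching over positive pairs and used as pseudo-labels; the distance helper above only illustrates how per-part visibilities could down-weight occluded parts at retrieval time.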
Related papers
- ProFD: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification [34.38227097059117]
We propose a Prompt-guided Feature Disentangling method (ProFD) to generate well-aligned part features.
ProFD first designs part-specific prompts and utilizes noisy segmentation masks to preliminarily align visual and textual embeddings.
We employ a self-distillation strategy, retaining pre-trained knowledge of CLIP to mitigate over-fitting.
arXiv Detail & Related papers (2024-09-30T08:31:14Z)
- DualFocus: Integrating Plausible Descriptions in Text-based Person Re-identification [6.381155145404096]
We introduce DualFocus, a unified framework that integrates plausible descriptions to enhance the interpretative accuracy of vision-language models in Person Re-identification tasks.
To achieve a balance between coarse and fine-grained alignment of visual and textual embeddings, we propose the Dynamic Tokenwise Similarity (DTS) loss.
In comprehensive experiments on CUHK-PEDES, ICFG-PEDES, and RSTPReid, DualFocus demonstrates superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2024-05-13T04:21:00Z)
- DiP: Learning Discriminative Implicit Parts for Person Re-Identification [18.89539401924382]
We propose to learn Discriminative implicit Parts (DiPs) which are decoupled from explicit body parts.
We also propose a novel implicit position to give a geometric interpretation for each DiP.
The proposed method achieves state-of-the-art performance on multiple person ReID benchmarks.
arXiv Detail & Related papers (2022-12-24T17:59:01Z)
- Body Part-Based Representation Learning for Occluded Person Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model that addresses these challenges.
arXiv Detail & Related papers (2022-11-07T16:48:41Z)
- Quality-aware Part Models for Occluded Person Re-identification [77.24920810798505]
Occlusion poses a major challenge for person re-identification (ReID).
Existing approaches typically rely on outside tools to infer visible body parts, which may be suboptimal in terms of both computational efficiency and ReID accuracy.
We propose a novel method named Quality-aware Part Models (QPM) for occlusion-robust ReID.
arXiv Detail & Related papers (2022-01-01T03:51:09Z)
- Dual Contrastive Learning for General Face Forgery Detection [64.41970626226221]
We propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which constructs positive and negative paired data.
To explore the essential discrepancies, Intra-Instance Contrastive Learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in the forged faces.
arXiv Detail & Related papers (2021-12-27T05:44:40Z)
- Pose-guided Feature Disentangling for Occluded Person Re-identification Based on Transformer [15.839842504357144]
Occluded person re-identification is a challenging task as human body parts could be occluded by some obstacles.
Some existing pose-guided methods solve this problem by aligning body parts according to graph matching.
We propose a transformer-based Pose-guided Feature Disentangling (PFD) method by utilizing pose information to clearly disentangle semantic components.
arXiv Detail & Related papers (2021-12-05T03:23:31Z)
- Pose-Guided Feature Learning with Knowledge Distillation for Occluded Person Re-Identification [137.8810152620818]
We propose a network named Pose-Guided Feature Learning with Knowledge Distillation (PGFL-KD).
The PGFL-KD consists of a main branch (MB) and two pose-guided branches, i.e., a foreground-enhanced branch (FEB) and a body part semantics aligned branch (SAB).
Experiments on occluded, partial, and holistic ReID tasks show the effectiveness of our proposed network.
arXiv Detail & Related papers (2021-07-31T03:34:27Z)
- Inter-class Discrepancy Alignment for Face Recognition [55.578063356210144]
We propose a unified framework called Inter-class Discrepancy Alignment (IDA).
IDA-DAO is used to align the similarity scores considering the discrepancy between the images and its neighbors.
IDA-SSE can provide convincing inter-class neighbors by introducing virtual candidate images generated with GAN.
arXiv Detail & Related papers (2021-03-02T08:20:08Z)