Pose-guided Visible Part Matching for Occluded Person ReID
- URL: http://arxiv.org/abs/2004.00230v1
- Date: Wed, 1 Apr 2020 04:36:51 GMT
- Title: Pose-guided Visible Part Matching for Occluded Person ReID
- Authors: Shang Gao, Jingya Wang, Huchuan Lu, Zimo Liu
- Abstract summary: We propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns the discriminative features with pose-guided attention and self-mines the part visibility.
Experimental results on three reported occluded benchmarks show that the proposed method achieves performance competitive with state-of-the-art methods.
- Score: 80.81748252960843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occluded person re-identification is a challenging task as the appearance
varies substantially with various obstacles, especially in the crowd scenario.
To address this issue, we propose a Pose-guided Visible Part Matching (PVPM)
method that jointly learns the discriminative features with pose-guided
attention and self-mines the part visibility in an end-to-end framework.
Specifically, the proposed PVPM includes two key components: 1) a pose-guided
attention (PGA) module for part feature pooling that exploits more
discriminative local features; 2) a pose-guided visibility predictor (PVP) that
estimates whether a part is occluded. As there are no ground-truth training
annotations for the occluded parts, we exploit the characteristic part
correspondence in positive pairs and self-mine the correspondence scores via
graph matching. The generated correspondence scores are then utilized as
pseudo-labels for the visibility predictor (PVP). Experimental results on three
reported occluded benchmarks show that the proposed method achieves performance
competitive with state-of-the-art methods. The source codes
are available at https://github.com/hh23333/PVPM
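The self-mining idea can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's actual graph-matching formulation: it scores each part of a positive image pair by cosine similarity and thresholds the score into a visibility pseudo-label. The function names `correspondence_scores`, `visibility_pseudo_labels`, and the threshold value are hypothetical, introduced here only for illustration.

```python
import numpy as np

def correspondence_scores(parts_a, parts_b):
    """Cosine similarity between corresponding part features of a positive pair.

    parts_a, parts_b: (P, D) arrays holding P part features per image.
    Returns a (P,) score vector; a low score suggests the part is occluded
    in at least one of the two images.
    """
    a = parts_a / np.linalg.norm(parts_a, axis=1, keepdims=True)
    b = parts_b / np.linalg.norm(parts_b, axis=1, keepdims=True)
    return (a * b).sum(axis=1)

def visibility_pseudo_labels(parts_a, parts_b, thresh=0.5):
    """Threshold correspondence scores into 0/1 visibility pseudo-labels
    that could supervise a visibility predictor."""
    return (correspondence_scores(parts_a, parts_b) > thresh).astype(np.float32)
```

In the paper, the correspondence scores are instead obtained by solving a graph-matching problem over the parts of the positive pair, which accounts for inter-part relations rather than scoring each part independently as above.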
Related papers
- DiP: Learning Discriminative Implicit Parts for Person Re-Identification [18.89539401924382]
We propose to learn Discriminative implicit Parts (DiPs) which are decoupled from explicit body parts.
We also propose a novel implicit position to give a geometric interpretation for each DiP.
The proposed method achieves state-of-the-art performance on multiple person ReID benchmarks.
arXiv Detail & Related papers (2022-12-24T17:59:01Z)
- Body Part-Based Representation Learning for Occluded Person Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model for solving the above issues.
arXiv Detail & Related papers (2022-11-07T16:48:41Z)
- Coarse-to-Fine Point Cloud Registration with SE(3)-Equivariant Representations [24.772676537277547]
Point cloud registration is a crucial problem in computer vision and robotics.
We adopt a coarse-to-fine pipeline that concurrently handles both issues.
Our proposed method increases the recall rate by 20% compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-10-05T06:35:01Z)
- Quality-aware Part Models for Occluded Person Re-identification [77.24920810798505]
Occlusion poses a major challenge for person re-identification (ReID).
Existing approaches typically rely on outside tools to infer visible body parts, which may be suboptimal in terms of both computational efficiency and ReID accuracy.
We propose a novel method named Quality-aware Part Models (QPM) for occlusion-robust ReID.
arXiv Detail & Related papers (2022-01-01T03:51:09Z)
- Dual Contrastive Learning for General Face Forgery Detection [64.41970626226221]
We propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which constructs positive and negative paired data.
To explore the essential discrepancies, Intra-Instance Contrastive Learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in the forged faces.
arXiv Detail & Related papers (2021-12-27T05:44:40Z)
- Pose-guided Feature Disentangling for Occluded Person Re-identification Based on Transformer [15.839842504357144]
Occluded person re-identification is a challenging task as human body parts could be occluded by some obstacles.
Some existing pose-guided methods solve this problem by aligning body parts according to graph matching.
We propose a transformer-based Pose-guided Feature Disentangling (PFD) method by utilizing pose information to clearly disentangle semantic components.
arXiv Detail & Related papers (2021-12-05T03:23:31Z)
- Pose-Guided Feature Learning with Knowledge Distillation for Occluded Person Re-Identification [137.8810152620818]
We propose a network named Pose-Guided Feature Learning with Knowledge Distillation (PGFL-KD).
The PGFL-KD consists of a main branch (MB) and two pose-guided branches, i.e., a foreground-enhanced branch (FEB) and a body part semantics aligned branch (SAB).
Experiments on occluded, partial, and holistic ReID tasks show the effectiveness of our proposed network.
arXiv Detail & Related papers (2021-07-31T03:34:27Z)
- Inter-class Discrepancy Alignment for Face Recognition [55.578063356210144]
We propose a unified framework called Inter-class Discrepancy Alignment (IDA).
IDA-DAO is used to align the similarity scores considering the discrepancy between an image and its neighbors.
IDA-SSE can provide convincing inter-class neighbors by introducing virtual candidate images generated with GAN.
arXiv Detail & Related papers (2021-03-02T08:20:08Z)
- Unsupervised segmentation via semantic-apparent feature fusion [21.75371777263847]
This research proposes an unsupervised foreground segmentation method based on semantic-apparent feature fusion (SAFF).
Key regions of the foreground object can be accurately localized via semantic features, while apparent features provide richer detailed expression.
By fusing semantic and apparent features, as well as cascading the modules of intra-image adaptive feature weight learning and inter-image common feature learning, the research achieves performance that significantly exceeds baselines.
arXiv Detail & Related papers (2020-05-21T08:28:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.