Quality-aware Part Models for Occluded Person Re-identification
- URL: http://arxiv.org/abs/2201.00107v1
- Date: Sat, 1 Jan 2022 03:51:09 GMT
- Title: Quality-aware Part Models for Occluded Person Re-identification
- Authors: Pengfei Wang, Changxing Ding, Zhiyin Shao, Zhibin Hong, Shengli Zhang,
Dacheng Tao
- Abstract summary: Occlusion poses a major challenge for person re-identification (ReID).
Existing approaches typically rely on outside tools to infer visible body parts, which may be suboptimal in terms of both computational efficiency and ReID accuracy.
We propose a novel method named Quality-aware Part Models (QPM) for occlusion-robust ReID.
- Score: 77.24920810798505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occlusion poses a major challenge for person re-identification (ReID).
Existing approaches typically rely on outside tools to infer visible body
parts, which may be suboptimal in terms of both computational efficiency and
ReID accuracy. In particular, they may fail when facing complex occlusions,
such as those between pedestrians. Accordingly, in this paper, we propose a
novel method named Quality-aware Part Models (QPM) for occlusion-robust ReID.
First, we propose to jointly learn part features and predict part quality
scores. As no quality annotation is available, we introduce a strategy that
automatically assigns low scores to occluded body parts, thereby weakening the
impact of occluded body parts on ReID results. Second, based on the predicted
part quality scores, we propose a novel identity-aware spatial attention (ISA)
module. In this module, a coarse identity-aware feature is utilized to
highlight pixels of the target pedestrian, so as to handle the occlusion
between pedestrians. Third, we design an adaptive and efficient approach for
generating global features from common non-occluded regions with respect to
each image pair. This design is crucial, but is often ignored by existing
methods. QPM has three key advantages: 1) it does not rely on any outside tools
in either the training or inference stages; 2) it handles occlusions caused by
both objects and other pedestrians; and 3) it is highly computationally efficient.
Experimental results on four popular databases for occluded ReID demonstrate
that QPM consistently outperforms state-of-the-art methods by significant
margins. The code of QPM will be released.
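The abstract gives no implementation details, so the following is a minimal sketch (not the released QPM code; the function name, tensor shapes, and the cosine-distance choice are assumptions) of how predicted part quality scores could suppress occluded parts and restrict an image-pair comparison to regions that are reliable in both images:

```python
# Minimal sketch, assuming per-part features and quality scores are already
# predicted by the network; illustrative only, not the authors' code.
import torch
import torch.nn.functional as F

def part_distance(feats_a, quals_a, feats_b, quals_b, eps=1e-6):
    """feats_*: (P, D) part features; quals_*: (P,) quality scores in [0, 1]."""
    # A part only counts as much as it is trusted in BOTH images, which
    # approximates matching on the common non-occluded regions of the pair.
    joint_quality = quals_a * quals_b                                # (P,)
    per_part = 1.0 - F.cosine_similarity(feats_a, feats_b, dim=1)    # (P,)
    # Quality-weighted average: occluded parts are suppressed.
    return (joint_quality * per_part).sum() / (joint_quality.sum() + eps)

# Toy usage: the third part is "occluded" in image B, so it barely contributes.
torch.manual_seed(0)
feats_a, feats_b = torch.randn(3, 8), torch.randn(3, 8)
quals_a = torch.tensor([0.9, 0.8, 0.7])
quals_b = torch.tensor([0.9, 0.8, 0.05])
print(part_distance(feats_a, quals_a, feats_b, quals_b).item())
```

In this reading, a low score on either side effectively removes the part from the distance, which is why no external visibility tool is needed at inference time.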
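The identity-aware spatial attention (ISA) module is likewise described only at a high level. A hedged sketch of the general idea (a coarse identity-aware feature highlighting pixels of the target pedestrian) might look like the following, where the dot-product similarity and sigmoid gating are assumptions rather than the paper's design:

```python
# Illustrative sketch of identity-aware spatial attention; the shapes, the
# scaled dot-product similarity, and the sigmoid gating are assumed, not QPM's.
import torch

def identity_aware_attention(feat_map, id_feat, temperature=1.0):
    """feat_map: (C, H, W) spatial features; id_feat: (C,) coarse identity feature.
    Returns an (H, W) attention map and the re-weighted feature map."""
    c, h, w = feat_map.shape
    pixels = feat_map.reshape(c, h * w)                         # (C, HW)
    # How similar each pixel is to the coarse identity-aware feature.
    sim = torch.einsum('c,cn->n', id_feat, pixels) / (c ** 0.5)
    attn = torch.sigmoid(sim / temperature).reshape(h, w)       # values in (0, 1)
    # Pixels belonging to other pedestrians or background are pushed toward zero.
    return attn, feat_map * attn

# Toy usage with random features.
attn, weighted = identity_aware_attention(torch.randn(16, 8, 4), torch.randn(16))
print(attn.shape, weighted.shape)
```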
Related papers
- AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation [55.179287851188036]
We introduce a novel all-in-one-stage framework, AiOS, for expressive human pose and shape recovery without an additional human detection step.
We first employ a human token to probe a human location in the image and encode global features for each instance.
Then, we introduce a joint-related token to probe the human joint in the image and encode a fine-grained local feature.
arXiv Detail & Related papers (2024-03-26T17:59:23Z)
- SDR-GAIN: A High Real-Time Occluded Pedestrian Pose Completion Method for Autonomous Driving [3.3113002380233447]
We present a novel pedestrian pose keypoint completion method called the separation and dimensionality reduction-based generative adversarial imputation networks (SDR-GAIN).
The SDR-GAIN algorithm runs in approximately 0.4 ms, making it suitable for real-time use.
arXiv Detail & Related papers (2023-06-06T09:35:56Z)
- Body Part-Based Representation Learning for Occluded Person Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model that addresses these challenges.
arXiv Detail & Related papers (2022-11-07T16:48:41Z)
- Learning to Estimate Hidden Motions with Global Motion Aggregation [71.12650817490318]
Occlusions pose a significant challenge to optical flow algorithms that rely on local evidence.
We introduce a global motion aggregation module to find long-range dependencies between pixels in the first image.
We demonstrate that the optical flow estimates in the occluded regions can be significantly improved without damaging the performance in non-occluded regions.
arXiv Detail & Related papers (2021-04-06T10:32:03Z)
- Robust Person Re-Identification through Contextual Mutual Boosting [77.1976737965566]
We propose the Contextual Mutual Boosting Network (CMBN), which localizes pedestrians and recalibrates features by effectively exploiting contextual information and statistical inference.
Experiments on the benchmarks demonstrate the superiority of the architecture compared with the state-of-the-art.
arXiv Detail & Related papers (2020-09-16T06:33:35Z)
- An Attention-Based Deep Learning Model for Multiple Pedestrian Attributes Recognition [4.6898263272139795]
This paper provides a novel solution to the problem of automatic characterization of pedestrians in surveillance footage.
We propose a multi-task deep model that uses an element-wise multiplication layer to extract more comprehensive feature representations.
Our experiments were performed on two well-known datasets (RAP and PETA) and point to the superiority of the proposed method with respect to the state-of-the-art.
arXiv Detail & Related papers (2020-04-02T16:21:14Z)
- Pose-guided Visible Part Matching for Occluded Person ReID [80.81748252960843]
We propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns the discriminative features with pose-guided attention and self-mines the part visibility.
Experimental results on three reported occluded benchmarks show that the proposed method achieves performance competitive with state-of-the-art methods.
arXiv Detail & Related papers (2020-04-01T04:36:51Z)
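For contrast with QPM's tool-free quality scores, the pose-guided visibility used by PVPM-style methods starts from an external pose estimator's keypoint confidences. A purely illustrative sketch, in which the keypoint grouping and the 0.3 threshold are assumptions, is:

```python
# Illustrative only: derive hard part-visibility flags from keypoint
# confidences produced by an external pose estimator (the grouping of
# COCO-style keypoints into parts and the threshold are assumptions).
import torch

PART_KEYPOINTS = {
    "head": [0, 1, 2, 3, 4],
    "torso": [5, 6, 11, 12],
    "legs": [13, 14, 15, 16],
}

def part_visibility(kp_conf, threshold=0.3):
    """kp_conf: (17,) keypoint confidences -> {part: 0.0 or 1.0}."""
    return {
        part: float(kp_conf[idx].mean() > threshold)
        for part, idx in PART_KEYPOINTS.items()
    }

# Toy usage: the leg keypoints have low confidence, so the legs are marked occluded.
conf = torch.tensor([0.9] * 5 + [0.8] * 8 + [0.05] * 4)
print(part_visibility(conf))
```

This dependence on a pose estimator is exactly the reliance on outside tools that QPM's learned quality scores aim to remove.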