Body Part-Based Representation Learning for Occluded Person
Re-Identification
- URL: http://arxiv.org/abs/2211.03679v1
- Date: Mon, 7 Nov 2022 16:48:41 GMT
- Title: Body Part-Based Representation Learning for Occluded Person
Re-Identification
- Authors: Vladimir Somers and Christophe De Vleeschouwer and Alexandre Alahi
- Abstract summary: Occluded person re-identification (ReID) is a person retrieval task that aims at matching occluded person images with holistic ones.
Part-based methods have been shown to be beneficial, as they offer fine-grained information and are well suited to representing partially visible human bodies.
We propose BPBreID, a body part-based ReID model for solving the above issues.
- Score: 102.27216744301356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Occluded person re-identification (ReID) is a person retrieval task that
aims at matching occluded person images with holistic ones. For addressing
occluded ReID, part-based methods have been shown to be beneficial, as they
offer fine-grained information and are well suited to representing partially
visible human bodies. However, training a part-based model is challenging for
two reasons. Firstly, individual body part appearance is not as discriminative
as global appearance (two distinct IDs might share the same local appearance),
which means standard ReID training objectives using identity labels are not
well suited to local feature learning. Secondly, ReID datasets do not provide
human topographical annotations. In this work, we propose BPBreID, a body
part-based ReID model that addresses these issues. We first design two modules
for predicting body part attention maps and producing body part-based features
of the ReID target. We then propose GiLt, a novel training scheme for learning
part-based representations that is robust to occlusions and non-discriminative
local appearance. Extensive experiments on popular holistic and occluded
datasets show the effectiveness of our proposed method, which outperforms
state-of-the-art methods by 0.7% mAP and 5.6% rank-1 accuracy on the
challenging Occluded-Duke dataset. Our code is available at
https://github.com/VlSomers/bpbreid.
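The two modules described in the abstract (predicting body part attention maps and pooling body part-based features) can be illustrated with a short, hedged sketch. The code below is not the official BPBreID implementation from the linked repository; the class name PartAttentionPooling, the number of parts, the visibility heuristic, and the visibility-weighted matching at the end are all illustrative assumptions, written in PyTorch only as a convenient notation.
```python
# Minimal sketch (not the official BPBreID code): given backbone feature maps,
# predict K body part attention maps and pool one feature vector per part.
# Names, shapes, and the visibility heuristic are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartAttentionPooling(nn.Module):
    def __init__(self, feat_dim: int = 2048, num_parts: int = 5):
        super().__init__()
        # 1x1 conv predicts one attention logit map per body part (+1 background map).
        self.attn = nn.Conv2d(feat_dim, num_parts + 1, kernel_size=1)
        self.num_parts = num_parts

    def forward(self, feat_map: torch.Tensor):
        # feat_map: (B, C, H, W) backbone output
        logits = self.attn(feat_map)                       # (B, K+1, H, W)
        attn = logits.softmax(dim=1)                       # pixel-wise soft part assignment
        parts = attn[:, 1:]                                # drop the background channel
        # Attention-weighted average pooling -> one embedding per part: (B, K, C)
        part_feats = torch.einsum("bkhw,bchw->bkc", parts, feat_map)
        part_feats = part_feats / (parts.sum(dim=(2, 3)).unsqueeze(-1) + 1e-6)
        # Rough visibility score per part: peak attention response in the image.
        visibility = parts.amax(dim=(2, 3))                # (B, K)
        return part_feats, visibility

# Usage: pool part features and compare two images mostly on mutually visible parts.
if __name__ == "__main__":
    pool = PartAttentionPooling(feat_dim=2048, num_parts=5)
    f_q = torch.randn(1, 2048, 16, 8)   # query feature map (e.g., ResNet-50, stride 16)
    f_g = torch.randn(1, 2048, 16, 8)   # gallery feature map
    pq, vq = pool(f_q)
    pg, vg = pool(f_g)
    dist = 1 - F.cosine_similarity(pq, pg, dim=-1)         # (1, K) per-part distances
    w = vq * vg                                            # down-weight occluded parts
    score = (w * dist).sum(-1) / (w.sum(-1) + 1e-6)        # visibility-weighted distance
    print(score)
```
In practice, a part-based model of this kind also needs part supervision (e.g., pseudo part labels) and an occlusion-robust training objective such as the GiLt scheme mentioned in the abstract; those pieces are omitted from this sketch.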
Related papers
- Enhancing Long-Term Person Re-Identification Using Global, Local Body
Part, and Head Streams [8.317899947627202]
We propose a novel framework that effectively learns and utilizes both global and local information.
The proposed framework is trained by backpropagating the weighted summation of the identity classification losses.
arXiv Detail & Related papers (2024-03-05T11:57:10Z) - Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification [78.52704557647438]
We propose a novel FIne-grained Representation and Recomposition (FIRe$2$) framework to tackle both limitations without any auxiliary annotation or data.
Experiments demonstrate that FIRe$2$ can achieve state-of-the-art performance on five widely-used cloth-changing person Re-ID benchmarks.
arXiv Detail & Related papers (2023-08-21T12:59:48Z) - Occluded Person Re-Identification via Relational Adaptive Feature
Correction Learning [8.015703163954639]
Occluded person re-identification (Re-ID) in images captured by multiple cameras is challenging because the target person is occluded by pedestrians or objects.
Most existing methods utilize the off-the-shelf pose or parsing networks as pseudo labels, which are prone to error.
We propose a novel Occlusion Correction Network (OCNet) that corrects features through relational-weight learning and obtains diverse and representative features without using external networks.
arXiv Detail & Related papers (2022-12-09T07:48:47Z) - Quality-aware Part Models for Occluded Person Re-identification [77.24920810798505]
Occlusion poses a major challenge for person re-identification (ReID)
Existing approaches typically rely on outside tools to infer visible body parts, which may be suboptimal in terms of both computational efficiency and ReID accuracy.
We propose a novel method named Quality-aware Part Models (QPM) for occlusion-robust ReID.
arXiv Detail & Related papers (2022-01-01T03:51:09Z) - PGGANet: Pose Guided Graph Attention Network for Person
Re-identification [0.0]
Person re-identification (ReID) aims at retrieving a person from images captured by different cameras.
It has been proved that using local features together with global feature of person image could help to give robust feature representations for person retrieval.
We propose a pose guided graph attention network, a multi-branch architecture consisting of one branch for global feature, one branch for mid-granular body features and one branch for fine-granular key point features.
arXiv Detail & Related papers (2021-11-29T09:47:39Z) - Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present a large scale unlabeled person re-identification (Re-ID) dataset "LUPerson"
We make the first attempt of performing unsupervised pre-training for improving the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z) - Identity-Guided Human Semantic Parsing for Person Re-Identification [42.705908907250986]
We propose the identity-guided human semantic parsing approach (ISP) to locate both the human body parts and personal belongings at pixel-level for aligned person re-ID.
arXiv Detail & Related papers (2020-07-27T12:12:27Z) - Pose-guided Visible Part Matching for Occluded Person ReID [80.81748252960843]
We propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns the discriminative features with pose-guided attention and self-mines the part visibility.
Experimental results on three reported occluded benchmarks show that the proposed method achieves competitive performance to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-01T04:36:51Z) - High-Order Information Matters: Learning Relation and Topology for
Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms state-of-the-art methods by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.