Occluded Person Re-Identification via Relational Adaptive Feature
Correction Learning
- URL: http://arxiv.org/abs/2212.04712v1
- Date: Fri, 9 Dec 2022 07:48:47 GMT
- Title: Occluded Person Re-Identification via Relational Adaptive Feature
Correction Learning
- Authors: Minjung Kim, MyeongAh Cho, Heansung Lee, Suhwan Cho, Sangyoun Lee
- Abstract summary: Occluded person re-identification (Re-ID) in images captured by multiple cameras is challenging because the target person is occluded by pedestrians or objects.
Most existing methods utilize off-the-shelf pose or parsing networks to generate pseudo labels, which are prone to error.
We propose a novel Occlusion Correction Network (OCNet) that corrects features through relational-weight learning and obtains diverse and representative features without using external networks.
- Score: 8.015703163954639
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Occluded person re-identification (Re-ID) in images captured by multiple
cameras is challenging because the target person is occluded by pedestrians or
objects, especially in crowded scenes. In addition to the processes performed
during holistic person Re-ID, occluded person Re-ID involves the removal of
obstacles and the detection of partially visible body parts. Most existing
methods utilize off-the-shelf pose or parsing networks to generate pseudo labels,
which are prone to error. To address these issues, we propose a novel Occlusion
Correction Network (OCNet) that corrects features through relational-weight
learning and obtains diverse and representative features without using external
networks. In addition, we present a simple concept of a center feature in order
to provide an intuitive solution to pedestrian occlusion scenarios.
Furthermore, we suggest the idea of a Separation Loss (SL) that encourages global
features and part features to focus on different parts. We conduct extensive
experiments on five challenging benchmark datasets for occluded and holistic
Re-ID tasks to demonstrate that our method achieves superior performance to
state-of-the-art methods, especially in occluded scenes.
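
The abstract names the center feature and the Separation Loss (SL) only at a high level and gives no formulas. The snippet below is a minimal, hypothetical sketch, not the paper's definition: it assumes SL penalizes cosine overlap between the global feature and each part feature, and it treats the center feature as a plain mean of part features purely for illustration.

```python
# Hypothetical sketch only -- the paper does not publish these formulas here.
# Assumption 1: Separation Loss (SL) penalizes positive cosine overlap between
#               the global feature and every part feature.
# Assumption 2: the "center feature" is approximated as the mean of part features.
import torch
import torch.nn.functional as F


def separation_loss(global_feat: torch.Tensor, part_feats: torch.Tensor) -> torch.Tensor:
    """global_feat: (B, D); part_feats: (B, P, D); returns a scalar loss."""
    g = F.normalize(global_feat, dim=-1).unsqueeze(1)   # (B, 1, D)
    p = F.normalize(part_feats, dim=-1)                 # (B, P, D)
    cos = (g * p).sum(dim=-1)                           # (B, P) cosine similarities
    return cos.clamp(min=0).mean()                      # discourage overlapping focus


def center_feature(part_feats: torch.Tensor) -> torch.Tensor:
    """A naive stand-in for the center feature: the mean of the part features."""
    return part_feats.mean(dim=1)                       # (B, D)


if __name__ == "__main__":
    g = torch.randn(4, 256)
    parts = torch.randn(4, 6, 256)
    print(separation_loss(g, parts).item(), center_feature(parts).shape)
```

In this reading, driving the loss toward zero pushes the global and part embeddings toward complementary regions, which matches the stated goal of focusing on different parts; the true OCNet formulation may differ.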
Related papers
- ProFD: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification [34.38227097059117]
We propose a Prompt-guided Feature Disentangling method (ProFD) to generate well-aligned part features.
ProFD first designs part-specific prompts and utilizes noisy segmentation mask to preliminarily align visual and textual embedding.
We employ a self-distillation strategy, retaining pre-trained knowledge of CLIP to mitigate over-fitting.
arXiv Detail & Related papers (2024-09-30T08:31:14Z)
- Keypoint Promptable Re-Identification [76.31113049256375]
Occluded Person Re-Identification (ReID) is a metric learning task that involves matching occluded individuals based on their appearance.
We introduce Keypoint Promptable ReID (KPR), a novel formulation of the ReID problem that explicitly complements the input bounding box with a set of semantic keypoints.
We release custom keypoint labels for four popular ReID benchmarks. Experiments on person retrieval, as well as on pose tracking, demonstrate that our method systematically surpasses previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-07-25T15:20:58Z)
- Learning Cross-modality Information Bottleneck Representation for Heterogeneous Person Re-Identification [61.49219876388174]
Visible-Infrared person re-identification (VI-ReID) is an important and challenging task in intelligent video surveillance.
Existing methods mainly focus on learning a shared feature space to reduce the modality discrepancy between visible and infrared modalities.
We present a novel mutual information and modality consensus network, namely CMInfoNet, to extract modality-invariant identity features.
arXiv Detail & Related papers (2023-08-29T06:55:42Z)
- Attention Disturbance and Dual-Path Constraint Network for Occluded Person Re-identification [36.86516784815214]
We propose a transformer-based Attention Disturbance and Dual-Path Constraint Network (ADP) to enhance the generalization of attention networks.
To imitate real-world obstacles, we introduce an Attention Disturbance Mask (ADM) module that generates offensive noise.
We also develop a Dual-Path Constraint Module (DPC) that can obtain preferable supervision information from holistic images.
arXiv Detail & Related papers (2023-03-20T09:56:35Z)
- Feature Completion Transformer for Occluded Person Re-identification [25.159974510754992]
Occluded person re-identification (Re-ID) is a challenging problem due to the destruction caused by occluders.
We propose a Feature Completion Transformer (FCFormer) to implicitly complement the semantic information of occluded parts in the feature space.
FCFormer achieves superior performance and outperforms the state-of-the-art methods by significant margins on occluded datasets.
arXiv Detail & Related papers (2023-03-03T01:12:57Z)
- Body Part-Based Representation Learning for Occluded Person Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model for solving the above issues.
arXiv Detail & Related papers (2022-11-07T16:48:41Z)
- Deep Learning-based Occluded Person Re-identification: A Survey [19.76711535866007]
Occluded person re-identification (Re-ID) aims at addressing the occlusion problem when retrieving the person of interest across multiple cameras.
This paper provides a systematic survey of occluded person Re-ID methods.
We summarize four issues caused by occlusion in person Re-ID, i.e., position misalignment, scale misalignment, noisy information, and missing information.
arXiv Detail & Related papers (2022-07-29T03:10:18Z)
- Pose-Guided Feature Learning with Knowledge Distillation for Occluded Person Re-Identification [137.8810152620818]
We propose a network named Pose-Guided Feature Learning with Knowledge Distillation (PGFL-KD).
PGFL-KD consists of a main branch (MB) and two pose-guided branches, i.e., a foreground-enhanced branch (FEB) and a body part semantics aligned branch (SAB).
Experiments on occluded, partial, and holistic ReID tasks show the effectiveness of our proposed network.
arXiv Detail & Related papers (2021-07-31T03:34:27Z)
- Learning Disentangled Representation Implicitly via Transformer for Occluded Person Re-Identification [35.40162083252931]
DRL-Net is a representation learning network that handles occluded re-ID without requiring strict person image alignment or any additional supervision.
It measures image similarity by automatically disentangling the representation of undefined semantic components.
The DRL-Net achieves superior re-ID performance consistently and outperforms the state-of-the-art by large margins for Occluded-DukeMTMC.
arXiv Detail & Related papers (2021-07-06T04:24:10Z)
- Fully Unsupervised Person Re-identification via Selective Contrastive Learning [58.5284246878277]
Person re-identification (ReID) aims at retrieving the same person from images captured by various cameras.
We propose a novel selective contrastive learning framework for unsupervised feature learning.
Experimental results demonstrate the superiority of our method in unsupervised person ReID compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-10-15T09:09:23Z)
- Do Not Disturb Me: Person Re-identification Under the Interference of Other Pedestrians [97.45805377769354]
This paper presents a novel deep network termed Pedestrian-Interference Suppression Network (PISNet).
PISNet leverages a Query-Guided Attention Block (QGAB) to enhance the features of the target in the gallery under the guidance of the query; a rough sketch of this idea appears after this list.
Our method is evaluated on two new pedestrian-interference datasets and the results show that the proposed method performs favorably against existing Re-ID methods.
arXiv Detail & Related papers (2020-08-16T17:45:14Z)
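
The PISNet entry above describes a Query-Guided Attention Block (QGAB) that enhances the target's features in the gallery image under the guidance of the query. The sketch below is an assumed, simplified reading of that idea, not PISNet's actual code: it scores each spatial location of the gallery feature map by its similarity to the query's global feature and re-weights the map accordingly.

```python
# Hypothetical illustration of query-guided attention (an assumption, not PISNet's code):
# gallery locations that resemble the query feature are emphasized, which suppresses
# interfering pedestrians in the gallery image.
import torch
import torch.nn.functional as F


def query_guided_attention(query_feat: torch.Tensor, gallery_map: torch.Tensor) -> torch.Tensor:
    """query_feat: (B, C); gallery_map: (B, C, H, W); returns a re-weighted (B, C, H, W) map."""
    B, C, H, W = gallery_map.shape
    g = F.normalize(gallery_map.flatten(2), dim=1)        # (B, C, H*W), unit-norm per location
    q = F.normalize(query_feat, dim=1).unsqueeze(-1)      # (B, C, 1)
    attn = torch.softmax((q * g).sum(dim=1), dim=-1)      # (B, H*W) weights over locations
    return gallery_map * attn.view(B, 1, H, W) * (H * W)  # rescale so weights average to 1


if __name__ == "__main__":
    out = query_guided_attention(torch.randn(2, 128), torch.randn(2, 128, 16, 8))
    print(out.shape)  # torch.Size([2, 128, 16, 8])
```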