Part-Attention Based Model Make Occluded Person Re-Identification Stronger
- URL: http://arxiv.org/abs/2404.03443v4
- Date: Wed, 1 May 2024 08:39:50 GMT
- Title: Part-Attention Based Model Make Occluded Person Re-Identification Stronger
- Authors: Zhihao Chen, Yiyuan Ge
- Abstract summary: We introduce PAB-ReID, a novel ReID model that incorporates part-attention mechanisms to tackle background clutter and low-quality local feature representations.
Firstly, we introduce human parsing labels to guide the generation of more accurate human part attention maps.
In addition, we propose a fine-grained feature focuser for generating fine-grained human local feature representations while suppressing background interference.
- Score: 1.7648680700685022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of occluded person re-identification (ReID) is to retrieve specific pedestrians in occluded situations. However, occluded person ReID still suffers from background clutter and low-quality local feature representations, which limit model performance. In our research, we introduce a new framework called PAB-ReID, a novel ReID model incorporating part-attention mechanisms to tackle the aforementioned issues effectively. Firstly, we introduce human parsing labels to guide the generation of more accurate human part attention maps. In addition, we propose a fine-grained feature focuser for generating fine-grained human local feature representations while suppressing background interference. Moreover, we design a part triplet loss to supervise the learning of human local features, which optimizes intra-/inter-class distances. We conducted extensive experiments on specialized occlusion and regular ReID datasets, showing that our approach outperforms existing state-of-the-art methods.
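The abstract names two concrete training signals: human parsing labels that supervise the part-attention maps, and a part triplet loss over human local features. The paper's exact formulation is not reproduced here, so the following is a minimal PyTorch sketch under assumed shapes and hyperparameters; the function names and the 0.3 margin are illustrative, not the authors' identifiers.

```python
# Hypothetical sketch of the two supervision signals described in the abstract:
# (1) parsing labels supervising part-attention maps, (2) a per-part triplet loss.
# Shapes, names, and the margin are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


def attention_parsing_loss(part_logits: torch.Tensor, parsing_labels: torch.Tensor) -> torch.Tensor:
    """Supervise part-attention maps with human parsing labels.

    part_logits:    (B, P, H, W) raw attention logits, one channel per body part
                    (channel 0 assumed to be background).
    parsing_labels: (B, H, W) integer parsing map downsampled to the feature size.
    """
    return F.cross_entropy(part_logits, parsing_labels)


def part_triplet_loss(part_feats: torch.Tensor, pids: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    """Batch-hard triplet loss applied independently to each body-part feature.

    part_feats: (B, P, D) one D-dimensional feature per part per image.
    pids:       (B,) person identity labels.
    """
    B, P, D = part_feats.shape
    same_id = pids.unsqueeze(0) == pids.unsqueeze(1)              # (B, B) identity mask
    losses = []
    for p in range(P):
        f = F.normalize(part_feats[:, p], dim=1)                  # (B, D)
        dist = torch.cdist(f, f)                                  # (B, B) pairwise distances
        # hardest positive: farthest sample sharing the identity
        d_ap = (dist * same_id.float()).max(dim=1).values
        # hardest negative: closest sample with a different identity
        d_an = dist.masked_fill(same_id, float("inf")).min(dim=1).values
        losses.append(F.relu(d_ap - d_an + margin).mean())
    return torch.stack(losses).mean()
```

In a full training pipeline these terms would typically be weighted and added to the usual global identity/triplet losses; the weighting scheme and the architecture of the fine-grained feature focuser are only described at a high level in the abstract.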
Related papers
- ProFD: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification [34.38227097059117]
We propose a Prompt-guided Feature Disentangling method (ProFD) to generate well-aligned part features.
ProFD first designs part-specific prompts and utilizes noisy segmentation masks to preliminarily align visual and textual embeddings.
We employ a self-distillation strategy, retaining pre-trained knowledge of CLIP to mitigate over-fitting.
arXiv Detail & Related papers (2024-09-30T08:31:14Z)
- DPMesh: Exploiting Diffusion Prior for Occluded Human Mesh Recovery [71.6345505427213]
DPMesh is an innovative framework for occluded human mesh recovery.
It capitalizes on the profound diffusion prior about object structure and spatial relationships embedded in a pre-trained text-to-image diffusion model.
arXiv Detail & Related papers (2024-04-01T18:59:13Z)
- AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation [55.179287851188036]
We introduce a novel all-in-one-stage framework, AiOS, for expressive human pose and shape recovery without an additional human detection step.
We first employ a human token to probe a human location in the image and encode global features for each instance.
Then, we introduce a joint-related token to probe the human joints in the image and encode fine-grained local features.
arXiv Detail & Related papers (2024-03-26T17:59:23Z)
- Subspace-Guided Feature Reconstruction for Unsupervised Anomaly Localization [5.085309164633571]
Unsupervised anomaly localization plays a critical role in industrial manufacturing.
Most recent methods perform feature matching or reconstruction for the target sample with pre-trained deep neural networks.
We propose a novel subspace-guided feature reconstruction framework to pursue adaptive feature approximation for anomaly localization.
arXiv Detail & Related papers (2023-09-25T06:58:57Z)
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition [57.08108545219043]
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z)
- Occluded Person Re-Identification via Relational Adaptive Feature Correction Learning [8.015703163954639]
Occluded person re-identification (Re-ID) in images captured by multiple cameras is challenging because the target person is occluded by pedestrians or objects.
Most existing methods utilize off-the-shelf pose or parsing networks to obtain pseudo labels, which are prone to error.
We propose a novel Occlusion Correction Network (OCNet) that corrects features through relational-weight learning and obtains diverse and representative features without using external networks.
arXiv Detail & Related papers (2022-12-09T07:48:47Z)
- Body Part-Based Representation Learning for Occluded Person Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model for solving the above issues.
arXiv Detail & Related papers (2022-11-07T16:48:41Z)
- Entity-Conditioned Question Generation for Robust Attention Distribution in Neural Information Retrieval [51.53892300802014]
We show that supervised neural information retrieval models are prone to learning sparse attention patterns over passage tokens.
Using a novel targeted synthetic data generation method, we teach neural IR to attend more uniformly and robustly to all entities in a given passage.
arXiv Detail & Related papers (2022-04-24T22:36:48Z)
- Quality-aware Part Models for Occluded Person Re-identification [77.24920810798505]
Occlusion poses a major challenge for person re-identification (ReID).
Existing approaches typically rely on outside tools to infer visible body parts, which may be suboptimal in terms of both computational efficiency and ReID accuracy.
We propose a novel method named Quality-aware Part Models (QPM) for occlusion-robust ReID.
arXiv Detail & Related papers (2022-01-01T03:51:09Z)
- Holistic Guidance for Occluded Person Re-Identification [7.662745552551165]
In real-world video surveillance applications, person re-identification (ReID) suffers from the effects of occlusions and detection errors.
We introduce a novel Holistic Guidance (HG) method that relies only on person identity labels.
Our proposed student-teacher framework is trained to address the problem by matching the distributions of between- and within-class distances (DCDs) of occluded samples with those of holistic (non-occluded) samples (see the sketch after this list).
In addition to this, a joint generative-discriminative backbone is trained with a denoising autoencoder, allowing the system to recover from occlusions.
arXiv Detail & Related papers (2021-04-13T21:50:29Z)
- Adaptive Deep Metric Embeddings for Person Re-Identification under Occlusions [17.911512103472727]
We propose a novel person ReID method that learns the spatial dependencies between local regions and extracts a discriminative feature representation of the pedestrian image based on Long Short-Term Memory (LSTM).
The proposed loss enables the deep neural network to adaptively learn discriminative metric embeddings, which significantly improve the capability of recognizing unseen person identities.
arXiv Detail & Related papers (2020-02-07T03:18:10Z)
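The Holistic Guidance entry above matches the distributions of within- and between-class distances (DCDs) of occluded samples to those of holistic samples. As a hedged illustration only, not the HG paper's actual objective, the sketch below aligns the first two moments of those distance distributions, with the holistic statistics detached so they act as the teacher; the function names and the moment-matching choice are assumptions.

```python
# Illustrative moment matching between the distance distributions of occluded
# and holistic embeddings, loosely following the DCD-matching idea summarized
# above. This is an assumption-laden sketch, not the HG paper's actual loss.
import torch


def class_distance_sets(feats: torch.Tensor, pids: torch.Tensor):
    """Return (within-class, between-class) pairwise distances for a batch."""
    dist = torch.cdist(feats, feats)                                  # (B, B)
    same = pids.unsqueeze(0) == pids.unsqueeze(1)                     # (B, B)
    off_diag = ~torch.eye(len(pids), dtype=torch.bool, device=feats.device)
    return dist[same & off_diag], dist[~same]


def dcd_matching_loss(occ_feats, occ_pids, hol_feats, hol_pids) -> torch.Tensor:
    """Align mean/std of occluded distance distributions with holistic ones."""
    loss = occ_feats.new_zeros(())
    for occ_d, hol_d in zip(class_distance_sets(occ_feats, occ_pids),
                            class_distance_sets(hol_feats, hol_pids)):
        # holistic statistics are detached so they act as the (fixed) teacher
        loss = loss + (occ_d.mean() - hol_d.mean().detach()).abs() \
                    + (occ_d.std() - hol_d.std().detach()).abs()
    return loss
```

A sampler that draws several images per identity for both the occluded and holistic views is assumed, so that the within-class distance sets are non-empty; the student-teacher backbone and denoising autoencoder mentioned in the entry are omitted here.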