Person Re-identification based on Robust Features in Open-world
- URL: http://arxiv.org/abs/2102.10798v1
- Date: Mon, 22 Feb 2021 06:49:28 GMT
- Title: Person Re-identification based on Robust Features in Open-world
- Authors: Yaguan Qian and Anlin Sun
- Abstract summary: We propose a low-cost, high-efficiency method that addresses shortcomings of existing re-ID research.
Our approach uses a pose estimation model, improved with group convolution, to obtain continuous pedestrian key points.
Our method achieves Rank-1: 60.9%, Rank-5: 78.1%, and mAP: 49.2% on this dataset, exceeding most existing state-of-the-art re-ID models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning technology has driven the rapid development of person
re-identification (re-ID). However, several challenges remain in the open world.
First, existing re-ID research usually assumes that only one factor variable
(view, clothing, pedestrian pose, pedestrian occlusion, image resolution,
RGB/IR modality) changes, ignoring the complexity of multi-factor variation in
the open world. Second, existing re-ID methods depend too heavily on clothing
color and other appearance features of pedestrians, which are easily disguised
or changed. In addition, the lack of benchmark datasets containing multi-factor
variables also hinders the practical application of re-ID in the open world. In
this paper, we propose a low-cost, high-efficiency method that addresses
shortcomings of existing re-ID research, such as unreliable feature selection,
inefficient feature extraction, and single-variable study designs. Our approach
uses a pose estimation model, improved with group convolution, to obtain
continuous pedestrian key points, and applies dynamic time warping (DTW) to
measure the similarity of features between different pedestrians. To verify the
effectiveness of our method, we also provide a miniature dataset that is closer
to the real world and includes clothes-changing pedestrians and fused
cross-modality factor variables. Extensive experiments show that our method
achieves Rank-1: 60.9%, Rank-5: 78.1%, and mAP: 49.2% on this dataset,
exceeding most existing state-of-the-art re-ID models.
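The abstract names two concrete ingredients: a pose estimator lightened with group convolution, and DTW matching over the resulting keypoint sequences. Since no code accompanies the paper, the sketch below is a minimal illustration under assumed names and shapes (`GroupedPoseBlock`, `dtw_distance`, a 17-keypoint-per-frame layout are hypothetical, not the authors' implementation): a grouped convolutional block and a plain DTW distance between two pedestrians' keypoint sequences, where a lower distance suggests a more likely identity match.

```python
# Illustrative sketch only: the paper publishes no code, so the model stub,
# tensor shapes, and 17-keypoint layout below are assumptions, not the
# authors' implementation.
import numpy as np
import torch
import torch.nn as nn


class GroupedPoseBlock(nn.Module):
    """Toy convolutional block using group convolution, the kind of
    lightweight change the abstract describes for its pose estimator."""

    def __init__(self, in_ch: int = 64, out_ch: int = 64, groups: int = 8):
        super().__init__()
        # groups > 1 splits the channels into independent convolutions,
        # cutting parameters and FLOPs relative to a standard convolution.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=groups)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))


def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic time warping distance between two keypoint sequences.

    seq_a, seq_b: arrays of shape (T, D); each row is one frame's flattened
    keypoint coordinates, and T may differ between the two pedestrians.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # frame-to-frame cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])


if __name__ == "__main__":
    # Grouped convolution on a dummy feature map (N, C, H, W).
    block = GroupedPoseBlock().eval()
    with torch.no_grad():
        feat = block(torch.randn(1, 64, 128, 64))
    print("grouped-conv feature map:", tuple(feat.shape))

    # Two hypothetical pedestrians tracked for different numbers of frames,
    # each frame described by 17 (x, y) keypoints -> 34 values.
    person_a = np.random.rand(30, 34)
    person_b = np.random.rand(42, 34)
    print("DTW distance:", dtw_distance(person_a, person_b))
```

In practice the keypoint sequences would come from the improved pose model rather than random arrays, and the DTW scores would be ranked across the gallery to produce Rank-1/Rank-5 and mAP figures like those reported above.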
Related papers
- Dynamic Identity-Guided Attention Network for Visible-Infrared Person Re-identification [17.285526655788274]
Visible-infrared person re-identification (VI-ReID) aims to match people with the same identity between visible and infrared modalities.
Existing methods generally try to bridge the cross-modal differences at the image or feature level.
We introduce a dynamic identity-guided attention network (DIAN) to mine identity-guided and modality-consistent embeddings.
arXiv Detail & Related papers (2024-05-21T12:04:56Z)
- Robust Ensemble Person Re-Identification via Orthogonal Fusion with Occlusion Handling [4.431087385310259]
Occlusion remains one of the major challenges in person re-identification (ReID).
We propose a deep ensemble model that harnesses both CNN and Transformer architectures to generate robust feature representations.
arXiv Detail & Related papers (2024-03-29T18:38:59Z)
- An Open-World, Diverse, Cross-Spatial-Temporal Benchmark for Dynamic Wild Person Re-Identification [58.5877965612088]
Person re-identification (ReID) has made great strides thanks to data-driven deep learning techniques.
The existing benchmark datasets lack diversity, and models trained on these data cannot generalize well to dynamic wild scenarios.
We develop a new Open-World, Diverse, Cross-Spatial-Temporal dataset named OWD with several distinct features.
arXiv Detail & Related papers (2024-03-22T11:21:51Z)
- Learning Feature Recovery Transformer for Occluded Person Re-identification [71.18476220969647]
We propose a new approach called Feature Recovery Transformer (FRT) to address the two challenges simultaneously.
To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity.
In terms of the second challenge, based on the developed graph similarity, for each query image, we propose a recovery transformer that exploits the feature sets of its $k$-nearest neighbors in the gallery to recover the complete features.
arXiv Detail & Related papers (2023-01-05T02:36:16Z)
- Learning Progressive Modality-shared Transformers for Effective Visible-Infrared Person Re-identification [27.75907274034702]
We propose a novel deep learning framework named Progressive Modality-shared Transformer (PMT) for effective VI-ReID.
To reduce the negative effect of modality gaps, we first take the gray-scale images as an auxiliary modality and propose a progressive learning strategy.
To cope with the problem of large intra-class differences and small inter-class differences, we propose a Discriminative Center Loss.
arXiv Detail & Related papers (2022-12-01T02:20:16Z)
- On Exploring Pose Estimation as an Auxiliary Learning Task for Visible-Infrared Person Re-identification [66.58450185833479]
In this paper, we exploit Pose Estimation as an auxiliary learning task to assist the VI-ReID task in an end-to-end framework.
By jointly training these two tasks in a mutually beneficial manner, our model learns higher quality modality-shared and ID-related features.
Experimental results on two benchmark VI-ReID datasets show that the proposed method consistently improves state-of-the-art methods by significant margins.
arXiv Detail & Related papers (2022-01-11T09:44:00Z)
- Adversarial Deep Feature Extraction Network for User Independent Human Activity Recognition [4.988898367111902]
We present an adversarial subject-independent feature extraction method with the maximum mean discrepancy (MMD) regularization for human activity recognition.
We evaluate the method on well-known public data sets showing that it significantly improves user-independent performance and reduces variance in results.
arXiv Detail & Related papers (2021-10-23T07:50:32Z)
- Diverse Knowledge Distillation for End-to-End Person Search [81.4926655119318]
Person search aims to localize and identify a specific person from a gallery of images.
Recent methods can be categorized into two groups, i.e., two-step and end-to-end approaches.
We propose a simple yet strong end-to-end network with diverse knowledge distillation to break the bottleneck.
arXiv Detail & Related papers (2020-12-21T09:04:27Z)
- Intra-Camera Supervised Person Re-Identification [87.88852321309433]
We propose a novel person re-identification paradigm based on an idea of independent per-camera identity annotation.
This eliminates the most time-consuming and tedious inter-camera identity labelling process.
We formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method for Intra-Camera Supervised (ICS) person re-id.
arXiv Detail & Related papers (2020-02-12T15:26:33Z)
- Deep Learning for Person Re-identification: A Survey and Outlook [233.36948173686602]
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras.
By dissecting the involved components in developing a person Re-ID system, we categorize it into the closed-world and open-world settings.
arXiv Detail & Related papers (2020-01-13T12:49:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.