Learning Instance-level Spatial-Temporal Patterns for Person
Re-identification
- URL: http://arxiv.org/abs/2108.00171v1
- Date: Sat, 31 Jul 2021 07:44:47 GMT
- Title: Learning Instance-level Spatial-Temporal Patterns for Person
Re-identification
- Authors: Min Ren and Lingxiao He and Xingyu Liao and Wu Liu and Yunlong Wang
and Tieniu Tan
- Abstract summary: We propose a novel Instance-level and Spatial-temporal Disentangled Re-ID method (InSTD) to improve Re-ID accuracy.
In our proposed framework, personalized information such as moving direction is explicitly considered to further narrow down the search space.
The proposed method achieves an mAP of 90.8% on Market-1501 and 89.1% on DukeMTMC-reID, improving over baselines of 82.2% and 72.7%, respectively.
- Score: 80.43222559182072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification (Re-ID) aims to match pedestrians across
disjoint cameras. Most Re-ID methods formulate the task as visual
representation learning plus image search, so accuracy is strongly affected by
the size of the search space. Spatial-temporal information has proven effective
at filtering out irrelevant negative samples and significantly improving Re-ID
accuracy. However, existing spatial-temporal person Re-ID methods remain coarse
and do not exploit this information sufficiently. In this paper, we propose a
novel Instance-level and Spatial-Temporal Disentangled Re-ID method (InSTD) to
improve Re-ID accuracy. In our proposed framework, personalized information
such as moving direction is explicitly considered to further narrow down the
search space. Moreover, the spatial-temporal transfer probability is
disentangled from a joint distribution into marginal distributions, so that outliers
can also be well modeled. Extensive experimental analyses are presented,
demonstrating the superiority of our method and providing further insights. The
proposed method achieves an mAP of 90.8% on Market-1501 and 89.1% on
DukeMTMC-reID, improving over baselines of 82.2% and 72.7%, respectively.
In addition, to provide a better benchmark for person re-identification,
we release a cleaned data list of DukeMTMC-reID with this paper:
https://github.com/RenMin1991/cleaned-DukeMTMC-reID/
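The core mechanism described above — conditioning camera-to-camera transfer statistics on instance-level cues such as moving direction, and factorizing the joint spatial-temporal distribution into marginals so that sparsely observed transitions remain well modeled — can be illustrated with a short sketch. Everything below (the bin sizes, the leave/enter factorization, and the geometric-mean fusion with visual similarity) is a hypothetical illustration under assumed conventions, not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch in the spirit of InSTD, NOT the authors' code.
# Assumed setup: per-image camera IDs, timestamps, and quantized moving
# directions are available; time gaps are binned into NUM_BINS buckets.
NUM_CAMS, NUM_DIRS, NUM_BINS = 8, 4, 100

# Marginal transfer-time histograms indexed by (camera, moving direction).
# Estimating per-camera marginals instead of a joint (cam_a, cam_b, time-gap)
# table needs far fewer samples per cell, so rare transitions (outliers)
# are still reasonably modeled. Uniform init stands in for learned statistics.
leave = np.full((NUM_CAMS, NUM_DIRS, NUM_BINS), 1.0 / NUM_BINS)
enter = np.full((NUM_CAMS, NUM_DIRS, NUM_BINS), 1.0 / NUM_BINS)

def st_prior(cam_q, dir_q, cam_g, dir_g, dt):
    """Spatial-temporal prior for a query/gallery pair from marginals only."""
    b = min(abs(int(dt)), NUM_BINS - 1)  # quantize the time gap
    return leave[cam_q, dir_q, b] * enter[cam_g, dir_g, b]

def fused_score(visual_sim, cam_q, dir_q, cam_g, dir_g, dt, lam=0.5):
    """Fuse appearance similarity with the spatial-temporal prior.
    The geometric-mean fusion is an illustrative choice, not the paper's rule."""
    prior = st_prior(cam_q, dir_q, cam_g, dir_g, dt)
    return (visual_sim ** (1.0 - lam)) * (prior ** lam)

# Example: gallery image from camera 3 seen 40 time units after a query from
# camera 0, both moving in direction bin 1, with visual similarity 0.8.
print(fused_score(0.8, 0, 1, 3, 1, 40))
```

Gallery candidates whose spatial-temporal prior is near zero are pushed down the ranking regardless of appearance, which is how such priors shrink the effective search space.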
Related papers
- A High-Accuracy Unsupervised Person Re-identification Method Using
Auxiliary Information Mined from Datasets [53.047542904329866]
We make use of auxiliary information mined from datasets for multi-modal feature learning.
This paper proposes three effective training tricks: Restricted Label Smoothing Cross Entropy Loss (RLSCE), Weight Adaptive Triplet Loss (WATL), and Dynamic Training Iterations (DTI).
arXiv Detail & Related papers (2022-05-06T10:16:18Z)
- A Free Lunch to Person Re-identification: Learning from Automatically
Generated Noisy Tracklets [52.30547023041587]
Unsupervised video-based re-identification (re-ID) methods have been proposed to avoid the high labor cost of annotating re-ID datasets.
However, their performance is still far below that of their supervised counterparts.
In this paper, we propose to tackle this problem by learning re-ID models from automatically generated person tracklets.
arXiv Detail & Related papers (2022-04-02T16:18:13Z)
- No Shifted Augmentations (NSA): compact distributions for robust
self-supervised Anomaly Detection [4.243926243206826]
Unsupervised anomaly detection (AD) requires building a notion of normalcy, distinguishing in-distribution (ID) from out-of-distribution (OOD) data.
We investigate how the geometrical compactness of the ID feature distribution makes isolating and detecting outliers easier.
We propose novel architectural modifications to the self-supervised feature learning step that enable such compact distributions of ID data to be learned.
arXiv Detail & Related papers (2022-03-19T15:55:32Z)
- Video-based Person Re-identification without Bells and Whistles [49.51670583977911]
Video-based person re-identification (Re-ID) aims at matching video tracklets of cropped video frames to identify pedestrians across different cameras.
Severe spatial and temporal misalignment exists in those cropped tracklets due to imperfect detection and tracking results generated by obsolete methods.
We present a simple re-Detect and Link (DL) module that effectively reduces such noise by applying deep learning-based detection and tracking to the cropped tracklets.
arXiv Detail & Related papers (2021-05-22T10:17:38Z)
- ES-Net: Erasing Salient Parts to Learn More in Re-Identification [46.624740579314924]
We propose a novel network, Erasing-Salient Net (ES-Net), to learn comprehensive features by erasing the salient areas in an image.
Our ES-Net outperforms state-of-the-art methods on three person re-ID benchmarks and two vehicle re-ID benchmarks (a sketch of this erasing idea appears after this list).
arXiv Detail & Related papers (2021-03-10T08:19:46Z)
- Person Re-identification based on Robust Features in Open-world [0.0]
We propose a low-cost, high-efficiency method that addresses shortcomings of existing re-ID research.
Our approach is based on a pose estimation model, improved with group convolution, that obtains continuous key points of pedestrians.
Our method achieves Rank-1: 60.9%, Rank-5: 78.1%, and mAP: 49.2% on this dataset, exceeding most existing state-of-the-art re-ID models.
arXiv Detail & Related papers (2021-02-22T06:49:28Z)
- Unsupervised Noisy Tracklet Person Re-identification [100.85530419892333]
We present a novel selective tracklet learning (STL) approach that can train discriminative person re-id models from unlabelled tracklet data.
This avoids the tedious and costly process of exhaustively labelling person image/tracklet true matching pairs across camera views.
Our method is notably more robust to arbitrarily noisy raw tracklet data and is therefore scalable to learning discriminative models from unconstrained tracking data.
arXiv Detail & Related papers (2021-01-16T07:31:00Z)
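The ES-Net entry above describes learning comprehensive features by erasing the most salient parts of an image. The following is a minimal PyTorch sketch of that general idea, assuming the erasing is applied to backbone feature maps using a simple per-image activation threshold; the exact saliency definition and erasing location in ES-Net may differ:

```python
import torch

def erase_salient(features: torch.Tensor, keep_ratio: float = 0.7) -> torch.Tensor:
    """Zero out the most activated spatial positions of a CNN feature map.

    features: (N, C, H, W) activations from a backbone.
    keep_ratio: fraction of spatial positions left untouched (assumed knob).
    """
    n, c, h, w = features.shape
    saliency = features.abs().mean(dim=1).reshape(n, -1)       # per-pixel saliency, (N, H*W)
    k = max(1, int(keep_ratio * h * w))
    thresh = saliency.kthvalue(k, dim=1, keepdim=True).values  # per-image threshold
    mask = (saliency <= thresh).float().reshape(n, 1, h, w)    # 1 = keep, 0 = erase
    return features * mask                                     # suppress salient parts

# Example: force a second branch to learn from the remaining, non-salient regions.
feats = torch.randn(2, 256, 16, 8)
erased = erase_salient(feats, keep_ratio=0.7)
print(erased.shape)  # torch.Size([2, 256, 16, 8])
```

Erasing during training acts as a feature-level regularizer: the network cannot rely solely on its most discriminative region and must pick up complementary cues.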
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.