Stronger Baseline for Person Re-Identification
- URL: http://arxiv.org/abs/2112.01059v1
- Date: Thu, 2 Dec 2021 08:50:03 GMT
- Title: Stronger Baseline for Person Re-Identification
- Authors: Fengliang Qi, Bo Yan, Leilei Cao and Hongbin Wang
- Abstract summary: Person re-identification (re-ID) aims to identify the same person of interest across non-overlapping capturing cameras.
We propose a Stronger Baseline for person re-ID, an enhanced version of the currently prevailing method, namely, Strong Baseline.
- Score: 6.087398773657721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification (re-ID) aims to identify the same person of interest
across non-overlapping capturing cameras, which plays an important role in
visual surveillance applications and computer vision research. Fitting a
robust appearance-based representation extractor with a limited amount of
collected training data is crucial for person re-ID due to the high expense of
annotating the identity of unlabeled data. In this work, we propose a Stronger
Baseline for person re-ID, an enhanced version of the currently prevailing
method, namely, Strong Baseline, with tiny modifications but a faster
convergence rate and higher recognition performance. With the aid of Stronger
Baseline, we obtained third place (i.e., 0.94 mAP) in the 2021 VIPriors
Re-identification Challenge without ImageNet-based pre-trained parameter
initialization or any extra supplemental dataset.
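For context on the retrieval setup the abstract describes, the sketch below illustrates how an appearance-based re-ID representation is typically used at test time: query and gallery embeddings (produced by any backbone, e.g. the ResNet-50 commonly used in Strong Baseline implementations, not shown here) are L2-normalized and the gallery is ranked by cosine similarity for each query. This is a minimal generic illustration, not the paper's method; the array shapes and random features are placeholders.

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Row-wise L2 normalization so cosine similarity reduces to a dot product."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def rank_gallery(query_feats: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    """Return, for each query, gallery indices sorted from most to least similar."""
    q = l2_normalize(query_feats)
    g = l2_normalize(gallery_feats)
    sim = q @ g.T                      # cosine similarity matrix (num_query x num_gallery)
    return np.argsort(-sim, axis=1)    # descending similarity = ascending cosine distance

# Toy usage: random "embeddings" stand in for backbone outputs (placeholder shapes).
rng = np.random.default_rng(0)
query = rng.normal(size=(4, 256))      # 4 query images, 256-d features
gallery = rng.normal(size=(10, 256))   # 10 gallery images
ranking = rank_gallery(query, gallery)
print(ranking[0])                      # gallery indices ranked for the first query
```

Metrics such as mAP and CMC are then computed from these per-query rankings against the ground-truth identity labels.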
Related papers
- Following the Clues: Experiments on Person Re-ID using Cross-Modal Intelligence [0.0]
CRID is a cross-modal framework combining Large Vision-Language Models, Graph Attention Networks, and representation learning.
Our approach focuses on identifying and leveraging interpretable features, enabling the detection of semantically meaningful PII beyond low-level appearance cues.
Our experiments show improved performance in practical cross-dataset Re-ID scenarios.
arXiv Detail & Related papers (2025-07-02T09:10:33Z)
- Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present a novel paradigm Diffusion-ReID to efficiently augment and generate diverse images based on known identities.
Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset Diff-Person, which consists of over 777K images from 5,183 identities.
arXiv Detail & Related papers (2024-06-10T06:26:03Z)
- ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning [57.91881829308395]
Identity-preserving text-to-image generation (ID-T2I) has received significant attention due to its wide range of application scenarios like AI portrait and advertising.
We present ID-Aligner, a general feedback learning framework to enhance ID-T2I performance.
arXiv Detail & Related papers (2024-04-23T18:41:56Z)
- Learning Invariance from Generated Variance for Unsupervised Person Re-identification [15.096776375794356]
We propose to replace traditional data augmentation with a generative adversarial network (GAN).
A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features.
By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
arXiv Detail & Related papers (2023-01-02T15:40:14Z)
- Domain Adaptive Egocentric Person Re-identification [10.199631830749839]
Person re-identification (re-ID) in first-person (egocentric) vision is a fairly new and unexplored problem.
With the increase of wearable video recording devices, egocentric data becomes readily available.
There is a significant lack of large-scale, structured egocentric datasets for person re-identification.
arXiv Detail & Related papers (2021-03-08T16:19:32Z)
- Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present a large-scale unlabeled person re-identification (Re-ID) dataset, "LUPerson".
We make the first attempt at unsupervised pre-training to improve the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z)
- Attribute-aware Identity-hard Triplet Loss for Video-based Person Re-identification [51.110453988705395]
Video-based person re-identification (Re-ID) is an important computer vision task.
We introduce a new metric learning method called Attribute-aware Identity-hard Triplet Loss (AITL); a minimal sketch of the standard triplet loss it builds on appears after this list.
To achieve a complete model of video-based person Re-ID, a multi-task framework with an Attribute-driven Spatio-Temporal Attention (ASTA) mechanism is also proposed.
arXiv Detail & Related papers (2020-06-13T09:15:38Z)
- Intra-Camera Supervised Person Re-Identification [87.88852321309433]
We propose a novel person re-identification paradigm based on an idea of independent per-camera identity annotation.
This eliminates the most time-consuming and tedious inter-camera identity labelling process.
We formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method for Intra-Camera Supervised (ICS) person re-id.
arXiv Detail & Related papers (2020-02-12T15:26:33Z)
- Towards Precise Intra-camera Supervised Person Re-identification [54.86892428155225]
Intra-camera supervision (ICS) for person re-identification (Re-ID) assumes that identity labels are independently annotated within each camera view.
Lack of inter-camera labels makes the ICS Re-ID problem much more challenging than the fully supervised counterpart.
Our approach even performs comparably to state-of-the-art fully supervised methods on two of the datasets.
arXiv Detail & Related papers (2020-02-12T11:56:30Z)
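As noted in the Attribute-aware Identity-hard Triplet Loss entry above, that method is a variant of the triplet losses widely used in re-ID training. The sketch below shows only the generic batch-hard triplet loss under common assumptions (PyTorch tensors, Euclidean distances, a 0.3 margin, placeholder feature dimensions), not the AITL formulation itself.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """Generic batch-hard triplet loss: for each anchor, take the hardest
    positive (farthest same-identity sample) and the hardest negative
    (closest different-identity sample) within the mini-batch."""
    dist = torch.cdist(embeddings, embeddings, p=2)        # pairwise Euclidean distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)   # (N, N) boolean identity mask

    # Hardest positive: largest distance among same-identity pairs.
    pos_dist = dist.masked_fill(~same_id, float('-inf')).max(dim=1).values
    # Hardest negative: smallest distance among different-identity pairs.
    neg_dist = dist.masked_fill(same_id, float('inf')).min(dim=1).values

    return F.relu(pos_dist - neg_dist + margin).mean()

# Toy usage: 8 embeddings from 4 identities (2 samples each), 128-d features.
feats = torch.randn(8, 128, requires_grad=True)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = batch_hard_triplet_loss(feats, ids)
loss.backward()
print(loss.item())
```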