CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis
- URL: http://arxiv.org/abs/2502.06681v2
- Date: Mon, 08 Sep 2025 15:01:10 GMT
- Title: CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis
- Authors: Bessie Dominguez-Dager, Felix Escalona, Francisco Gomez-Donoso, Miguel Cazorla
- Abstract summary: We present CHIRLA, Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis. The dataset includes 22 individuals, more than five hours of video, and about 1M bounding boxes with identity annotations. We also define benchmark protocols for person tracking and Re-ID, covering diverse and challenging scenarios.
- Score: 1.6914110481876652
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Person re-identification (Re-ID) is a key challenge in computer vision, requiring the matching of individuals across cameras, locations, and time. While most research focuses on short-term scenarios with minimal appearance changes, real-world applications demand robust systems that handle long-term variations caused by clothing and physical changes. We present CHIRLA, Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis, a novel dataset designed for video-based long-term person Re-ID. CHIRLA was recorded over seven months in four connected indoor environments using seven strategically placed cameras, capturing realistic movements with substantial clothing and appearance variability. The dataset includes 22 individuals, more than five hours of video, and about 1M bounding boxes with identity annotations obtained through semi-automatic labeling. We also define benchmark protocols for person tracking and Re-ID, covering diverse and challenging scenarios such as occlusion, reappearance, and multi-camera conditions. By introducing this comprehensive benchmark, we aim to facilitate the development and evaluation of Re-ID algorithms that can reliably perform in challenging, long-term real-world scenarios. The benchmark code is publicly available at: https://github.com/bdager/CHIRLA.
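For readers new to Re-ID evaluation, the sketch below shows how benchmarks of this kind are commonly scored with Rank-1 accuracy and mean average precision (mAP). It is a generic illustration, not the official CHIRLA benchmark code; the query/gallery layout, function names, and metric choices are assumptions.

```python
import numpy as np

def evaluate_reid(query_feats, query_ids, gallery_feats, gallery_ids):
    """Generic Rank-1 / mAP scoring for a Re-ID retrieval split.

    query_feats, gallery_feats: L2-normalized NumPy arrays of shape (N, D) / (M, D).
    query_ids, gallery_ids: integer identity labels as NumPy arrays.
    Illustrative sketch only, not the official CHIRLA protocol.
    """
    sims = query_feats @ gallery_feats.T          # cosine similarity (features are normalized)
    rank1_hits, average_precisions = [], []
    for i in range(len(query_ids)):
        order = np.argsort(-sims[i])              # gallery indices sorted by similarity
        matches = gallery_ids[order] == query_ids[i]
        if not matches.any():
            continue                              # identity missing from the gallery
        rank1_hits.append(float(matches[0]))      # Rank-1: top match has the right identity
        cum_hits = np.cumsum(matches)
        precision_at_k = cum_hits / (np.arange(len(matches)) + 1)
        average_precisions.append((precision_at_k * matches).sum() / matches.sum())
    return float(np.mean(rank1_hits)), float(np.mean(average_precisions))
```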
Related papers
- Towards Anytime Retrieval: A Benchmark for Anytime Person Re-Identification [85.78039373517021]
Anytime Person Re-identification (AT-ReID) aims to achieve effective retrieval in multiple scenarios based on variations in time.
We collect the first large-scale dataset, AT-USTC, which contains 403k images of individuals wearing multiple clothes.
We propose a unified model named Uni-AT, which comprises a multi-scenario ReID framework for scenario-specific feature learning.
arXiv Detail & Related papers (2025-09-20T11:20:22Z) - CFReID: Continual Few-shot Person Re-Identification [130.5656289348812]
Lifelong ReID has been proposed to learn and accumulate knowledge across multiple domains incrementally.
LReID models need to be trained on large-scale labeled data for each unseen domain, which are typically inaccessible due to privacy and cost concerns.
We propose Continual Few-shot ReID, which requires models to be incrementally trained using few-shot data and tested on all seen domains.
arXiv Detail & Related papers (2025-03-24T09:17:05Z) - Multi-modal Multi-platform Person Re-Identification: Benchmark and Method [58.59888754340054]
MP-ReID is a novel dataset designed specifically for multi-modality and multi-platform ReID.
This benchmark compiles data from 1,930 identities across diverse modalities, including RGB, infrared, and thermal imaging.
We introduce Uni-Prompt ReID, a framework with specifically designed prompts, tailored for cross-modality and cross-platform scenarios.
arXiv Detail & Related papers (2025-03-21T12:27:49Z) - Unconstrained Body Recognition at Altitude and Range: Comparing Four Approaches [0.0]
We focus on learning persistent body shape characteristics that remain stable over time.
We introduce a body identification model based on a Vision Transformer (ViT) and on a Swin-ViT model.
All models are trained on a large and diverse dataset of over 1.9 million images of approximately 5k identities across 9 databases.
arXiv Detail & Related papers (2025-02-10T23:49:06Z) - Towards Global Localization using Multi-Modal Object-Instance Re-Identification [23.764646800085977]
We propose a novel re-identification transformer architecture that integrates multimodal RGB and depth information.
We demonstrate improvements in ReID across scenes that are cluttered or have varying illumination conditions.
We also develop a ReID-based localization framework that enables accurate camera localization and pose identification across different viewpoints.
arXiv Detail & Related papers (2024-09-18T14:15:10Z) - Keypoint Promptable Re-Identification [76.31113049256375]
Occluded Person Re-Identification (ReID) is a metric learning task that involves matching occluded individuals based on their appearance.
We introduce Keypoint Promptable ReID (KPR), a novel formulation of the ReID problem that explicitly complements the input bounding box with a set of semantic keypoints.
We release custom keypoint labels for four popular ReID benchmarks. Experiments on person retrieval, as well as on pose tracking, demonstrate that our method systematically surpasses previous state-of-the-art approaches (a generic sketch of keypoint-prompt packaging appears after this list).
arXiv Detail & Related papers (2024-07-25T15:20:58Z) - ENTIRe-ID: An Extensive and Diverse Dataset for Person Re-Identification [0.46040036610482665]
The ENTIRe-ID dataset comprises over 4.45 million images from 37 different cameras in varied environments.
This dataset is uniquely designed to tackle the challenges of domain variability and model generalization.
This design ensures a realistic and robust training platform for ReID models.
arXiv Detail & Related papers (2024-05-30T20:26:47Z) - ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving [64.90148669690228]
ConsistentID is an innovative method crafted for diverse identity-preserving portrait generation under fine-grained multimodal facial prompts.
We present a fine-grained portrait dataset, FGID, with over 500,000 facial images, offering greater diversity and comprehensiveness than existing public facial datasets.
arXiv Detail & Related papers (2024-04-25T17:23:43Z) - An Open-World, Diverse, Cross-Spatial-Temporal Benchmark for Dynamic Wild Person Re-Identification [58.5877965612088]
Person re-identification (ReID) has made great strides thanks to data-driven deep learning techniques.
The existing benchmark datasets lack diversity, and models trained on these data cannot generalize well to dynamic wild scenarios.
We develop a new Open-World, Diverse, Cross-Spatial-Temporal dataset named OWD with several distinct features.
arXiv Detail & Related papers (2024-03-22T11:21:51Z) - Transformer for Object Re-Identification: A Survey [69.61542572894263]
Vision Transformers have spurred a growing number of studies delving deeper into Transformer-based Re-ID.
This paper provides a comprehensive review and in-depth analysis of Transformer-based Re-ID.
Considering the trending unsupervised Re-ID, we propose a new Transformer baseline, UntransReID, achieving state-of-the-art performance.
arXiv Detail & Related papers (2024-01-13T03:17:57Z) - Benchmarking person re-identification datasets and approaches for practical real-world implementations [1.0079626733116613]
Person Re-Identification (Re-ID) has received considerable attention.
However, when such Re-ID models are deployed in new cities or environments, the task of searching for people within a network of security cameras is likely to face a significant domain shift.
This paper introduces a complete methodology to evaluate Re-ID approaches and training datasets with respect to their suitability for unsupervised deployment for live operations.
arXiv Detail & Related papers (2022-12-20T03:45:38Z) - Comparison of Data Representations and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences [8.967985264567217]
This paper compares different machine learning approaches to identify users based on arbitrary sequences of head and hand movements.
We publish all our code to provide baselines for future work.
The model correctly identifies any of the 34 subjects with an accuracy of 100% within 150 seconds.
arXiv Detail & Related papers (2022-10-02T14:12:10Z) - Semantic Consistency and Identity Mapping Multi-Component Generative Adversarial Network for Person Re-Identification [39.605062525247135]
We propose a semantic consistency and identity mapping multi-component generative adversarial network (SC-IMGAN) which provides style adaptation from one to many domains.
Our proposed method outperforms state-of-the-art techniques on six challenging person Re-ID datasets.
arXiv Detail & Related papers (2021-04-28T14:12:29Z) - Person Re-identification based on Robust Features in Open-world [0.0]
We propose a low-cost and high-efficiency method to address shortcomings of existing re-ID research.
Our approach is based on a pose estimation model improved with group convolution to obtain continuous keypoints of pedestrians.
Our method achieves Rank-1: 60.9%, Rank-5: 78.1%, and mAP: 49.2% on this dataset, which exceeds most existing state-of-the-art re-ID models.
arXiv Detail & Related papers (2021-02-22T06:49:28Z) - Unsupervised Domain Adaptation in Person re-ID via k-Reciprocal Clustering and Large-Scale Heterogeneous Environment Synthesis [76.46004354572956]
We introduce an unsupervised domain adaptation approach for person re-identification.
Experimental results show that the proposed ktCUDA and SHRED approach achieves an average improvement of +5.7 mAP in re-identification performance (a minimal sketch of k-reciprocal neighbor computation appears after this list).
arXiv Detail & Related papers (2020-01-14T17:43:52Z) - Deep Learning for Person Re-identification: A Survey and Outlook [233.36948173686602]
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras.
By dissecting the involved components in developing a person Re-ID system, we categorize it into the closed-world and open-world settings.
arXiv Detail & Related papers (2020-01-13T12:49:22Z)
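Illustrative sketch for the Keypoint Promptable ReID entry above: the abstract states that the input bounding box is complemented with semantic keypoints but does not describe the encoding. The snippet below shows one common way such a prompt could be packaged, stacking the person crop with per-keypoint Gaussian heatmaps; the tensor names, shapes, and heatmap encoding are assumptions for illustration, not the released KPR format.

```python
import torch

def build_keypoint_prompt(crop, keypoints, sigma=4.0):
    """Stack a person crop with Gaussian heatmaps for prompted keypoints.

    crop:      (3, H, W) image tensor of the detected bounding box.
    keypoints: (K, 2) pixel coordinates (x, y) inside the crop; negative
               coordinates mark keypoints that are not prompted.
    Returns a (3 + K, H, W) tensor whose heatmap channels act as the prompt.
    This packaging is an illustrative assumption, not the KPR paper's exact format.
    """
    _, height, width = crop.shape
    ys = torch.arange(height).view(-1, 1).float()
    xs = torch.arange(width).view(1, -1).float()
    heatmaps = []
    for x, y in keypoints:
        if x < 0 or y < 0:                       # keypoint not provided
            heatmaps.append(torch.zeros(height, width))
            continue
        dist2 = (xs - float(x)) ** 2 + (ys - float(y)) ** 2
        heatmaps.append(torch.exp(-dist2 / (2 * sigma ** 2)))
    return torch.cat([crop, torch.stack(heatmaps)], dim=0)
```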
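Illustrative sketch for the k-reciprocal clustering entry above: a minimal computation of k-reciprocal nearest neighbors from a pairwise distance matrix. This is a generic rendering of the k-reciprocal idea, not the ktCUDA/SHRED implementation; the parameter k and the data layout are assumptions.

```python
import numpy as np

def k_reciprocal_neighbors(dist, k=20):
    """Return, for each sample, its set of k-reciprocal nearest neighbors.

    dist: (N, N) pairwise distance matrix.
    j is a k-reciprocal neighbor of i if i and j each appear in the
    other's k nearest neighbors. Generic sketch, not the ktCUDA code.
    """
    n = dist.shape[0]
    knn = np.argsort(dist, axis=1)[:, :k + 1]    # includes the sample itself
    neighbor_sets = [set(row) for row in knn]
    reciprocal = []
    for i in range(n):
        reciprocal.append({j for j in neighbor_sets[i] if i in neighbor_sets[j]})
    return reciprocal
```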
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.