Towards Anytime Retrieval: A Benchmark for Anytime Person Re-Identification
- URL: http://arxiv.org/abs/2509.16635v1
- Date: Sat, 20 Sep 2025 11:20:22 GMT
- Title: Towards Anytime Retrieval: A Benchmark for Anytime Person Re-Identification
- Authors: Xulin Li, Yan Lu, Bin Liu, Jiaze Li, Qinhong Yang, Tao Gong, Qi Chu, Mang Ye, Nenghai Yu
- Abstract summary: Anytime Person Re-identification (AT-ReID) aims to achieve effective retrieval in multiple scenarios based on variations in time. We collect the first large-scale dataset, AT-USTC, which contains 403k images of individuals wearing multiple outfits. We propose a unified model named Uni-AT, which comprises a multi-scenario ReID framework for scenario-specific feature learning.
- Score: 85.78039373517021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real applications, person re-identification (ReID) is expected to retrieve the target person at any time, including both daytime and nighttime, ranging from short-term to long-term. However, existing ReID tasks and datasets cannot meet this requirement, as they are constrained by available time and only provide training and evaluation for specific scenarios. Therefore, we investigate a new task called Anytime Person Re-identification (AT-ReID), which aims to achieve effective retrieval in multiple scenarios based on variations in time. To address the AT-ReID problem, we collect the first large-scale dataset, AT-USTC, which contains 403k images of individuals wearing multiple outfits captured by RGB and IR cameras. Our data collection spans 21 months, and 270 volunteers were photographed on average 29.1 times across different dates or scenes, 4-15 times more than in current datasets, providing a foundation for follow-up investigations in AT-ReID. Further, to tackle the new challenge of multi-scenario retrieval, we propose a unified model named Uni-AT, which comprises a multi-scenario ReID (MS-ReID) framework for scenario-specific feature learning, a Mixture-of-Attribute-Experts (MoAE) module to alleviate inter-scenario interference, and a Hierarchical Dynamic Weighting (HDW) strategy to ensure balanced training across all scenarios. Extensive experiments show that our model achieves strong results and exhibits excellent generalization across all scenarios.
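The abstract names a Mixture-of-Attribute-Experts (MoAE) module but does not spell out its mechanics. As a rough illustration of the underlying idea, a generic soft mixture-of-experts layer routes each input through several expert transforms and combines them with learned gate weights; the sketch below is a minimal NumPy version under that assumption, and all names, dimensions, and initializations are illustrative, not the paper's actual MoAE.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MixtureOfExperts:
    """Generic soft mixture-of-experts layer (illustrative sketch,
    not the paper's MoAE module)."""

    def __init__(self, dim, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        # one linear transform per expert: (n_experts, dim, dim)
        self.W_experts = rng.standard_normal((n_experts, dim, dim)) * 0.02
        # gating network: maps input features to per-expert scores
        self.W_gate = rng.standard_normal((dim, n_experts)) * 0.02

    def forward(self, x):
        # x: (batch, dim) -> gate weights: (batch, n_experts), rows sum to 1
        gate = softmax(x @ self.W_gate)
        # expert outputs: (n_experts, batch, dim)
        outs = np.einsum('bd,edk->ebk', x, self.W_experts)
        # gate-weighted combination of expert outputs: (batch, dim)
        return np.einsum('be,ebk->bk', gate, outs)

moe = MixtureOfExperts(dim=8, n_experts=4)
x = np.ones((2, 8))
y = moe.forward(x)
print(y.shape)  # (2, 8)
```

In the paper's setting, one could imagine each expert specializing in an attribute relevant to a retrieval scenario (e.g. clothing vs. modality cues), with the gate suppressing experts that would interfere across scenarios; how Uni-AT actually realizes this is detailed in the paper itself.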
Related papers
- Towards Multimodal Lifelong Understanding: A Dataset and Agentic Baseline [58.585692088008905]
MM-Lifelong is a dataset designed for Multimodal Lifelong Understanding. Comprising 181.1 hours of footage, it is structured across Day, Week, and Month scales to capture varying temporal densities.
arXiv Detail & Related papers (2026-03-05T18:52:12Z) - PS-ReID: Advancing Person Re-Identification and Precise Segmentation with Multimodal Retrieval [38.530536338075684]
Person re-identification (ReID) plays a critical role in applications such as security surveillance and criminal investigations. We propose PS-ReID, a multimodal model that combines image and text inputs to enhance ReID performance. Experimental results demonstrate that PS-ReID significantly outperforms unimodal query-based models in both ReID and segmentation tasks.
arXiv Detail & Related papers (2025-03-27T15:14:03Z) - CFReID: Continual Few-shot Person Re-Identification [130.5656289348812]
Lifelong ReID has been proposed to learn and accumulate knowledge across multiple domains incrementally. LReID models need to be trained on large-scale labeled data for each unseen domain, which is typically inaccessible due to privacy and cost concerns. We propose Continual Few-shot ReID (CFReID), which requires models to be incrementally trained using few-shot data and tested on all seen domains.
arXiv Detail & Related papers (2025-03-24T09:17:05Z) - CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis [1.6914110481876652]
We present CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis. The dataset includes 22 individuals, more than five hours of video, and about 1M bounding boxes with identity annotations. We also define benchmark protocols for person tracking and Re-ID, covering diverse and challenging scenarios.
arXiv Detail & Related papers (2025-02-10T17:07:43Z) - Instruct-ReID++: Towards Universal Purpose Instruction-Guided Person Re-identification [62.894790379098005]
We propose a novel instruct-ReID task that requires the model to retrieve images according to given image or language instructions. Instruct-ReID is the first exploration of a general ReID setting, where 6 existing ReID tasks can be viewed as special cases by assigning different instructions. We propose a novel baseline model, IRM, with an adaptive triplet loss to handle various retrieval tasks within a unified framework.
arXiv Detail & Related papers (2024-05-28T03:35:46Z) - Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z) - Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present "LUPerson", a large-scale unlabeled person re-identification (Re-ID) dataset.
We make the first attempt of performing unsupervised pre-training for improving the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.