Clothes-Changing Person Re-identification Based On Skeleton Dynamics
- URL: http://arxiv.org/abs/2503.10759v1
- Date: Thu, 13 Mar 2025 18:00:02 GMT
- Title: Clothes-Changing Person Re-identification Based On Skeleton Dynamics
- Authors: Asaf Joseph, Shmuel Peleg
- Abstract summary: Clothes-Changing ReID aims to recognize the same individual across different videos captured at various times and locations. Traditional ReID methods often depend on appearance features, leading to decreased accuracy when clothing changes. We propose a Clothes-Changing ReID method that uses only skeleton data and does not use appearance features.
- Score: 3.79830302036482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Clothes-Changing Person Re-Identification (ReID) aims to recognize the same individual across different videos captured at various times and locations. This task is particularly challenging due to changes in appearance, such as clothing, hairstyle, and accessories. We propose a Clothes-Changing ReID method that uses only skeleton data and does not use appearance features. Traditional ReID methods often depend on appearance features, leading to decreased accuracy when clothing changes. Our approach utilizes a spatio-temporal Graph Convolution Network (GCN) encoder to generate a skeleton-based descriptor for each individual. During testing, we improve accuracy by aggregating predictions from multiple segments of a video clip. Evaluated on the CCVID dataset with several different pose estimation models, our method achieves state-of-the-art performance, offering a robust and efficient solution for Clothes-Changing ReID.
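The abstract describes two concrete steps: a spatio-temporal GCN encoder that maps a skeleton sequence to a per-person descriptor, and test-time aggregation of predictions from multiple segments of a video clip. The sketch below illustrates only the segment-and-aggregate step around a hypothetical `encoder`; the segment length, overlap, mean-pooling of normalized descriptors, and cosine matching are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def split_into_segments(keypoints: torch.Tensor, seg_len: int = 64, stride: int = 32):
    """Split a (T, J, C) keypoint sequence into overlapping fixed-length segments."""
    T = keypoints.shape[0]
    if T <= seg_len:
        return [keypoints]
    return [keypoints[s:s + seg_len] for s in range(0, T - seg_len + 1, stride)]

@torch.no_grad()
def clip_descriptor(encoder, keypoints: torch.Tensor) -> torch.Tensor:
    """Encode each segment with a (hypothetical) spatio-temporal GCN encoder and
    mean-pool the L2-normalized segment descriptors into one clip descriptor."""
    descs = [F.normalize(encoder(seg.unsqueeze(0)), dim=-1)  # (1, D) per segment
             for seg in split_into_segments(keypoints)]
    return F.normalize(torch.cat(descs).mean(dim=0), dim=-1)

def rank_gallery(query: torch.Tensor, gallery: torch.Tensor) -> torch.Tensor:
    """Return gallery indices sorted by cosine similarity to the query descriptor."""
    sims = gallery @ query            # descriptors are already L2-normalized
    return sims.argsort(descending=True)
```

Mean-pooling normalized segment descriptors is only one plausible aggregation; the abstract states that predictions from multiple segments are aggregated, which could equally be done by voting over per-segment rankings.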
Related papers
- Synergy and Diversity in CLIP: Enhancing Performance Through Adaptive Backbone Ensembling [58.50618448027103]
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning. This paper explores the differences across various CLIP-trained vision backbones. The proposed method achieves a remarkable increase in accuracy of up to 39.1% over the best single backbone.
arXiv Detail & Related papers (2024-05-27T12:59:35Z)
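The CLIP entry above reports large gains from adaptively combining several CLIP-trained vision backbones. Below is a minimal sketch of one such combination, a softmax-weighted sum of projected backbone features; the module names, shared projection, and learned scalar weights are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class AdaptiveBackboneEnsemble(nn.Module):
    """Fuse image features from several frozen CLIP-style vision backbones
    with learned softmax mixing weights (illustrative sketch only)."""

    def __init__(self, backbones, feat_dim: int = 512):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)                    # frozen image encoders
        self.proj = nn.ModuleList(nn.LazyLinear(feat_dim) for _ in backbones)
        self.mix_logits = nn.Parameter(torch.zeros(len(backbones)))  # learned mixing weights

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = [p(b(images)) for b, p in zip(self.backbones, self.proj)]
        weights = torch.softmax(self.mix_logits, dim=0)
        fused = sum(w * f for w, f in zip(weights, feats))           # weighted sum in shared space
        return nn.functional.normalize(fused, dim=-1)
```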
- CCPA: Long-term Person Re-Identification via Contrastive Clothing and Pose Augmentation [2.1756081703276]
Long-term Person Re-Identification aims at matching an individual across cameras after a long period of time.
We propose CCPA, a Contrastive Clothing and Pose Augmentation framework for LRe-ID.
arXiv Detail & Related papers (2024-02-22T11:16:34Z)
- Attention-based Shape and Gait Representations Learning for Video-based Cloth-Changing Person Re-Identification [1.6385815610837167]
We deal with the practical problem of Video-based Cloth-Changing Person Re-ID (VCCRe-ID) by proposing "Attention-based Shape and Gait Representations Learning" (ASGL).
Our ASGL framework improves Re-ID performance under clothing variations by learning clothing-invariant gait cues.
Our proposed ST-GAT comprises multi-head attention modules, which are able to enhance the robustness of gait embeddings.
arXiv Detail & Related papers (2024-02-06T05:11:46Z)
- PGDS: Pose-Guidance Deep Supervision for Mitigating Clothes-Changing in Person Re-Identification [10.140070649542949]
The Person Re-Identification (Re-ID) task seeks to enhance the tracking of multiple individuals by surveillance cameras.
One of the most significant challenges faced in Re-ID is clothes-changing, where the same person may appear in different outfits.
We propose the Pose-Guidance Deep Supervision (PGDS), an effective framework for learning pose guidance within the Re-ID task.
arXiv Detail & Related papers (2023-12-09T18:43:05Z)
- SkeleTR: Towards Skeleton-based Action Recognition in the Wild [86.03082891242698]
SkeleTR is a new framework for skeleton-based action recognition.
It first models the intra-person skeleton dynamics for each skeleton sequence with graph convolutions.
It then uses stacked Transformer encoders to capture person interactions that are important for action recognition in general scenarios.
arXiv Detail & Related papers (2023-09-20T16:22:33Z)
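The SkeleTR summary above describes a two-stage design: graph convolutions model the intra-person skeleton dynamics, and stacked Transformer encoders then attend across people. A minimal sketch of that token flow is below; the per-person encoder is a stand-in placeholder and the layer sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SkeletonSceneEncoder(nn.Module):
    """Per-person encoding followed by a Transformer over person tokens
    (illustrative sketch of the GCN-then-Transformer pattern)."""

    def __init__(self, token_dim: int = 256, num_layers: int = 4):
        super().__init__()
        # Stand-in for an intra-person spatio-temporal GCN: anything mapping a
        # (T, J, C) skeleton sequence to a single token vector would fit here.
        self.person_encoder = nn.Sequential(
            nn.Flatten(start_dim=1), nn.LazyLinear(token_dim), nn.ReLU()
        )
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=8, batch_first=True)
        self.interaction = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, skeletons: torch.Tensor) -> torch.Tensor:
        # skeletons: (P, T, J, C) = persons, frames, joints, channels
        tokens = self.person_encoder(skeletons)           # (P, token_dim)
        tokens = self.interaction(tokens.unsqueeze(0))    # attention across persons
        return tokens.squeeze(0)                          # refined (P, token_dim) tokens
```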
- GEFF: Improving Any Clothes-Changing Person ReID Model using Gallery Enrichment with Face Features [11.189236254478057]
In the Clothes-Changing Re-Identification (CC-ReID) problem, given a query sample of a person, the goal is to determine the correct identity based on a labeled gallery in which the person appears in different clothes.
Several models tackle this challenge by extracting clothes-independent features.
As clothing-related features are often dominant features in the data, we propose a new process we call Gallery Enrichment.
arXiv Detail & Related papers (2022-11-24T21:41:52Z)
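The Gallery Enrichment process above leans on face features, which are largely clothing-independent. One minimal way such enrichment could work is sketched below: query samples whose face embedding matches a gallery identity above a similarity threshold are added to the gallery before ReID matching. The threshold, embedding interface, and matching rule are assumptions for illustration, not GEFF's exact procedure.

```python
import numpy as np

def enrich_gallery(gallery_faces: np.ndarray, gallery_ids: np.ndarray,
                   query_faces: np.ndarray, threshold: float = 0.7):
    """Return (extra_faces, extra_ids): query samples whose best face match in the
    gallery exceeds `threshold` cosine similarity, labeled with that identity."""
    g = gallery_faces / np.linalg.norm(gallery_faces, axis=1, keepdims=True)
    q = query_faces / np.linalg.norm(query_faces, axis=1, keepdims=True)
    sims = q @ g.T                                  # (num_query, num_gallery) cosine similarities
    best = sims.argmax(axis=1)                      # best gallery match per query
    confident = sims[np.arange(len(q)), best] >= threshold
    return query_faces[confident], gallery_ids[best[confident]]
```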
- A Benchmark of Video-Based Clothes-Changing Person Re-Identification [20.010401795892125]
We study the relatively new yet practical problem of clothes-changing video-based person re-identification (CCVReID).
We develop a two-branch confidence-aware re-ranking framework for handling the CCVReID problem.
We build two new benchmark datasets for CCVReID problem.
arXiv Detail & Related papers (2022-11-21T03:38:18Z)
- Pose-Aided Video-based Person Re-Identification via Recurrent Graph Convolutional Network [41.861537712563816]
We propose to learn the discriminative pose feature beyond the appearance feature for video retrieval.
To learn the pose feature, we first detect the pedestrian pose in each frame through an off-the-shelf pose detector.
We then exploit a recurrent graph convolutional network (RGCN) to learn the node embeddings of the temporal pose graph.
arXiv Detail & Related papers (2022-09-23T13:20:33Z)
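The entry above builds a temporal pose graph from per-frame detections and learns node embeddings with a recurrent GCN. The sketch below covers only the graph-construction step, connecting joints by skeleton edges within a frame and linking the same joint across consecutive frames; the edge list and node ordering are assumptions for illustration, not the paper's graph definition.

```python
import numpy as np

# Hypothetical subset of limb edges over 17 COCO-style joints (kept short for brevity).
SKELETON_EDGES = [(5, 7), (7, 9), (6, 8), (8, 10), (11, 13), (13, 15), (12, 14), (14, 16)]

def temporal_pose_adjacency(num_frames: int, num_joints: int = 17) -> np.ndarray:
    """Build a binary adjacency over T*J nodes: skeleton edges inside each frame
    plus edges linking the same joint in consecutive frames."""
    n = num_frames * num_joints
    adj = np.zeros((n, n), dtype=np.float32)
    for t in range(num_frames):
        base = t * num_joints
        for i, j in SKELETON_EDGES:                 # spatial edges within frame t
            adj[base + i, base + j] = adj[base + j, base + i] = 1.0
        if t + 1 < num_frames:                      # temporal edges to frame t+1
            nxt = (t + 1) * num_joints
            for j in range(num_joints):
                adj[base + j, nxt + j] = adj[nxt + j, base + j] = 1.0
    return adj
```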
- Towards a Deeper Understanding of Skeleton-based Gait Recognition [4.812321790984493]
In recent years, most gait recognition methods have used the person's silhouette to extract gait features.
Model-based methods do not suffer from these problems and are able to represent the temporal motion of body joints.
In this work, we propose an approach based on Graph Convolutional Networks (GCNs) that combines higher-order inputs and residual networks.
arXiv Detail & Related papers (2022-04-16T18:23:37Z)
- Clothes-Changing Person Re-identification with RGB Modality Only [102.44387094119165]
We propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images.
Videos contain richer appearance and additional temporal information, which can be used to model proper spatiotemporal patterns.
arXiv Detail & Related papers (2022-04-14T11:38:28Z)
- Cloth-Changing Person Re-identification from A Single Image with Gait Prediction and Regularization [65.50321170655225]
We introduce Gait recognition as an auxiliary task to drive the Image ReID model to learn cloth-agnostic representations.
Experiments on image-based Cloth-Changing ReID benchmarks, e.g., LTCC, PRCC, Real28, and VC-Clothes, demonstrate that GI-ReID performs favorably against the state of the art.
arXiv Detail & Related papers (2021-03-29T12:10:50Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications, such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation under cases like cloth changing or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.