Clothes-Changing Person Re-identification with RGB Modality Only
- URL: http://arxiv.org/abs/2204.06890v1
- Date: Thu, 14 Apr 2022 11:38:28 GMT
- Title: Clothes-Changing Person Re-identification with RGB Modality Only
- Authors: Xinqian Gu, Hong Chang, Bingpeng Ma, Shutao Bai, Shiguang Shan, Xilin Chen
- Abstract summary: We propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images.
Videos contain richer appearance and additional temporal information, which can be used to model proper spatiotemporal patterns.
- Score: 102.44387094119165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The key to addressing clothes-changing person re-identification (re-id) is to
extract clothes-irrelevant features, e.g., face, hairstyle, body shape, and
gait. Most current works mainly focus on modeling body shape from
multi-modality information (e.g., silhouettes and sketches), but do not make
full use of the clothes-irrelevant information in the original RGB images. In
this paper, we propose a Clothes-based Adversarial Loss (CAL) to mine
clothes-irrelevant features from the original RGB images by penalizing the
predictive power of the re-id model w.r.t. clothes. Extensive experiments
demonstrate that using RGB images only, CAL outperforms all state-of-the-art
methods on widely-used clothes-changing person re-id benchmarks. Besides,
compared with images, videos contain richer appearance and additional temporal
information, which can be used to model proper spatiotemporal patterns to
assist clothes-changing re-id. Since there is no publicly available
clothes-changing video re-id dataset, we contribute a new dataset named CCVID
and show that there exists much room for improvement in modeling spatiotemporal
information. The code and new dataset are available at:
https://github.com/guxinqian/Simple-CCReID.
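The authors' actual implementation lives in the repository above; the abstract does not spell out CAL's exact form. Purely as a hypothetical illustration of the general idea of penalizing a model's predictive power w.r.t. clothes, the PyTorch sketch below pairs a clothes classifier with a gradient reversal layer, a generic adversarial baseline rather than the paper's exact loss. All class names, dimensions, and the lambda weight are made up.
```python
# Generic clothes-adversarial sketch (NOT the paper's exact CAL formulation).
# A clothes classifier learns to predict clothes IDs, while gradient reversal
# pushes the backbone features to become uninformative about clothes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ClothesAdversarialHead(nn.Module):
    """Clothes-ID classifier fed through gradient reversal (hypothetical)."""

    def __init__(self, feat_dim, num_clothes, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(feat_dim, num_clothes)

    def forward(self, features, clothes_labels):
        reversed_feat = GradientReversal.apply(features, self.lambd)
        logits = self.classifier(reversed_feat)
        return F.cross_entropy(logits, clothes_labels)


# Toy usage: total loss = identity loss + clothes-adversarial penalty.
feat_dim, num_ids, num_clothes = 2048, 150, 400
features = torch.randn(32, feat_dim, requires_grad=True)  # backbone output
id_head = nn.Linear(feat_dim, num_ids)
adv_head = ClothesAdversarialHead(feat_dim, num_clothes)

id_labels = torch.randint(0, num_ids, (32,))
clothes_labels = torch.randint(0, num_clothes, (32,))
loss = F.cross_entropy(id_head(features), id_labels) \
       + adv_head(features, clothes_labels)
loss.backward()
```
With identity transforms of this kind, the backbone receives a gradient that rewards identity discrimination while punishing clothes predictability, which matches the spirit, though not necessarily the mechanics, of the loss described above.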
Related papers
- DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID [69.70281727931048]
We propose a novel data expansion framework to generate diverse images of individuals in varied attire.
We generate additional data for five benchmark CC-ReID datasets.
We obtain a large top-1 accuracy improvement of 11.3% by training CAL, a previous state-of-the-art (SOTA) method, with DLCR-generated data.
arXiv Detail & Related papers (2024-11-11T18:28:33Z)
- Masked Attribute Description Embedding for Cloth-Changing Person Re-identification [66.53045140286987]
Cloth-changing person re-identification (CC-ReID) aims to match persons who change clothes over long periods.
The key challenge in CC-ReID is to extract clothing-independent features, such as face, hairstyle, body shape, and gait.
We propose a Masked Attribute Description Embedding (MADE) method that unifies personal visual appearance and attribute description for CC-ReID.
arXiv Detail & Related papers (2024-01-11T03:47:13Z)
- GEFF: Improving Any Clothes-Changing Person ReID Model using Gallery Enrichment with Face Features [11.189236254478057]
In the Clothes-Changing Re-Identification (CC-ReID) problem, given a query sample of a person, the goal is to determine the correct identity based on a labeled gallery in which the person appears in different clothes.
Several models tackle this challenge by extracting clothes-independent features.
As clothing-related features often dominate the data, we propose a new process called Gallery Enrichment; a rough sketch follows this entry.
arXiv Detail & Related papers (2022-11-24T21:41:52Z)
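GEFF's exact procedure is not given in this summary; the sketch below is one hypothetical reading of the idea: queries whose face features confidently match a gallery identity are appended to the gallery, so later clothes-independent matching sees more samples per identity. The threshold, tensor shapes, and function name are illustrative, not GEFF's API.
```python
# Hypothetical gallery-enrichment step (inspired by, not identical to, GEFF).
import torch
import torch.nn.functional as F

def enrich_gallery(gallery_feats, gallery_ids, query_feats,
                   query_face, gallery_face, threshold=0.7):
    """Append queries to the gallery when their face match is confident."""
    # Cosine similarity between each query face and each gallery face.
    sim = F.normalize(query_face, dim=1) @ F.normalize(gallery_face, dim=1).T
    best_sim, best_idx = sim.max(dim=1)
    confident = best_sim > threshold
    # Enriched gallery = original gallery + confidently matched queries,
    # each labeled with the identity of its best face match.
    new_feats = torch.cat([gallery_feats, query_feats[confident]])
    new_ids = torch.cat([gallery_ids, gallery_ids[best_idx[confident]]])
    return new_feats, new_ids

# Toy usage with random embeddings.
g_feats, q_feats = torch.randn(100, 512), torch.randn(20, 512)
g_face, q_face = torch.randn(100, 128), torch.randn(20, 128)
g_ids = torch.randint(0, 30, (100,))
enriched_feats, enriched_ids = enrich_gallery(g_feats, g_ids, q_feats,
                                              q_face, g_face)
```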
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation in cases such as clothes changing or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
- BCNet: Learning Body and Cloth Shape from A Single Image [56.486796244320125]
We propose a layered garment representation on top of SMPL and, as a novel contribution, make the skinning weights of the garment independent of the body mesh.
Compared with existing methods, our method can support more garment categories and recover more accurate geometry; a generic skinning sketch follows this entry.
arXiv Detail & Related papers (2020-04-01T03:41:36Z)
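BCNet's actual representation is in the paper; as generic background only, the sketch below shows plain linear blend skinning (LBS), the mechanism SMPL-based garment models build on, with the garment mesh carrying its own skinning-weight matrix rather than copying weights from the body mesh, which is the kind of independence the summary describes. All shapes and values are illustrative.
```python
# Minimal linear blend skinning (LBS) sketch; not BCNet's actual model.
import numpy as np

num_joints, num_verts = 24, 5000

# Rest-pose garment vertices and one 4x4 rigid transform per joint
# (identity transforms = the rest pose itself).
verts = np.random.rand(num_verts, 3)
joint_transforms = np.tile(np.eye(4), (num_joints, 1, 1))

# The garment's OWN skinning weights: one row per garment vertex, rows sum
# to 1, instead of weights copied from nearby body-mesh vertices.
weights = np.random.rand(num_verts, num_joints)
weights /= weights.sum(axis=1, keepdims=True)

def skin(verts, weights, joint_transforms):
    """Pose vertices with LBS: blend per-joint transforms by vertex weights."""
    # (V, J) @ (J, 16) -> (V, 16) -> (V, 4, 4): one blended transform/vertex.
    blended = weights @ joint_transforms.reshape(len(joint_transforms), 16)
    blended = blended.reshape(-1, 4, 4)
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    posed = np.einsum('vij,vj->vi', blended, homo)
    return posed[:, :3]

posed = skin(verts, weights, joint_transforms)  # identity pose: posed == verts
```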
- Learning Shape Representations for Clothing Variations in Person Re-Identification [34.559050607889816]
Person re-identification (re-ID) aims to recognize instances of the same person contained in multiple images taken across different cameras.
We propose a novel representation learning model which is able to generate a body shape feature representation without being affected by clothing color or patterns.
Case-Net learns a representation of identity that depends only on body shape via adversarial learning and feature disentanglement; a generic disentanglement sketch follows this entry.
arXiv Detail & Related papers (2020-03-16T17:23:50Z)
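Case-Net's architecture is not detailed in this summary. Purely as an illustration of feature disentanglement, the sketch below splits an embedding into a shape half and an appearance half, supervises each with its own labels, and adds a decorrelation penalty so identity information concentrates in the shape half. The class counts, dimensions, and the 0.1 weight are made up, not Case-Net's values.
```python
# Hypothetical feature-disentanglement sketch (NOT Case-Net's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim = 512
shape_head = nn.Linear(feat_dim // 2, 150)   # identity classes (illustrative)
appear_head = nn.Linear(feat_dim // 2, 400)  # clothes classes (illustrative)

def disentangle_loss(features, id_labels, clothes_labels):
    shape_feat, appear_feat = features.chunk(2, dim=1)
    # Identity must be predictable from the shape half only ...
    id_loss = F.cross_entropy(shape_head(shape_feat), id_labels)
    # ... clothes from the appearance half only ...
    clothes_loss = F.cross_entropy(appear_head(appear_feat), clothes_labels)
    # ... and the two halves should carry decorrelated information.
    s = shape_feat - shape_feat.mean(dim=0)
    a = appear_feat - appear_feat.mean(dim=0)
    decorrelation = (s.T @ a).pow(2).mean()
    return id_loss + clothes_loss + 0.1 * decorrelation

features = torch.randn(32, feat_dim, requires_grad=True)
loss = disentangle_loss(features,
                        torch.randint(0, 150, (32,)),
                        torch.randint(0, 400, (32,)))
loss.backward()
```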
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.