Learning Shape Representations for Clothing Variations in Person
Re-Identification
- URL: http://arxiv.org/abs/2003.07340v1
- Date: Mon, 16 Mar 2020 17:23:50 GMT
- Title: Learning Shape Representations for Clothing Variations in Person
Re-Identification
- Authors: Yu-Jhe Li, Zhengyi Luo, Xinshuo Weng, Kris M. Kitani
- Abstract summary: Person re-identification (re-ID) aims to recognize instances of the same person contained in multiple images taken across different cameras.
We propose a novel representation learning model which is able to generate a body shape feature representation without being affected by clothing color or patterns.
CASE-Net learns a representation of identity that depends only on body shape via adversarial learning and feature disentanglement.
- Score: 34.559050607889816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification (re-ID) aims to recognize instances of the same
person contained in multiple images taken across different cameras. Existing
methods for re-ID tend to rely heavily on the assumption that both query and
gallery images of the same person have the same clothing. Unfortunately, this
assumption may not hold for datasets captured over long periods of time (e.g.,
weeks, months or years). To tackle the re-ID problem in the context of clothing
changes, we propose a novel representation learning model which is able to
generate a body shape feature representation without being affected by clothing
color or patterns. We call our model the Color Agnostic Shape Extraction
Network (CASE-Net). CASE-Net learns a representation of identity that depends
only on body shape via adversarial learning and feature disentanglement. Due to
the lack of large-scale re-ID datasets which contain clothing changes for the
same person, we propose two synthetic datasets for evaluation. We create a
rendered dataset SMPL-reID with different clothing patterns and a synthesized
dataset Div-Market with different clothing colors to simulate two types of
clothing changes. The quantitative and qualitative results across five datasets
(SMPL-reID, Div-Market, two benchmark re-ID datasets, and a cross-modality
re-ID dataset) confirm the robustness and superiority of our approach over
several state-of-the-art approaches.
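The abstract describes CASE-Net's mechanism only at a high level. As a rough illustration of how adversarial learning with feature disentanglement is typically set up for this kind of attribute-invariant encoding, here is a minimal PyTorch sketch: an encoder whose features must predict identity while a color discriminator, connected through a gradient-reversal layer, tries to recover clothing color. The encoder, discriminator, identity head, and the `lam` weighting are all illustrative assumptions, not CASE-Net's actual architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity map on the forward pass; scales gradients by -lam on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class ShapeEncoder(nn.Module):
    """Toy convolutional encoder mapping an image to a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class ColorDiscriminator(nn.Module):
    """Adversary that tries to predict a clothing-color label from the feature."""
    def __init__(self, feat_dim=128, n_colors=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n_colors)
        )

    def forward(self, f):
        return self.net(f)

def train_step(enc, id_head, color_disc, imgs, id_labels, color_labels, lam=1.0):
    """One hypothetical step: identity loss plus a color loss routed through
    gradient reversal, so the encoder learns to hide color information."""
    feats = enc(imgs)
    id_loss = nn.functional.cross_entropy(id_head(feats), id_labels)
    color_logits = color_disc(GradReverse.apply(feats, lam))
    adv_loss = nn.functional.cross_entropy(color_logits, color_labels)
    return id_loss + adv_loss

# Toy usage with random tensors (10 identities and 8 color classes assumed):
enc, disc = ShapeEncoder(), ColorDiscriminator()
id_head = nn.Linear(128, 10)
imgs = torch.randn(4, 3, 64, 64)
loss = train_step(enc, id_head, disc, imgs,
                  torch.randint(0, 10, (4,)), torch.randint(0, 8, (4,)))
loss.backward()
```

Under this objective the encoder is rewarded for identity-discriminative features from which clothing color cannot be recovered, which is the intuition behind a color-agnostic shape representation.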
Related papers
- DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID [69.70281727931048]
We propose a novel data expansion framework to generate diverse images of individuals in varied attire.
We generate additional data for five benchmark CC-ReID datasets.
We obtain a large top-1 accuracy improvement of 11.3% by training CAL, a previous state-of-the-art (SOTA) method, with DLCR-generated data (top-1/rank-1 accuracy is sketched after this list).
arXiv Detail & Related papers (2024-11-11T18:28:33Z)
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN).
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z)
- Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present a novel paradigm, Diffusion-ReID, to efficiently augment and generate diverse images based on known identities.
Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset Diff-Person, which consists of over 777K images from 5,183 identities.
arXiv Detail & Related papers (2024-06-10T06:26:03Z)
- CCPA: Long-term Person Re-Identification via Contrastive Clothing and Pose Augmentation [2.1756081703276]
Long-term Person Re-Identification aims at matching an individual across cameras after a long period of time.
We propose CCPA, a Contrastive Clothing and Pose Augmentation framework for long-term re-ID (LRe-ID).
arXiv Detail & Related papers (2024-02-22T11:16:34Z)
- Clothes-Changing Person Re-identification with RGB Modality Only [102.44387094119165]
We propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images.
Videos contain richer appearance and additional temporal information, which can be used to model proper spatiotemporal patterns.
arXiv Detail & Related papers (2022-04-14T11:38:28Z)
- Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present a large-scale unlabeled person re-identification (Re-ID) dataset, "LUPerson".
We make the first attempt at unsupervised pre-training to improve the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected within a short time window, in which a person's appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is therefore critical to learn an apparel-invariant person representation for cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
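Several entries above report top-1 (rank-1) accuracy, the standard re-ID metric: the fraction of queries whose single nearest gallery feature belongs to the same identity. Here is a minimal NumPy sketch of that computation, assuming cosine similarity; it omits the same-camera filtering that real benchmark protocols apply.

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Fraction of queries whose most similar gallery item (by cosine
    similarity) shares the query's identity. Simplified: real protocols
    also exclude gallery items from the same camera as the query."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    best = (q @ g.T).argmax(axis=1)   # index of the most similar gallery item
    return float((gallery_ids[best] == query_ids).mean())

# Toy usage with random features and identities:
rng = np.random.default_rng(0)
qf, gf = rng.normal(size=(5, 128)), rng.normal(size=(20, 128))
qid, gid = rng.integers(0, 4, size=5), rng.integers(0, 4, size=20)
print(rank1_accuracy(qf, qid, gf, gid))
```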