Apparel-invariant Feature Learning for Apparel-changed Person
Re-identification
- URL: http://arxiv.org/abs/2008.06181v2
- Date: Mon, 17 Aug 2020 03:14:12 GMT
- Title: Apparel-invariant Feature Learning for Apparel-changed Person
Re-identification
- Authors: Zhengxu Yu, Yilun Zhao, Bin Hong, Zhongming Jin, Jianqiang Huang, Deng
Cai, Xiaofei He, Xian-Sheng Hua
- Abstract summary: Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation in cases such as clothing changes or several persons wearing similar clothes.
- Score: 70.16040194572406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rise of deep learning methods, person Re-Identification (ReID)
performance has improved tremendously on many public datasets. However, most
public ReID datasets are collected in a short time window in which persons'
appearance rarely changes. In real-world applications such as a shopping mall,
the same person's clothing may change, and different persons may wear similar
clothes. All these cases can result in inconsistent ReID performance, revealing
a critical problem: current ReID models rely heavily on a person's apparel.
It is therefore critical to learn an apparel-invariant person representation in
cases such as clothing changes or several persons wearing similar clothes. In
this work, we tackle this problem from the viewpoint of invariant feature
representation learning. The main contributions of this work are as follows.
(1) We propose the semi-supervised Apparel-invariant Feature Learning (AIFL)
framework to learn an apparel-invariant pedestrian representation using images
of the same person wearing different clothes. (2) To obtain images of the same
person wearing different clothes, we propose an unsupervised apparel-simulation
GAN (AS-GAN) that synthesizes cloth-changing images conditioned on a target
cloth embedding. It is worth noting that the images used in ReID tasks are
cropped from real-world, low-quality CCTV videos, which makes synthesizing
cloth-changing images more challenging. We conduct extensive experiments on
several datasets, comparing against several baselines. Experimental results
demonstrate that our proposal improves the ReID performance of the baseline
models.
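The abstract describes the approach only at a high level. The following is a minimal PyTorch-style sketch of the two ideas it names: a generator conditioned on a target cloth embedding that synthesizes a cloth-changed image (the AS-GAN idea, with the adversarial/discriminator part omitted), and a feature-consistency loss that pulls together the ReID features of an image and its synthesized cloth-changed counterpart (the AIFL idea). All module names, layer sizes, and the specific loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of the two ideas in the abstract, under stated assumptions:
# (1) an apparel-simulation generator conditioned on a target cloth embedding,
# (2) an apparel-invariant consistency loss between the ReID features of the
#     original image and its synthesized cloth-changed counterpart.
# Names such as ClothConditionedGenerator and apparel_invariant_loss are
# hypothetical; the paper's AS-GAN / AIFL details are not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClothConditionedGenerator(nn.Module):
    """Toy encoder-decoder that injects a cloth embedding into the bottleneck."""

    def __init__(self, cloth_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.cloth_proj = nn.Linear(cloth_dim, 64)  # cloth code -> channel-wise bias
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, img: torch.Tensor, cloth_code: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(img)                              # B x 64 x H/4 x W/4
        bias = self.cloth_proj(cloth_code)[:, :, None, None]  # broadcast over space
        return self.decoder(feat + bias)


class ReIDBackbone(nn.Module):
    """Tiny stand-in for a ReID feature extractor."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.fc(self.conv(img).flatten(1)), dim=1)


def apparel_invariant_loss(backbone, generator, img, cloth_code):
    """Pull the features of an image and its cloth-changed version together."""
    fake = generator(img, cloth_code)        # synthesized cloth-changed image
    f_real = backbone(img)
    f_fake = backbone(fake)
    # Cosine-style consistency: same identity, different apparel.
    return (1.0 - (f_real * f_fake).sum(dim=1)).mean()


if __name__ == "__main__":
    imgs = torch.randn(4, 3, 64, 32)         # CCTV-style low-resolution crops
    cloth = torch.randn(4, 128)              # target cloth embeddings
    G, B = ClothConditionedGenerator(), ReIDBackbone()
    loss = apparel_invariant_loss(B, G, imgs, cloth)
    loss.backward()
    print(float(loss))
```

In a full pipeline this consistency term would be combined with the usual identity-classification and adversarial losses; the demo at the bottom only checks that gradients flow through both networks.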
Related papers
- CCPA: Long-term Person Re-Identification via Contrastive Clothing and
Pose Augmentation [2.1756081703276]
Long-term Person Re-Identification aims at matching an individual across cameras after a long period of time.
We propose CCPA, a Contrastive Clothing and Pose Augmentation framework for LRe-ID.
arXiv Detail & Related papers (2024-02-22T11:16:34Z)
- Clothes-Invariant Feature Learning by Causal Intervention for Clothes-Changing Person Re-identification [118.23912884472794]
Clothes-invariant feature extraction is critical to clothes-changing person re-identification (CC-ReID).
We argue that there exists a strong spurious correlation between clothes and human identity, which restricts the common likelihood-based ReID method P(Y|X) from extracting clothes-irrelevant features.
We propose a new Causal Clothes-Invariant Learning (CCIL) method to achieve clothes-invariant feature learning (an illustrative backdoor-adjustment formula is given after this list).
arXiv Detail & Related papers (2023-05-10T13:48:24Z)
- GEFF: Improving Any Clothes-Changing Person ReID Model using Gallery Enrichment with Face Features [11.189236254478057]
In the Clothes-Changing Re-Identification (CC-ReID) problem, given a query sample of a person, the goal is to determine the correct identity based on a labeled gallery in which the person appears in different clothes.
Several models tackle this challenge by extracting clothes-independent features.
As clothing-related features are often dominant features in the data, we propose a new process we call Gallery Enrichment.
arXiv Detail & Related papers (2022-11-24T21:41:52Z)
- Clothes-Changing Person Re-identification with RGB Modality Only [102.44387094119165]
We propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images.
Videos contain richer appearance and additional temporal information, which can be used to model proper spatiotemporal patterns.
arXiv Detail & Related papers (2022-04-14T11:38:28Z)
- Unsupervised clothing change adaptive person ReID [14.777001614779806]
We design a novel unsupervised model, Sync-Person-Cloud ReID, to solve the unsupervised clothing change person ReID problem.
The person sync augmentation supplies additional resources of the same person, which can be used as partially supervised inputs through a same-person feature restriction.
arXiv Detail & Related papers (2021-09-08T15:08:10Z)
- Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification [38.7806002518266]
This work proposes a novel visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID.
To fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed, and then a cloth desensitization network (CDN) is designed.
A partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
arXiv Detail & Related papers (2021-08-10T09:14:44Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
- COCAS: A Large-Scale Clothes Changing Person Dataset for Re-identification [88.79807574669294]
We construct a novel large-scale re-id benchmark named ClOthes ChAnging Person Set (COCAS).
COCAS contains 62,382 body images from 5,266 persons in total.
We introduce a new person re-id setting for the clothes-changing problem, where the query includes both a clothes template and a person image in different clothes.
arXiv Detail & Related papers (2020-05-16T03:50:08Z)
- Learning Shape Representations for Clothing Variations in Person Re-Identification [34.559050607889816]
Person re-identification (re-ID) aims to recognize instances of the same person contained in multiple images taken across different cameras.
We propose a novel representation learning model which is able to generate a body shape feature representation without being affected by clothing color or patterns.
Case-Net learns a representation of identity that depends only on body shape via adversarial learning and feature disentanglement.
arXiv Detail & Related papers (2020-03-16T17:23:50Z)
- Person Re-identification by Contour Sketch under Moderate Clothing Change [95.83034113646657]
Person re-id, the process of matching pedestrian images across different camera views, is an important task in visual surveillance.
In this work, we call person re-id under clothing change "cross-clothes person re-id".
Due to the lack of a large-scale dataset for cross-clothes person re-id, we contribute a new dataset that consists of 33,698 images from 221 identities.
arXiv Detail & Related papers (2020-02-06T15:13:55Z)
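As a point of reference for the Causal Clothes-Invariant Learning (CCIL) entry above, "causal intervention" in this setting is usually formalized by treating clothing as a confounder between the image and the identity, and replacing the observational likelihood P(Y|X) with an interventional distribution via backdoor adjustment. The formula below is a generic illustration under that assumption, not necessarily the exact objective used in the CCIL paper.

```latex
% Backdoor adjustment with clothing C treated as a (discrete) confounder
% between image X and identity Y: marginalize C out instead of conditioning on it.
P\bigl(Y \mid \mathrm{do}(X)\bigr) = \sum_{c} P\bigl(Y \mid X, C = c\bigr)\, P\bigl(C = c\bigr)
```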
This list is automatically generated from the titles and abstracts of the papers in this site.