Person Re-identification by Contour Sketch under Moderate Clothing Change
- URL: http://arxiv.org/abs/2002.02295v1
- Date: Thu, 6 Feb 2020 15:13:55 GMT
- Title: Person Re-identification by Contour Sketch under Moderate Clothing Change
- Authors: Qize Yang, Ancong Wu, Wei-Shi Zheng
- Abstract summary: Person re-id, the process of matching pedestrian images across different camera views, is an important task in visual surveillance.
In this work, we call the person re-id under clothing change the "cross-clothes person re-id".
Due to the lack of a large-scale dataset for cross-clothes person re-id, we contribute a new dataset that consists of 33,698 images from 221 identities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification (re-id), the process of matching pedestrian images
across different camera views, is an important task in visual surveillance.
Substantial development of re-id has recently been observed, and the majority
of existing models are largely dependent on color appearance and assume that
pedestrians do not change their clothes across camera views. This limitation,
however, can be an issue for re-id when tracking a person at different places
and at different times if that person (e.g., a criminal suspect) changes his/her
clothes, causing most existing methods to fail, since they rely heavily on
color appearance and are thus inclined to match a person to another person
wearing similar clothes. In this work, we call the person re-id under
clothing change the "cross-clothes person re-id". In particular, we consider
the case when a person only changes his/her clothes moderately, as a first
attempt at solving this problem based on visible-light images; that is, we
assume that a person wears clothes of similar thickness, and thus that the
shape of a person would not change significantly when the weather does not
change substantially within a short period of time. We perform cross-clothes
person re-id based on a contour sketch of the person image to take advantage
of the shape of the human body
instead of color information for extracting features that are robust to
moderate clothing change. Due to the lack of a large-scale dataset for
cross-clothes person re-id, we contribute a new dataset that consists of 33,698
images from 221 identities. Our experiments illustrate the challenges of
cross-clothes person re-id and demonstrate the effectiveness of our proposed
method.
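As a rough illustration of the contour-sketch idea only (not the authors' actual pipeline, which learns deep features from contour sketches), the Python sketch below reduces person crops to edge maps and compares them with a color-free HOG shape descriptor. All file names and parameter values are hypothetical assumptions.

```python
# Minimal, illustrative sketch: compare two person crops by body shape rather
# than color. Canny edges and HOG are stand-ins for the paper's learned
# contour-sketch features; thresholds and sizes below are arbitrary choices.
import cv2
import numpy as np

def contour_sketch(image_path: str, size=(64, 128)) -> np.ndarray:
    """Load a person crop and reduce it to a contour sketch (edge map)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)                 # (width, height)
    img = cv2.GaussianBlur(img, (5, 5), 0)      # suppress texture noise
    return cv2.Canny(img, 50, 150)              # keep body-shape edges only

def shape_descriptor(sketch: np.ndarray) -> np.ndarray:
    """Describe the edge map with HOG, a color-free shape feature."""
    hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)
    return hog.compute(sketch).ravel()

def sketch_distance(path_a: str, path_b: str) -> float:
    """Cosine distance between two contour-sketch descriptors."""
    fa = shape_descriptor(contour_sketch(path_a))
    fb = shape_descriptor(contour_sketch(path_b))
    denom = np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12
    return 1.0 - float(fa @ fb) / denom

# Usage (hypothetical file names): a lower distance suggests the two crops
# show the same identity, regardless of clothing color.
# print(sketch_distance("query_person.jpg", "gallery_person.jpg"))
```

Because the descriptor is computed on edges alone, two crops of the same person in differently colored outfits can still score close, which is the intuition the paper builds on with learned features.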
Related papers
- Unsupervised Long-Term Person Re-Identification with Clothes Change
We investigate unsupervised person re-identification (Re-ID) with clothes change.
Most existing re-id methods artificially assume that every person's clothes remain unchanged across space and time.
We introduce a novel Curriculum Person Clustering (CPC) method that can adaptively regulate the unsupervised clustering criterion.
arXiv Detail & Related papers (2022-02-07T11:55:23Z)
- Unsupervised clothing change adaptive person ReID
We design a novel unsupervised model, Sync-Person-Cloud ReID, to solve the unsupervised clothing-change person ReID problem.
The person sync augmentation supplies additional resources of the same person, which can serve as partially supervised input through a same-person feature restriction.
arXiv Detail & Related papers (2021-09-08T15:08:10Z)
- Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification
This work proposes a novel visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID.
To fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed, and then a cloth desensitization network (CDN) is designed.
A partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
arXiv Detail & Related papers (2021-08-10T09:14:44Z)
- Long-term Person Re-identification: A Benchmark
In the real world, we often dress differently across locations, times, dates, seasons, weather, and events.
This work contributes a timely, large, realistic long-term person re-identification benchmark.
It consists of 171K bounding boxes from 1.1K person identities, collected and constructed over the course of 12 months.
arXiv Detail & Related papers (2021-05-31T03:35:00Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation in cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long durations, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
- COCAS: A Large-Scale Clothes Changing Person Dataset for Re-identification
We construct a novel large-scale re-id benchmark named ClOthes ChAnging Person Set (COCAS).
COCAS contains 62,382 body images from 5,266 persons in total.
We introduce a new person re-id setting for the clothes-changing problem, where the query includes both a clothes template and a person image wearing different clothes.
arXiv Detail & Related papers (2020-05-16T03:50:08Z)