Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification
- URL: http://arxiv.org/abs/2108.04527v1
- Date: Tue, 10 Aug 2021 09:14:44 GMT
- Title: Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification
- Authors: Zan Gao, Hongwei Wei, Weili Guan, Weizhi Nie, Meng Liu, Meng Wang
- Abstract summary: This work proposes a novel visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID.
To fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed, and then a cloth desensitization network (CDN) is designed.
A partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
- Score: 38.7806002518266
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Person reidentification (ReID) is an active research topic in machine
learning and computer vision, and many person ReID approaches have been
proposed; however, most of these methods assume that the same person has the
same clothes within a short time interval, and thus their visual appearance
must be similar. However, in an actual surveillance environment, a given person
has a great probability of changing clothes after a long time span, and they
also often take different personal belongings with them. When the existing
person ReID methods are applied in this type of case, almost all of them fail.
To date, only a few works have focused on the cloth-changing person ReID task,
but since it is very difficult to extract generalized and robust features for
representing people with different clothes, their performances need to be
improved. Moreover, visual-semantic information is often ignored. To solve
these issues, in this work, a novel multigranular visual-semantic embedding
algorithm (MVSE) is proposed for cloth-changing person ReID, where visual
semantic information and human attributes are embedded into the network, and
the generalized features of human appearance can be well learned to effectively
solve the problem of clothing changes. Specifically, to fully represent a
person with clothing changes, a multigranular feature representation scheme
(MGR) is employed to focus on the unchanged part of the human, and then a cloth
desensitization network (CDN) is designed to improve the feature robustness of
the approach for the person with different clothing, where different high-level
human attributes are fully utilized. Moreover, to further solve the issue of
pose changes and occlusion under different camera perspectives, a partially
semantically aligned network (PSA) is proposed to obtain the visual-semantic
information that is used to align the human attributes.
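The multigranular idea above can be illustrated with a small horizontal-partition pooling sketch: a backbone feature map is split into 1, 2, and 3 horizontal stripes, and each stripe is average-pooled into a part vector, so coarse parts see the whole body while finer parts isolate regions (head, legs) that stay stable when clothing changes. This is a generic multi-granularity scheme written for illustration; the function name, the granularity choices, and the plain-list "tensor" are assumptions, not the paper's actual MGR implementation.

```python
def multigranular_features(feat_map, granularities=(1, 2, 3)):
    """Pool a feature map into horizontal parts at several granularities.

    feat_map: a nested list of shape H x W x C (H rows, W columns,
    C channels per spatial cell). Returns one C-dim mean vector per
    part: 1 + 2 + 3 = 6 parts for the default granularities.
    """
    H = len(feat_map)
    C = len(feat_map[0][0])
    parts = []
    for g in granularities:
        stripe_h = H // g
        for i in range(g):
            # Slice the i-th horizontal stripe; the last stripe absorbs
            # any leftover rows when H is not divisible by g.
            if i < g - 1:
                rows = feat_map[i * stripe_h:(i + 1) * stripe_h]
            else:
                rows = feat_map[i * stripe_h:]
            # Average-pool every spatial cell in the stripe into one vector.
            cells = [cell for row in rows for cell in row]
            parts.append([sum(c[k] for c in cells) / len(cells)
                          for k in range(C)])
    return parts
```

In a real system each part vector would feed its own classifier head during training; at test time the parts are concatenated into the final descriptor.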
Related papers
- Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification [74.10897798660314]
Cloth-changing person Re-IDentification aims at recognizing the same person with clothing changes across non-overlapping cameras.
We propose the Content and Salient Semantics Collaboration framework, facilitating cross-parallel semantics interaction and refinement.
Our framework is simple yet effective, and the vital design is the Semantics Mining and Refinement (SMR) module.
arXiv Detail & Related papers (2024-05-26T15:17:28Z)
- Learning Clothing and Pose Invariant 3D Shape Representation for Long-Term Person Re-Identification [16.797826602710035]
We aim to extend LT-ReID beyond pedestrian recognition to include a wider range of real-world human activities.
This setting poses additional challenges due to the geometric misalignment and appearance ambiguity caused by the diversity of human pose and clothing.
We propose a new approach 3DInvarReID for disentangling identity from non-identity components.
arXiv Detail & Related papers (2023-08-21T11:51:46Z)
- Identity-Guided Collaborative Learning for Cloth-Changing Person Reidentification [29.200286257496714]
We propose a novel identity-guided collaborative learning scheme (IGCL) for cloth-changing person ReID.
First, we design a novel clothing attention stream to reasonably reduce the interference caused by clothing information.
Second, we propose a human semantic attention and body jigsaw stream to highlight the human semantic information and simulate different poses of the same identity.
Third, a pedestrian identity enhancement stream is further proposed to enhance the identity importance and extract more favorable identity robust features.
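A "body jigsaw" stream of the kind described above typically cuts an image into a grid of patches and permutes them so the network sees rearranged layouts of the same identity. The sketch below shows that augmentation idea in its simplest form; the function name, grid size, and nested-list image format are illustrative assumptions, not the paper's exact design.

```python
import random

def jigsaw_shuffle(image, grid=(4, 2), seed=None):
    """Split an image into grid patches and randomly permute them.

    image: nested list of H rows, each a list of W values; H and W
    must be divisible by the grid dimensions. Returns a new image of
    the same size with the patches rearranged.
    """
    rows, cols = grid
    H, W = len(image), len(image[0])
    ph, pw = H // rows, W // cols
    # Cut the image into rows * cols patches, row-major order.
    patches = [[[image[r * ph + i][c * pw + j] for j in range(pw)]
                for i in range(ph)]
               for r in range(rows) for c in range(cols)]
    rng = random.Random(seed)
    rng.shuffle(patches)
    # Reassemble the shuffled patches into a new image.
    out = [[0] * W for _ in range(H)]
    for idx, patch in enumerate(patches):
        r, c = divmod(idx, cols)
        for i in range(ph):
            for j in range(pw):
                out[r * ph + i][c * pw + j] = patch[i][j]
    return out
```

Because the pixel content is unchanged and only its layout moves, the augmented image keeps the same identity label, which is what lets the stream simulate pose variation.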
arXiv Detail & Related papers (2023-04-10T06:05:54Z)
- A Semantic-aware Attention and Visual Shielding Network for Cloth-changing Person Re-identification [29.026249268566303]
Cloth-changing person reidentification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes are changed.
Since the human appearance with different clothes exhibits large variations, it is very difficult for existing approaches to extract discriminative and robust feature representations.
This work proposes a novel semantic-aware attention and visual shielding network for cloth-changing person ReID.
arXiv Detail & Related papers (2022-07-18T05:38:37Z)
- Video Person Re-identification using Attribute-enhanced Features [49.68392018281875]
We propose a novel network architecture named Attribute Salience Assisted Network (ASA-Net) for attribute-assisted video person Re-ID.
To learn a better separation of the target from background, we propose to learn the visual attention from middle-level attribute instead of high-level identities.
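Attribute-guided attention of the kind this summary describes can be sketched as a softmax-weighted pooling: each spatial feature vector is scored against an attribute embedding (say, "backpack"), and positions that match the attribute dominate the pooled result. This is a generic attention sketch under assumed names and plain-list vectors, not ASA-Net's actual architecture.

```python
import math

def attribute_attention(features, attr_embedding):
    """Pool spatial features, weighted by similarity to an attribute.

    features: list of N spatial C-dim vectors (flattened feature map).
    attr_embedding: C-dim vector for one attribute.
    Returns the attention-pooled C-dim vector.
    """
    C = len(attr_embedding)
    # Dot-product score of each spatial position against the attribute.
    scores = [sum(f[k] * attr_embedding[k] for k in range(C))
              for f in features]
    # Softmax over positions (max-subtracted for numerical stability).
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Weighted sum of the spatial vectors.
    return [sum(w * f[k] for w, f in zip(weights, features))
            for k in range(C)]
```

With a strongly matching attribute embedding, the output collapses toward the matching position's feature vector, which is how the attention separates the target from the background.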
arXiv Detail & Related papers (2021-08-16T07:41:27Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as in a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation under cases like cloth changing or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Long-Term Cloth-Changing Person Re-identification [154.57752691285046]
Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times.
Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit.
In this work, we focus on a much more difficult yet practical setting where person matching is conducted over long-duration, e.g., over days and months.
arXiv Detail & Related papers (2020-05-26T11:27:21Z)
- Person Re-identification by Contour Sketch under Moderate Clothing Change [95.83034113646657]
Person re-id, the process of matching pedestrian images across different camera views, is an important task in visual surveillance.
In this work, we call the person re-id under clothing change the "cross-clothes person re-id".
Due to the lack of a large-scale dataset for cross-clothes person re-id, we contribute a new dataset that consists of 33698 images from 221 identities.
arXiv Detail & Related papers (2020-02-06T15:13:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.