Video Person Re-identification using Attribute-enhanced Features
- URL: http://arxiv.org/abs/2108.06946v1
- Date: Mon, 16 Aug 2021 07:41:27 GMT
- Title: Video Person Re-identification using Attribute-enhanced Features
- Authors: Tianrui Chai, Zhiyuan Chen, Annan Li, Jiaxin Chen, Xinyu Mei, Yunhong
Wang
- Abstract summary: We propose a novel network architecture named Attribute Salience Assisted Network (ASA-Net) for attribute-assisted video person Re-ID.
To learn a better separation of the target from the background, we propose to learn visual attention from middle-level attributes instead of high-level identities.
- Score: 49.68392018281875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video-based person re-identification (Re-ID), which aims to associate
people across non-overlapping cameras using surveillance video, is a challenging
task. Pedestrian attributes, such as gender, age and clothing characteristics,
contain rich and supplementary information but are less explored in video person
Re-ID. In this work, we propose a novel network architecture named Attribute
Salience Assisted Network (ASA-Net) for attribute-assisted video person Re-ID,
which achieves considerable improvement over existing works through two methods.
First, to learn a better separation of the target from the background, we propose
to learn visual attention from middle-level attributes instead of high-level
identities. The proposed Attribute Salient Region Enhance (ASRE) module can
attend more accurately to the pedestrian's body. Second, we found that many
identity-irrelevant but object- or subject-relevant factors, such as the view
angle and movement of the target pedestrian, can greatly influence the
two-dimensional appearance of a pedestrian. This problem can be mitigated by
investigating both identity-relevant and identity-irrelevant attributes via a
novel triplet loss, referred to as the Pose & Motion-Invariant (PMI) triplet loss.
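The PMI triplet loss is only described at a high level in the abstract. Below is a minimal PyTorch-style sketch of one plausible reading, in which positives that share the anchor's identity but differ most in identity-irrelevant (pose/motion) attributes are mined as hard positives, pushing the learned features to be invariant to pose and motion. The function name, tensor shapes, attribute representation, and mining rule are all assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of a PMI-style triplet loss; not the paper's exact formulation.
import torch
import torch.nn.functional as F


def pmi_style_triplet_loss(feats, ids, pose_motion_attrs, margin=0.3):
    """feats: (N, D) sequence features; ids: (N,) identity labels;
    pose_motion_attrs: (N, A) identity-irrelevant attribute vectors (pose, motion).
    Assumes a PK sampler, so every anchor has at least one positive in the batch."""
    dist = torch.cdist(feats, feats)                               # pairwise feature distances
    attr_dist = torch.cdist(pose_motion_attrs, pose_motion_attrs)  # pairwise attribute distances
    same_id = ids.unsqueeze(0) == ids.unsqueeze(1)
    eye = torch.eye(len(ids), dtype=torch.bool, device=feats.device)

    pos_mask = same_id & ~eye      # same identity, different sample
    neg_mask = ~same_id            # different identity

    # Hard positive: same identity but most dissimilar pose/motion attributes,
    # so matching it requires pose- and motion-invariant features.
    pos_score = attr_dist.masked_fill(~pos_mask, float('-inf'))
    hard_pos = pos_score.argmax(dim=1)
    d_ap = dist.gather(1, hard_pos.unsqueeze(1)).squeeze(1)

    # Hard negative: different identity, closest in feature space.
    d_an = dist.masked_fill(~neg_mask, float('inf')).min(dim=1).values

    return F.relu(d_ap - d_an + margin).mean()
```

With a PK sampler providing P identities times K clips per batch, a loss of this shape would drop in where a standard batch-hard triplet loss is normally used.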
Related papers
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN).
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z)
- Identity-Guided Collaborative Learning for Cloth-Changing Person Reidentification [29.200286257496714]
We propose a novel identity-guided collaborative learning scheme (IGCL) for cloth-changing person ReID.
First, we design a novel clothing attention stream to reasonably reduce the interference caused by clothing information.
Second, we propose a human semantic attention and body jigsaw stream to highlight the human semantic information and simulate different poses of the same identity.
Third, a pedestrian identity enhancement stream is further proposed to enhance the identity importance and extract more favorable identity robust features.
arXiv Detail & Related papers (2023-04-10T06:05:54Z)
- Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification [38.7806002518266]
This work proposes a novel visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID.
To fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed, and then a cloth desensitization network (CDN) is designed.
A partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
arXiv Detail & Related papers (2021-08-10T09:14:44Z)
- Multi-Attribute Enhancement Network for Person Search [7.85420914437147]
Person Search is designed to jointly solve the problems of Person Detection and Person Re-identification (Re-ID).
Visual character attributes play a key role in retrieving the query person; they have been explored in Re-ID but ignored in Person Search.
We introduce attribute learning into the model, allowing the use of attribute features for the retrieval task.
arXiv Detail & Related papers (2021-02-16T05:43:47Z)
- AttributeNet: Attribute Enhanced Vehicle Re-Identification [70.89289512099242]
We introduce AttributeNet (ANet) that jointly extracts identity-relevant features and attribute features.
We enable the interaction by distilling the ReID-helpful attribute feature and adding it into the general ReID feature to increase the discrimination power.
We validate the effectiveness of our framework on three challenging datasets.
arXiv Detail & Related papers (2021-02-07T19:51:02Z)
- PoseTrackReID: Dataset Description [97.7241689753353]
Pose information is helpful to disentangle useful feature information from background or occlusion noise.
With PoseTrackReID, we want to bridge the gap between person re-ID and multi-person pose tracking.
This dataset provides a good benchmark for current state-of-the-art methods on multi-frame person re-ID.
arXiv Detail & Related papers (2020-11-12T07:44:25Z)
- Identity-Aware Multi-Sentence Video Description [105.13845996039277]
We introduce an auxiliary task, Fill-in the Identity, which aims to predict persons' IDs consistently within a set of clips.
One of the key components is a gender-aware textual representation, as well as an additional gender prediction objective in the main model.
Experiments show that our proposed Fill-in the Identity model is superior to several baselines and recent works.
arXiv Detail & Related papers (2020-08-22T09:50:43Z)
- Learning Person Re-identification Models from Videos with Weak Supervision [53.53606308822736]
We introduce the problem of learning person re-identification models from videos with weak supervision.
We propose a multiple instance attention learning framework for person re-identification using such video-level labels.
The attention weights are obtained based on all person images instead of person tracklets in a video, making our learned model less affected by noisy annotations.
arXiv Detail & Related papers (2020-07-21T07:23:32Z)
- Attribute-aware Identity-hard Triplet Loss for Video-based Person Re-identification [51.110453988705395]
Video-based person re-identification (Re-ID) is an important computer vision task.
We introduce a new metric learning method called Attribute-aware Identity-hard Triplet Loss (AITL).
To achieve a complete model of video-based person Re-ID, a multi-task framework with an Attribute-driven Spatio-Temporal Attention (ASTA) mechanism is also proposed; a rough sketch of this style of attribute-driven attention is given below this entry.
arXiv Detail & Related papers (2020-06-13T09:15:38Z)
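Attribute-driven attention comes up both in the ASA-Net abstract (the ASRE module) and in the AITL entry above (the ASTA mechanism). The following is a minimal, self-contained sketch of temporal attention pooling driven by per-frame attribute embeddings. It is an illustration only: the module name, layer sizes, and the restriction to the temporal dimension are assumptions, not the published designs.

```python
# Illustrative attribute-driven temporal attention pooling (hypothetical module,
# not the ASRE or ASTA implementation).
import torch
import torch.nn as nn


class AttributeTemporalAttention(nn.Module):
    def __init__(self, feat_dim=2048, attr_dim=64, hidden=256):
        super().__init__()
        # Scores each frame from its attribute embedding.
        self.score = nn.Sequential(
            nn.Linear(attr_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, frame_feats, attr_embeds):
        """frame_feats: (B, T, feat_dim); attr_embeds: (B, T, attr_dim)."""
        weights = torch.softmax(self.score(attr_embeds), dim=1)  # (B, T, 1)
        return (weights * frame_feats).sum(dim=1)                # (B, feat_dim)


# Usage with random tensors standing in for backbone outputs.
pool = AttributeTemporalAttention()
video_feat = pool(torch.randn(4, 8, 2048), torch.randn(4, 8, 64))
print(video_feat.shape)  # torch.Size([4, 2048])
```

A spatial variant in the spirit of ASRE would apply the same scoring idea over feature-map locations within a frame rather than over frames.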
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.