Pose-driven Attention-guided Image Generation for Person
Re-Identification
- URL: http://arxiv.org/abs/2104.13773v1
- Date: Wed, 28 Apr 2021 14:02:24 GMT
- Title: Pose-driven Attention-guided Image Generation for Person
Re-Identification
- Authors: Amena Khatun, Simon Denman, Sridha Sridharan, Clinton Fookes
- Abstract summary: We propose an end-to-end pose-driven generative adversarial network to generate multiple poses of a person.
A semantic-consistency loss is proposed to preserve the semantic information of the person during pose transfer.
We show that by incorporating the proposed approach in a person re-identification framework, realistic pose transferred images and state-of-the-art re-identification results can be achieved.
- Score: 39.605062525247135
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Person re-identification (re-ID) concerns the matching of subject images
across different camera views in a multi-camera surveillance system. One of the
major challenges in person re-ID is pose variations across the camera network,
which significantly affects the appearance of a person. Existing development
data lack adequate pose variations to carry out effective training of person
re-ID systems. To solve this issue, in this paper we propose an end-to-end
pose-driven attention-guided generative adversarial network, to generate
multiple poses of a person. We propose to attentively learn and transfer the
subject pose through an attention mechanism. A semantic-consistency loss is
proposed to preserve the semantic information of the person during pose
transfer. To ensure that fine image details remain realistic after pose
translation, an appearance discriminator is used, while a pose discriminator
ensures that the pose of the transferred images exactly matches the target pose.
We show that by incorporating the proposed approach in a person
re-identification framework, realistic pose transferred images and
state-of-the-art re-identification results can be achieved.
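To make the described training objective concrete, the sketch below shows one way such a setup could be wired in PyTorch: a generator conditioned on a target pose map, an appearance discriminator judging image realism, a pose discriminator judging image-pose consistency, and a semantic-consistency term computed with a frozen feature extractor. All module definitions, channel counts, and the loss weight `lam_sem` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a pose-conditioned GAN training step with an appearance
# discriminator, a pose discriminator, and a semantic-consistency term.
# Everything here is an assumption for illustration, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy encoder-decoder conditioned on a target pose heatmap (e.g. 18 joints)."""
    def __init__(self, img_ch=3, pose_ch=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + pose_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, pose):
        return self.net(torch.cat([img, pose], dim=1))

class PatchDiscriminator(nn.Module):
    """Shared toy architecture for both discriminators; only input channels differ."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def gan_loss(logits, is_real):
    target = torch.ones_like(logits) if is_real else torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

def semantic_consistency(feat_extractor, src_img, fake_img):
    """Assumed form: match features of a frozen extractor (e.g. a re-ID backbone)."""
    with torch.no_grad():
        src_feat = feat_extractor(src_img)
    return F.l1_loss(feat_extractor(fake_img), src_feat)

def train_step(G, D_app, D_pose, feat, src_img, tgt_img, tgt_pose,
               opt_g, opt_d, lam_sem=10.0):
    fake = G(src_img, tgt_pose)

    # Appearance discriminator sees images; pose discriminator sees (image, pose) pairs.
    opt_d.zero_grad()
    d_loss = (gan_loss(D_app(tgt_img), True) + gan_loss(D_app(fake.detach()), False)
              + gan_loss(D_pose(torch.cat([tgt_img, tgt_pose], 1)), True)
              + gan_loss(D_pose(torch.cat([fake.detach(), tgt_pose], 1)), False))
    d_loss.backward()
    opt_d.step()

    # Generator tries to fool both discriminators while preserving the person's semantics.
    opt_g.zero_grad()
    g_loss = (gan_loss(D_app(fake), True)
              + gan_loss(D_pose(torch.cat([fake, tgt_pose], 1)), True)
              + lam_sem * semantic_consistency(feat, src_img, fake))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Here `D_app` would be a `PatchDiscriminator(3)` and `D_pose` a `PatchDiscriminator(3 + 18)`, since the pose discriminator inspects the image concatenated with its conditioning pose map.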
Related papers
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN).
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
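As a rough illustration of the identity-shuffling idea: split an encoded feature map into identity-related and identity-unrelated parts, swap the identity-related parts between two images of the same person, and require a decoder to reconstruct the originals. The encoder/decoder definitions and the 50/50 channel split below are assumptions, not IS-GAN's architecture.

```python
# Hedged sketch of identity shuffling between two images of the same person.
# The feature split (first half of channels = identity-related) is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        f = self.conv(x)
        half = f.size(1) // 2
        return f[:, :half], f[:, half:]   # (identity-related, identity-unrelated)

class Decoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh())

    def forward(self, f_id, f_other):
        return self.conv(torch.cat([f_id, f_other], dim=1))

def identity_shuffle_loss(enc, dec, img_a, img_b):
    """img_a and img_b show the same person; swapping identity features between
    them should still reconstruct each input image."""
    id_a, rest_a = enc(img_a)
    id_b, rest_b = enc(img_b)
    rec_a = dec(id_b, rest_a)  # identity features taken from the other image
    rec_b = dec(id_a, rest_b)
    return F.l1_loss(rec_a, img_a) + F.l1_loss(rec_b, img_b)
```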
arXiv Detail & Related papers (2024-09-09T02:09:49Z) - Towards Privacy-Preserving Person Re-identification via Person Identify
Shift [19.212691296927165]
The privacy of the pedestrian images used by person re-identification (ReID) methods must be preserved.
We propose a novel de-identification method designed explicitly for person ReID, named Person Identify Shift (PIS).
PIS shifts each pedestrian image from its current identity to a new one, while the resulting images still preserve the relative identities.
arXiv Detail & Related papers (2022-07-15T06:58:41Z) - Semantic Consistency and Identity Mapping Multi-Component Generative
Adversarial Network for Person Re-Identification [39.605062525247135]
We propose a semantic consistency and identity mapping multi-component generative adversarial network (SC-IMGAN) which provides style adaptation from one to many domains.
Our proposed method outperforms state-of-the-art techniques on six challenging person Re-ID datasets.
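As a minimal sketch of the two named ingredients, semantic consistency and identity mapping, the snippet below writes them as generic loss terms; `G_xy` (a hypothetical translator from domain X to domain Y) and `reid_backbone` (a frozen feature extractor) are illustrative stand-ins, and the paper's actual multi-component architecture and weightings are not reproduced.

```python
# Illustrative loss terms only; not SC-IMGAN's implementation.
import torch.nn.functional as F

def identity_mapping_loss(G_xy, y):
    # A genuine domain-Y image fed to the X->Y generator should come back unchanged.
    return F.l1_loss(G_xy(y), y)

def semantic_consistency_loss(reid_backbone, x, x_translated):
    # The translated image should keep the person's identity-level features.
    return F.l1_loss(reid_backbone(x_translated), reid_backbone(x))
```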
arXiv Detail & Related papers (2021-04-28T14:12:29Z) - Pose Invariant Person Re-Identification using Robust Pose-transformation
GAN [11.338815177557645]
Person re-identification (re-ID) aims to retrieve a person's images from an image gallery, given a single instance of the person of interest.
Despite several advancements, learning discriminative, identity-sensitive and viewpoint-invariant features for robust person re-identification remains a major challenge owing to the large pose variations of humans.
This paper proposes a re-ID pipeline that utilizes the image generation capability of Generative Adversarial Networks combined with pose regression and feature fusion to achieve pose invariant feature learning.
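A small sketch of the feature-fusion step mentioned above: descriptors from the original image and from its GAN-generated pose-transformed views are pooled into one pose-invariant feature. The element-wise max pooling used here is an assumption; the paper may fuse features differently.

```python
# Toy feature fusion over generated multi-pose views (pooling choice is assumed).
import torch

def fuse_pose_invariant_feature(backbone, original, generated_views):
    """original: (1, 3, H, W); generated_views: list of (1, 3, H, W) tensors."""
    feats = [backbone(original)] + [backbone(v) for v in generated_views]
    return torch.stack(feats, dim=0).max(dim=0).values  # element-wise max across views
```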
arXiv Detail & Related papers (2021-04-11T15:47:03Z) - Progressive and Aligned Pose Attention Transfer for Person Image
Generation [59.87492938953545]
This paper proposes a new generative adversarial network for pose transfer, i.e., transferring the pose of a given person to a target pose.
We use two types of blocks, namely the Pose-Attentional Transfer Block (PATB) and the Aligned Pose-Attentional Transfer Block (APATB).
We verify the efficacy of the model on the Market-1501 and DeepFashion datasets, using quantitative and qualitative measures.
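In the spirit of a pose-attentional transfer block, the sketch below keeps two streams: a pose stream produces a sigmoid attention mask that gates a residual update of the image stream, and the pose stream is then refreshed from the updated image features. Channel counts and the exact update rules are illustrative assumptions, not the paper's definitions of PATB or APATB.

```python
# Rough sketch of a pose-attentional-transfer-style block (assumed form).
import torch
import torch.nn as nn

class PoseAttentionalBlock(nn.Module):
    def __init__(self, img_ch=128, pose_ch=128):
        super().__init__()
        self.img_conv = nn.Sequential(
            nn.Conv2d(img_ch, img_ch, 3, padding=1),
            nn.BatchNorm2d(img_ch), nn.ReLU(inplace=True))
        self.mask_conv = nn.Conv2d(pose_ch, img_ch, 3, padding=1)
        self.pose_conv = nn.Sequential(
            nn.Conv2d(pose_ch + img_ch, pose_ch, 3, padding=1),
            nn.BatchNorm2d(pose_ch), nn.ReLU(inplace=True))

    def forward(self, f_img, f_pose):
        mask = torch.sigmoid(self.mask_conv(f_pose))    # where to move appearance
        f_img = mask * self.img_conv(f_img) + f_img     # gated residual update
        f_pose = self.pose_conv(torch.cat([f_pose, f_img], dim=1))
        return f_img, f_pose
```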
arXiv Detail & Related papers (2021-03-22T07:24:57Z) - PoNA: Pose-guided Non-local Attention for Human Pose Transfer [105.14398322129024]
We propose a new human pose transfer method using a generative adversarial network (GAN) with simplified cascaded blocks.
Our model generates sharper and more realistic images with rich details, while having fewer parameters and faster speed.
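The generic pattern behind pose-guided non-local attention can be sketched as cross-attention over spatial positions, with queries from pose features and keys/values from image features; this follows the standard non-local formulation and is not PoNA's exact simplified block.

```python
# Generic pose-guided non-local (cross-)attention sketch; tensors are (B, C, H, W).
import torch
import torch.nn as nn

class PoseGuidedNonLocal(nn.Module):
    def __init__(self, ch=128, inner=64):
        super().__init__()
        self.to_q = nn.Conv2d(ch, inner, 1)   # queries from pose features
        self.to_k = nn.Conv2d(ch, inner, 1)   # keys from image features
        self.to_v = nn.Conv2d(ch, ch, 1)      # values from image features

    def forward(self, f_img, f_pose):
        b, c, h, w = f_img.shape
        q = self.to_q(f_pose).flatten(2).transpose(1, 2)          # (B, HW, inner)
        k = self.to_k(f_img).flatten(2)                           # (B, inner, HW)
        v = self.to_v(f_img).flatten(2).transpose(1, 2)           # (B, HW, C)
        attn = torch.softmax(q @ k / (k.size(1) ** 0.5), dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return f_img + out                                        # residual connection
```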
arXiv Detail & Related papers (2020-12-13T12:38:29Z) - PoseTrackReID: Dataset Description [97.7241689753353]
Pose information helps disentangle useful feature information from background or occlusion noise.
With PoseTrackReID, we want to bridge the gap between person re-ID and multi-person pose tracking.
This dataset provides a good benchmark for current state-of-the-art methods on multi-frame person re-ID.
arXiv Detail & Related papers (2020-11-12T07:44:25Z) - Person image generation with semantic attention network for person
re-identification [9.30413920076019]
We propose a novel person pose-guided image generation method, which is called the semantic attention network.
The network consists of several semantic attention blocks, where each block attends to preserve and update the pose code and the clothing textures.
Compared with other methods, our network can better characterize body shape while simultaneously preserving clothing attributes.
arXiv Detail & Related papers (2020-08-18T12:18:51Z) - Cross-Resolution Adversarial Dual Network for Person Re-Identification
and Beyond [59.149653740463435]
Person re-identification (re-ID) aims at matching images of the same person across camera views.
Due to varying distances between cameras and persons of interest, resolution mismatch can be expected.
We propose a novel generative adversarial network to address cross-resolution person re-ID.
arXiv Detail & Related papers (2020-02-19T07:21:38Z) - Uncertainty-Aware Multi-Shot Knowledge Distillation for Image-Based
Object Re-Identification [93.39253443415392]
We propose exploiting multiple shots of the same identity to guide the feature learning of each individual image.
It consists of a teacher network (T-net) that learns the comprehensive features from multiple images of the same object, and a student network (S-net) that takes a single image as input.
We validate the effectiveness of our approach on the popular vehicle re-id and person re-id datasets.
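A simplified teacher-student sketch of the multi-shot distillation idea follows: the teacher aggregates features over several images of one identity, and a single-image student is trained to match that aggregate. The mean-pooled teacher and the plain L2 matching loss are assumptions, and the paper's uncertainty weighting is omitted here.

```python
# Simplified multi-shot distillation loss (uncertainty weighting omitted).
import torch
import torch.nn.functional as F

def multishot_distillation_loss(t_net, s_net, shots, single):
    """shots: (N, 3, H, W) images of one identity; single: (1, 3, H, W) image."""
    with torch.no_grad():
        teacher_feat = t_net(shots).mean(dim=0, keepdim=True)  # aggregate over shots
    student_feat = s_net(single)
    return F.mse_loss(student_feat, teacher_feat)
```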
arXiv Detail & Related papers (2020-01-15T09:39:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.