Pose Invariant Person Re-Identification using Robust Pose-transformation
GAN
- URL: http://arxiv.org/abs/2105.00930v1
- Date: Sun, 11 Apr 2021 15:47:03 GMT
- Title: Pose Invariant Person Re-Identification using Robust Pose-transformation
GAN
- Authors: Arnab Karmakar and Deepak Mishra
- Abstract summary: Person re-identification (re-ID) aims to retrieve a person's images from an image gallery, given a single instance of the person of interest.
Despite several advancements, learning discriminative identity-sensitive and viewpoint invariant features for robust Person Re-identification is a major challenge owing to large pose variation of humans.
This paper proposes a re-ID pipeline that utilizes the image generation capability of Generative Adversarial Networks combined with pose regression and feature fusion to achieve pose invariant feature learning.
- Score: 11.338815177557645
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Person re-identification (re-ID) aims to retrieve a person's images from an
image gallery, given a single instance of the person of interest. Despite
several advancements, learning discriminative identity-sensitive and viewpoint
invariant features for robust Person Re-identification is a major challenge
owing to large pose variation of humans. This paper proposes a re-ID pipeline
that utilizes the image generation capability of Generative Adversarial
Networks combined with pose regression and feature fusion to achieve pose
invariant feature learning. The objective is to model a given person under
different viewpoints and large pose changes and extract the most discriminative
features from all the appearances. The pose transformational GAN (pt-GAN)
module is trained to generate a person's image in any given pose. In order to
identify the most significant poses for discriminative feature extraction, a
Pose Regression module is proposed. The given instance of the person is
modelled in varying poses and these features are effectively combined through
the Feature Fusion Network. The final re-ID model consisting of these 3
sub-blocks, alleviates the pose dependence in person re-ID and outperforms the
state-of-the-art GAN-based models for re-ID on 4 benchmark datasets. The
proposed model is robust to occlusion, scale and illumination variations, thereby
outperforming the state-of-the-art models in terms of improvement over the baseline.
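The pipeline above can be sketched in a few lines. This is a minimal, purely illustrative mock-up of the three-stage design (pt-GAN generation, Pose Regression for pose selection, Feature Fusion): the function names, feature dimensions, and the softmax-weighted fusion rule are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pt_gan_generate(image_feat, pose_code):
    # Stand-in for the pt-GAN module: render the person under a target pose.
    # Here we simply perturb the input feature toward the pose code.
    return image_feat + 0.1 * pose_code

def pose_regression_scores(pose_bank, query_feat):
    # Stand-in for the Pose Regression module: score each candidate pose
    # by softmax-normalised similarity to the query representation, so the
    # most discriminative poses receive the largest weights.
    logits = pose_bank @ query_feat
    e = np.exp(logits - logits.max())
    return e / e.sum()

def feature_fusion(per_pose_feats, weights):
    # Stand-in for the Feature Fusion Network: combine per-pose features
    # into a single pose-invariant descriptor via a weighted sum.
    return (weights[:, None] * per_pose_feats).sum(axis=0)

D, K = 128, 8                        # feature dimension, number of candidate poses
query = rng.normal(size=D)           # feature of the single given instance
pose_bank = rng.normal(size=(K, D))  # K candidate pose codes

per_pose = np.stack([pt_gan_generate(query, p) for p in pose_bank])
weights = pose_regression_scores(pose_bank, query)
descriptor = feature_fusion(per_pose, weights)  # pose-invariant descriptor, shape (D,)
```

The fused `descriptor` would then replace the raw single-view feature when matching against the gallery.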
Related papers
- Exploring Stronger Transformer Representation Learning for Occluded Person Re-Identification [2.552131151698595]
We propose SSSC-TransReID, a novel transformer-based person re-identification framework that combines self-supervision and supervision.
We designed a self-supervised contrastive learning branch, which can enhance the feature representation for person re-identification without negative samples or additional pre-training.
Our proposed model consistently obtains superior re-ID performance and outperforms the state-of-the-art ReID methods by large margins in mean average precision (mAP) and Rank-1 accuracy.
arXiv Detail & Related papers (2024-10-21T03:17:25Z)
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN)
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z)
- Pose-dIVE: Pose-Diversified Augmentation with Diffusion Model for Person Re-Identification [28.794827024749658]
Pose-dIVE is a novel data augmentation approach that incorporates sparse and underrepresented human pose and camera viewpoint examples into the training data.
Our objective is to augment the training dataset to enable existing Re-ID models to learn features unbiased by human pose and camera viewpoint variations.
arXiv Detail & Related papers (2024-06-23T07:48:21Z)
- Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present a novel paradigm Diffusion-ReID to efficiently augment and generate diverse images based on known identities.
Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset Diff-Person, which consists of over 777K images from 5,183 identities.
arXiv Detail & Related papers (2024-06-10T06:26:03Z)
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We make two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z)
- Dynamic Prototype Mask for Occluded Person Re-Identification [88.7782299372656]
Existing methods mainly address this issue by employing body clues provided by an extra network to distinguish the visible part.
We propose a novel Dynamic Prototype Mask (DPM) based on two pieces of self-evident prior knowledge.
Under this condition, the occluded representation could be well aligned in a selected subspace spontaneously.
arXiv Detail & Related papers (2022-07-19T03:31:13Z)
- Pose-driven Attention-guided Image Generation for Person Re-Identification [39.605062525247135]
We propose an end-to-end pose-driven generative adversarial network to generate multiple poses of a person.
A semantic-consistency loss is proposed to preserve the semantic information of the person during pose transfer.
We show that by incorporating the proposed approach in a person re-identification framework, realistic pose transferred images and state-of-the-art re-identification results can be achieved.
arXiv Detail & Related papers (2021-04-28T14:02:24Z)
- Resolution-invariant Person ReID Based on Feature Transformation and Self-weighted Attention [14.777001614779806]
Person Re-identification (ReID) is a critical computer vision task which aims to match the same person in images or video sequences.
We propose a novel two-stream network with a lightweight resolution association ReID feature transformation (RAFT) module and a self-weighted attention (SWA) ReID module.
Both modules are jointly trained to get a resolution-invariant representation.
arXiv Detail & Related papers (2021-01-12T15:22:41Z)
- PoNA: Pose-guided Non-local Attention for Human Pose Transfer [105.14398322129024]
We propose a new human pose transfer method using a generative adversarial network (GAN) with simplified cascaded blocks.
Our model generates sharper and more realistic images with rich details, while having fewer parameters and faster speed.
arXiv Detail & Related papers (2020-12-13T12:38:29Z)
- Style Normalization and Restitution for Generalizable Person Re-identification [89.482638433932]
We design a generalizable person ReID framework which trains a model on source domains yet is able to generalize/perform well on target domains.
We propose a simple yet effective Style Normalization and Restitution (SNR) module.
Our models empowered by the SNR modules significantly outperform the state-of-the-art domain generalization approaches on multiple widely-used person ReID benchmarks.
arXiv Detail & Related papers (2020-05-22T07:15:10Z)
- Cross-Resolution Adversarial Dual Network for Person Re-Identification and Beyond [59.149653740463435]
Person re-identification (re-ID) aims at matching images of the same person across camera views.
Due to varying distances between cameras and persons of interest, resolution mismatch can be expected.
We propose a novel generative adversarial network to address cross-resolution person re-ID.
arXiv Detail & Related papers (2020-02-19T07:21:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.