Semantic Consistency and Identity Mapping Multi-Component Generative
Adversarial Network for Person Re-Identification
- URL: http://arxiv.org/abs/2104.13780v1
- Date: Wed, 28 Apr 2021 14:12:29 GMT
- Title: Semantic Consistency and Identity Mapping Multi-Component Generative
Adversarial Network for Person Re-Identification
- Authors: Amena Khatun, Simon Denman, Sridha Sridharan, Clinton Fookes
- Abstract summary: We propose a semantic consistency and identity mapping multi-component generative adversarial network (SC-IMGAN) which provides style adaptation from one to many domains.
Our proposed method outperforms state-of-the-art techniques on six challenging person Re-ID datasets.
- Score: 39.605062525247135
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In a real-world environment, person re-identification (Re-ID) is a
challenging task due to variations in lighting conditions, viewing angles, pose,
and occlusion. Despite recent performance gains, current person Re-ID
algorithms still degrade heavily when encountering these variations. To address
this problem, we propose a semantic consistency and identity mapping
multi-component generative adversarial network (SC-IMGAN) which provides style
adaptation from one to many domains. To ensure that transformed images are as
realistic as possible, we propose novel identity mapping and semantic
consistency losses to maintain identity across the diverse domains. For the
Re-ID task, we propose a joint verification-identification quartet network
which is trained with generated and real images, followed by an effective
quartet loss for verification. Our proposed method outperforms state-of-the-art
techniques on six challenging person Re-ID datasets: CUHK01, CUHK03, VIPeR,
PRID2011, iLIDS and Market-1501.
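The abstract names a semantic consistency loss and a quartet loss but does not give their formulations. As a rough illustration only, here is a minimal PyTorch sketch assuming an L1-based consistency term between semantic feature maps and a quadruplet-style margin formulation for the quartet loss; every function name and formula below is an assumption, not the paper's definition.

```python
# Hypothetical sketch of the two losses named in the abstract. The exact
# formulations are not stated there, so these forms are assumptions.
import torch
import torch.nn.functional as F

def semantic_consistency_loss(feat_real, feat_translated):
    # Assumed form: L1 distance between semantic feature maps of a source
    # image and its style-translated version, keeping identity content
    # stable across the domain transfer.
    return F.l1_loss(feat_translated, feat_real)

def quartet_loss(anchor, positive, negative1, negative2, margin=0.3):
    # Assumed quadruplet-style margin form: the anchor-positive distance
    # must undercut both the anchor-negative distance and the distance
    # between two further negative samples by the margin.
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative1)
    d_nn = F.pairwise_distance(negative1, negative2)
    return (F.relu(d_ap - d_an + margin) +
            F.relu(d_ap - d_nn + margin)).mean()
```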
Related papers
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN), which disentangles identity-related and identity-unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z)
- Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present Diffusion-ReID, a novel paradigm to efficiently augment and generate diverse images based on known identities.
Benefiting from this paradigm, we first create Diff-Person, a new large-scale person Re-ID dataset consisting of over 777K images from 5,183 identities (a generic sketch of training on mixed real and generated data appears after this list).
arXiv Detail & Related papers (2024-06-10T06:26:03Z)
- ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving [66.09976326184066]
ConsistentID is an innovative method crafted for diverse identity-preserving portrait generation under fine-grained multimodal facial prompts.
We present a fine-grained portrait dataset, FGID, with over 500,000 facial images, offering greater diversity and comprehensiveness than existing public facial datasets.
arXiv Detail & Related papers (2024-04-25T17:23:43Z)
- ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning [57.91881829308395]
Identity-preserving text-to-image generation (ID-T2I) has received significant attention due to its wide range of application scenarios like AI portrait and advertising.
We present ID-Aligner, a general feedback learning framework to enhance ID-T2I performance.
arXiv Detail & Related papers (2024-04-23T18:41:56Z)
- HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
Existing face de-identification methods, however, suffer from several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to address these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z)
- StyleID: Identity Disentanglement for Anonymizing Faces [4.048444203617942]
The main contribution of the paper is the design of a feature-preserving anonymization framework, StyleID.
As part of the contribution, we present a novel disentanglement metric, three complementary disentanglement methods, and new insights into identity disentanglement.
StyleID provides tunable privacy, has low computational complexity, and is shown to outperform current state-of-the-art solutions.
arXiv Detail & Related papers (2022-12-28T12:04:24Z)
- A Systematical Solution for Face De-identification [6.244117712209321]
Different tasks impose different requirements on face de-identification (De-ID).
We propose a systematical solution compatible with these De-ID operations.
Our method can flexibly de-identify face data in various ways, and the processed images retain high image quality.
arXiv Detail & Related papers (2021-07-19T02:02:51Z)
- Pose-driven Attention-guided Image Generation for Person Re-Identification [39.605062525247135]
We propose an end-to-end pose-driven generative adversarial network to generate multiple poses of a person.
A semantic-consistency loss is proposed to preserve the semantic information of the person during pose transfer.
We show that by incorporating the proposed approach in a person re-identification framework, realistic pose transferred images and state-of-the-art re-identification results can be achieved.
arXiv Detail & Related papers (2021-04-28T14:02:24Z)
- Cross-Resolution Adversarial Dual Network for Person Re-Identification and Beyond [59.149653740463435]
Person re-identification (re-ID) aims at matching images of the same person across camera views.
Due to varying distances between cameras and persons of interest, resolution mismatch can be expected.
We propose a novel generative adversarial network to address cross-resolution person re-ID.
arXiv Detail & Related papers (2020-02-19T07:21:38Z)
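A recurring pattern across the papers above is training the Re-ID network on pooled real and generated images: SC-IMGAN trains its quartet network with generated and real images, and Diffusion-ReID uses diffusion-generated data for pre-training. The following is a minimal, hypothetical PyTorch sketch of that data mixing; the loader settings and the cross-entropy identification loss are assumptions, not details taken from any of the papers.

```python
# Illustrative sketch only: pool real and GAN/diffusion-generated images
# into one training stream. The 1:1 pooling and loader settings below are
# assumptions, not from any paper listed above.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def make_mixed_loader(real_dataset, generated_dataset, batch_size=64):
    # Both datasets are assumed to yield (image_tensor, identity_label).
    mixed = ConcatDataset([real_dataset, generated_dataset])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True,
                      num_workers=4, drop_last=True)

def train_step(model, criterion, optimizer, images, labels):
    # One identification-loss step; verification terms (e.g. the quartet
    # loss sketched earlier) would be added on the embeddings.
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)  # e.g. cross-entropy over identities
    loss.backward()
    optimizer.step()
    return loss.item()
```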