ExFaceGAN: Exploring Identity Directions in GAN's Learned Latent Space
for Synthetic Identity Generation
- URL: http://arxiv.org/abs/2307.05151v2
- Date: Tue, 18 Jul 2023 21:40:51 GMT
- Authors: Fadi Boutros, Marcel Klemt, Meiling Fang, Arjan Kuijper, Naser Damer
- Abstract summary: We propose ExFaceGAN, a framework that disentangles identity information in the latent spaces of pretrained GANs.
By sampling from each side of the learned boundary, ExFaceGAN can generate multiple samples of a synthetic identity.
As an example, we empirically prove that data generated by ExFaceGAN can be successfully used to train face recognition models.
- Score: 16.494722503803196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models have recently achieved impressive results in
generating realistic face images of random synthetic identities.
To generate multiple samples of a given synthetic identity, previous works
proposed to disentangle the latent space of GANs by incorporating additional
supervision or regularization, enabling the manipulation of certain attributes.
Others proposed to disentangle specific factors in the latent spaces of
unconditional pretrained GANs to control their output, which also requires
supervision from attribute classifiers. Moreover, these attributes are
entangled in the GAN's latent space, making it difficult to manipulate them
without affecting the identity information. In this work, we propose
ExFaceGAN, a framework that disentangles identity information in the latent
spaces of pretrained GANs, enabling the generation of multiple samples of any
synthetic identity. Given a reference latent code of any synthetic image and
the latent space of a pretrained GAN, ExFaceGAN learns an identity directional
boundary that splits the latent space into two sub-spaces, containing latent
codes of samples that are either identity-similar or identity-dissimilar to
the reference image. By sampling from each side of the boundary, ExFaceGAN
can generate multiple samples of a synthetic identity without the need to
design a dedicated architecture or rely on supervision from attribute
classifiers. We demonstrate the generalizability and effectiveness of
ExFaceGAN by integrating it into the learned latent spaces of three SOTA GAN
approaches. As an example of the practical benefit of ExFaceGAN, we
empirically show that data generated by ExFaceGAN can be successfully used to
train face recognition models (https://github.com/fdbtrs/ExFaceGAN).
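The boundary-learning idea the abstract describes can be sketched with a linear classifier in latent space. The snippet below is a minimal illustration, not the authors' implementation: the 512-dimensional latent size, the synthetic placeholder labels, and the use of `LinearSVC` are all assumptions (in the paper, identity-similar/dissimilar labels would come from comparing generated faces to the reference image with a face recognition model).

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-in data: latent codes sampled around a reference
# code, labeled identity-similar (1) or identity-dissimilar (0).
# These labels are placeholders for illustration only.
ref = rng.normal(size=512)
codes = ref + rng.normal(scale=0.5, size=(200, 512))
labels = (codes @ ref > ref @ ref).astype(int)

# Learn a linear identity boundary in the latent space.
svm = LinearSVC(C=1.0, max_iter=5000).fit(codes, labels)
w = svm.coef_[0]
n = w / np.linalg.norm(w)  # unit normal of the identity boundary

# Push a random latent code onto the identity-similar side of the
# boundary; decoding such a code with the GAN would yield another
# image of the reference identity.
z = rng.normal(size=512)
d = svm.decision_function([z])[0]
z_similar = z + (abs(d) / np.linalg.norm(w) + 1.0) * n
```

Sampling codes on the opposite side of the boundary would, by the same logic, produce identity-dissimilar faces, which is how the framework yields multiple distinct synthetic identities from one latent space.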
Related papers
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G$^2$Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models [58.37569942713456]
CharacterFactory is a framework that allows sampling new characters with consistent identities in the latent space of GANs.
The whole model only takes 10 minutes for training, and can sample infinite characters end-to-end during inference.
arXiv Detail & Related papers (2024-04-24T06:15:31Z)
- CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based-prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
arXiv Detail & Related papers (2024-02-08T23:51:49Z)
- Identity-driven Three-Player Generative Adversarial Network for Synthetic-based Face Recognition [14.73254194339562]
We present a three-player generative adversarial network (GAN) framework, namely IDnet, that enables the integration of identity information into the generation process.
We empirically show that IDnet synthetic images exhibit higher identity discrimination than those of a conventional two-player GAN.
We demonstrate the applicability of IDnet data for training face recognition models by evaluating these models on a wide set of face recognition benchmarks.
arXiv Detail & Related papers (2023-04-30T00:04:27Z)
- Haven't I Seen You Before? Assessing Identity Leakage in Synthetic Irises [4.142375560633827]
This paper presents an analysis of three different iris matchers at varying points in the GAN training process to diagnose where and when authentic training samples are at risk of leaking through the generative process.
Our results show that while most synthetic samples do not show signs of identity leakage, a handful of generated samples match authentic (training) samples nearly perfectly, with consensus across all matchers.
arXiv Detail & Related papers (2022-11-03T00:34:47Z)
- High-resolution Face Swapping via Latent Semantics Disentanglement [50.23624681222619]
We present a novel high-resolution hallucination face swapping method using the inherent prior knowledge of a pre-trained GAN model.
We explicitly disentangle the latent semantics by utilizing the progressive nature of the generator.
We extend our method to video face swapping by enforcing two-temporal constraints on the latent space and the image space.
arXiv Detail & Related papers (2022-03-30T00:33:08Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
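The subspace-projection idea mentioned in the InterFaceGAN summary can be sketched in a few lines. This is an illustrative assumption-laden sketch, not the paper's code: the 512-dimensional latent size and the random boundary normals stand in for normals that InterFaceGAN obtains from linear classifiers trained on labeled latent codes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative unit normals of two semantic boundaries (say, "age"
# and "eyeglasses") found in a GAN's latent space.
n1 = rng.normal(size=512)
n1 /= np.linalg.norm(n1)
n2 = rng.normal(size=512)
n2 /= np.linalg.norm(n2)

# Conditional manipulation: project n1 onto the subspace orthogonal
# to n2, so moving along the result edits semantic 1 while leaving
# the latent code's position relative to boundary 2 unchanged.
n1_cond = n1 - (n1 @ n2) * n2
n1_cond /= np.linalg.norm(n1_cond)

z = rng.normal(size=512)
z_edited = z + 3.0 * n1_cond  # step of 3.0 along the projected direction
```

Because the edit direction is orthogonal to the second boundary's normal, the edited code's signed distance to that boundary is unchanged, which is the sense in which the two semantics are "better disentangled" by projection.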
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.