MorphGAN: One-Shot Face Synthesis GAN for Detecting Recognition Bias
- URL: http://arxiv.org/abs/2012.05225v2
- Date: Thu, 10 Dec 2020 18:48:22 GMT
- Title: MorphGAN: One-Shot Face Synthesis GAN for Detecting Recognition Bias
- Authors: Nataniel Ruiz, Barry-John Theobald, Anurag Ranjan, Ahmed Hussein Abdelaziz, Nicholas Apostoloff
- Abstract summary: We describe a simulator that applies specific head pose and facial expression adjustments to images of previously unseen people.
We show that augmenting small datasets of faces with new poses and expressions improves recognition performance by up to 9%, depending on the augmentation and data scarcity.
- Score: 13.162012586770576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To detect bias in face recognition networks, it can be useful to probe a
network under test using samples in which only specific attributes vary in some
controlled way. However, capturing a sufficiently large dataset with specific
control over the attributes of interest is difficult. In this work, we describe
a simulator that applies specific head pose and facial expression adjustments
to images of previously unseen people. The simulator first fits a 3D morphable
model to a provided image, applies the desired head pose and facial expression
controls, then renders the model into an image. Next, a conditional Generative
Adversarial Network (GAN) conditioned on the original image and the rendered
morphable model is used to produce the image of the original person with the
new facial expression and head pose. We call this conditional GAN -- MorphGAN.
Images generated using MorphGAN conserve the identity of the person in the
original image, and the provided control over head pose and facial expression
allows test sets to be created to identify robustness issues of a facial
recognition deep network with respect to pose and expression. Images generated
by MorphGAN can also serve as data augmentation when training data are scarce.
We show that augmenting small datasets of faces with new poses and
expressions improves recognition performance by up to 9%, depending on the
augmentation and data scarcity.
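The abstract describes a two-stage pipeline: a 3D morphable model is fitted to the input photo, given new pose and expression parameters, and rendered; a conditional GAN then maps the original image together with that render to the edited face. The PyTorch sketch below only illustrates how such conditioning could be wired up (source image and render concatenated along the channel axis); the `fit_and_render_3dmm` placeholder, the toy encoder-decoder, and all layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of a render-conditioned generator in the spirit of the
# pipeline above. The 3DMM stage is a stub; the network is a toy example.
import torch
import torch.nn as nn


def fit_and_render_3dmm(image: torch.Tensor, yaw: float, pitch: float,
                        expression: torch.Tensor) -> torch.Tensor:
    """Placeholder for 3DMM fitting + re-posing + rendering.

    A real implementation would fit a morphable model to `image`, override
    its pose/expression parameters, and render the result. Here we just
    return a tensor of the same shape as a stand-in.
    """
    return torch.zeros_like(image)


class ToyConditionalGenerator(nn.Module):
    """Encoder-decoder conditioned on the source image and the 3DMM render,
    concatenated along the channel dimension (3 + 3 = 6 input channels)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, source: torch.Tensor, render: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(torch.cat([source, render], dim=1)))


if __name__ == "__main__":
    source = torch.rand(1, 3, 128, 128)          # single image of an unseen person
    render = fit_and_render_3dmm(source, yaw=20.0, pitch=0.0,
                                 expression=torch.zeros(10))
    generator = ToyConditionalGenerator()
    edited = generator(source, render)           # same identity, new pose/expression
    print(edited.shape)                          # torch.Size([1, 3, 128, 128])
```

Outputs generated this way, varied in pose and expression per identity, are the kind of synthetic samples the abstract reports adding to small training sets for the augmentation experiments.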
Related papers
- Face Feature Visualisation of Single Morphing Attack Detection [13.680968065638108]
This paper proposes an explainable visualisation of different face feature extraction algorithms.
It enables the detection of bona fide and morphing images for single morphing attack detection.
The visualisation may help to develop a Graphical User Interface for border policies.
arXiv Detail & Related papers (2023-04-25T17:51:23Z)
- SARGAN: Spatial Attention-based Residuals for Facial Expression Manipulation [1.7056768055368383]
We present a novel method named SARGAN that addresses the limitations of existing facial expression manipulation methods from three perspectives.
We exploit a symmetric encoder-decoder network to attend to facial features at multiple scales.
Our proposed model performs significantly better than state-of-the-art methods.
arXiv Detail & Related papers (2023-03-30T08:15:18Z)
- Disentangling Identity and Pose for Facial Expression Recognition [54.50747989860957]
We propose an identity and pose disentangled facial expression recognition (IPD-FER) model to learn more discriminative feature representation.
For the identity encoder, a well-pretrained face recognition model is used and kept fixed during training, which alleviates the restriction on specific expression training data.
By comparing the difference between synthesized neutral and expressional images of the same individual, the expression component is further disentangled from identity and pose.
arXiv Detail & Related papers (2022-08-17T06:48:13Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- A 3D GAN for Improved Large-pose Facial Recognition [3.791440300377753]
Facial recognition using deep convolutional neural networks relies on the availability of large datasets of face images.
Recent studies have shown that current methods of disentangling pose from identity are inadequate.
In this work we incorporate a 3D morphable model into the generator of a GAN in order to learn a nonlinear texture model from in-the-wild images.
This allows generation of new, synthetic identities, and manipulation of pose, illumination and expression without compromising the identity.
arXiv Detail & Related papers (2020-12-18T22:41:15Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection; a toy sketch of this projection follows this entry.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
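The subspace projection mentioned in the InterFaceGAN entry can be illustrated in a few lines of NumPy: given the normal vectors of two semantic hyperplanes in the GAN latent space, removing the component of one along the other yields a direction that edits the first attribute while, to first order, leaving the second unchanged. The directions, latent dimensionality, and the commented-out generator call below are stand-ins for illustration, not values from the paper.

```python
# Toy illustration of latent editing along a conditionally projected direction.
import numpy as np


def project_out(primal: np.ndarray, conditioned: np.ndarray) -> np.ndarray:
    """Return `primal` with its component along `conditioned` removed,
    renormalized to unit length."""
    conditioned = conditioned / np.linalg.norm(conditioned)
    projected = primal - np.dot(primal, conditioned) * conditioned
    return projected / np.linalg.norm(projected)


rng = np.random.default_rng(0)
latent_dim = 512                                 # assumed latent size
z = rng.standard_normal(latent_dim)              # latent code of one sampled face
n_pose = rng.standard_normal(latent_dim)         # stand-in for a learned pose direction
n_smile = rng.standard_normal(latent_dim)        # stand-in for a learned smile direction

direction = project_out(n_pose, n_smile)         # edit pose while roughly fixing smile
for alpha in (-3.0, 0.0, 3.0):
    z_edit = z + alpha * direction
    # image = pretrained_generator(z_edit)       # hypothetical generator call
```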
- Pose Manipulation with Identity Preservation [0.0]
We introduce Character Adaptive Identity Normalization GAN (CainGAN) which uses spatial characteristic features extracted by an embedder and combined across source images.
CainGAN receives face images of a given individual and produces new ones while preserving the person's identity.
Experimental results show that the quality of generated images scales with the size of the input set used during inference.
arXiv Detail & Related papers (2020-04-20T09:51:31Z)
- VAE/WGAN-Based Image Representation Learning For Pose-Preserving Seamless Identity Replacement In Facial Images [15.855376604558977]
We present a novel variational generative adversarial network (VGAN) based on Wasserstein loss.
We show that our network can be used to perform pose-preserving identity morphing and identity-preserving pose morphing.
arXiv Detail & Related papers (2020-03-02T03:35:59Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)