DotFAN: A Domain-transferred Face Augmentation Network for Pose and
Illumination Invariant Face Recognition
- URL: http://arxiv.org/abs/2002.09859v1
- Date: Sun, 23 Feb 2020 08:16:34 GMT
- Title: DotFAN: A Domain-transferred Face Augmentation Network for Pose and
Illumination Invariant Face Recognition
- Authors: Hao-Chiang Shao, Kang-Yu Liu, Chia-Wen Lin, Jiwen Lu
- Abstract summary: We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN)
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
- Score: 94.96686189033869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of a convolutional neural network (CNN) based face
recognition model largely relies on the richness of labelled training data.
Collecting a training set with large variations of a face identity under
different poses and illumination changes, however, is very expensive, making
the diversity of within-class face images a critical issue in practice. In this
paper, we propose a 3D model-assisted domain-transferred face augmentation
network (DotFAN) that can generate a series of variants of an input face based
on the knowledge distilled from existing rich face datasets collected from
other domains. DotFAN is structurally a conditional CycleGAN but has two
additional subnetworks, namely face expert network (FEM) and face shape
regressor (FSR), for latent code control. While FSR aims to extract face
attributes, FEM is designed to capture a face identity. With their aid, DotFAN
can learn a disentangled face representation and effectively generate face
images of various facial attributes while preserving the identity of augmented
faces. Experiments show that DotFAN is beneficial for augmenting small face
datasets to improve their within-class diversity so that a better face
recognition model can be learned from the augmented dataset.
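The abstract describes the architecture only at a high level. As a rough illustration, the following PyTorch sketch shows how a DotFAN-style augmentation pass could be wired together, with a stand-in face expert network (FEM) supplying the identity code and a stand-in face shape regressor (FSR) supplying the pose/illumination attribute code for a conditional generator; every module, dimension, and name below is an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch of a DotFAN-style augmentation pass, based only on the abstract:
# a conditional generator driven by an identity code (FEM stand-in) and an
# attribute code (FSR stand-in). Shapes and layers are toy-sized assumptions.
import torch
import torch.nn as nn

class FaceExpertNet(nn.Module):          # stand-in for FEM: extracts an identity embedding
    def __init__(self, id_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, id_dim),
        )
    def forward(self, x):
        return self.backbone(x)

class FaceShapeRegressor(nn.Module):     # stand-in for FSR: extracts pose/illumination attributes
    def __init__(self, attr_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, attr_dim),
        )
    def forward(self, x):
        return self.backbone(x)

class ConditionalGenerator(nn.Module):   # simplified conditional generator (CycleGAN-like role)
    def __init__(self, id_dim=256, attr_dim=32):
        super().__init__()
        self.fc = nn.Linear(id_dim + attr_dim, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, id_code, attr_code):
        z = torch.cat([id_code, attr_code], dim=1)
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.deconv(h)            # 3 x 32 x 32 face image (toy resolution)

fem, fsr, gen = FaceExpertNet(), FaceShapeRegressor(), ConditionalGenerator()
source = torch.randn(4, 3, 32, 32)        # faces from the small target dataset
reference = torch.randn(4, 3, 32, 32)     # faces carrying the desired pose/illumination
augmented = gen(fem(source), fsr(reference))   # identity of `source`, attributes of `reference`
print(augmented.shape)                    # torch.Size([4, 3, 32, 32])
```

In this arrangement the generator only ever sees the identity code of the source face and the attribute code of a reference face, which is one simple way to realize the disentangled, identity-preserving augmentation the abstract describes.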
Related papers
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed $\text{ID}^3$.
$\text{ID}^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances (a generic sketch of such a loss appears after this list).
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z)
- Analyzing the Impact of Shape & Context on the Face Recognition Performance of Deep Networks [2.0099255688059907]
We analyze how changing the underlying 3D shape of the base identity in face images can distort their overall appearance.
Our experiments demonstrate the significance of facial shape in accurate face matching and underpin the importance of contextual data for network training.
arXiv Detail & Related papers (2022-08-05T05:32:07Z)
- GMFIM: A Generative Mask-guided Facial Image Manipulation Model for Privacy Preservation [0.7734726150561088]
We propose a Generative Mask-guided Face Image Manipulation model based on GANs to apply imperceptible editing to the input face image.
Our model can achieve better performance against automated face recognition systems in comparison to the state-of-the-art methods.
arXiv Detail & Related papers (2022-01-10T14:09:14Z)
- FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation decomposing and encoding separately facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, that have successfully been used in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z)
- Network Architecture Search for Face Enhancement [82.25775020564654]
We present a multi-task face restoration network, called Network Architecture Search for Face Enhancement (NASFE).
NASFE can enhance poor-quality face images containing a single degradation (i.e. noise or blur) or multiple degradations (noise + blur + low light).
arXiv Detail & Related papers (2021-05-13T19:46:05Z)
- A 3D GAN for Improved Large-pose Facial Recognition [3.791440300377753]
Facial recognition using deep convolutional neural networks relies on the availability of large datasets of face images.
Recent studies have shown that current methods of disentangling pose from identity are inadequate.
In this work we incorporate a 3D morphable model into the generator of a GAN in order to learn a nonlinear texture model from in-the-wild images.
This allows generation of new, synthetic identities, and manipulation of pose, illumination and expression without compromising the identity.
arXiv Detail & Related papers (2020-12-18T22:41:15Z)
- SuperFront: From Low-resolution to High-resolution Frontal Face Synthesis [65.35922024067551]
We propose a generative adversarial network (GAN)-based model to generate high-quality, identity-preserving frontal faces.
Specifically, we propose SuperFront-GAN to synthesize a high-resolution (HR), frontal face from one-to-many LR faces with various poses.
We integrate a super-resolution side-view module into SF-GAN to preserve identity information and fine details of the side-views in HR space.
arXiv Detail & Related papers (2020-12-07T23:30:28Z)
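Several of the papers above (ID$^3$, FaceDancer, SuperFront) hinge on keeping a generated face's identity close to that of the source face. A common way to express such a constraint is a cosine-similarity loss between embeddings of the source and generated faces produced by a frozen face-recognition network; the sketch below illustrates that generic technique only, and is not the exact loss from any of the listed papers.

```python
# Generic identity-preserving loss sketch: penalize the cosine distance between
# face-recognition embeddings of the source face and the generated face.
# `embed_net` is assumed to be any frozen face-recognition backbone; this is an
# illustration of the general technique, not a specific paper's loss.
import torch
import torch.nn.functional as F

def identity_preserving_loss(embed_net, source_faces, generated_faces):
    with torch.no_grad():
        target_id = embed_net(source_faces)        # embeddings of the real identities
    generated_id = embed_net(generated_faces)      # embeddings of the synthesized faces
    cos = F.cosine_similarity(generated_id, target_id, dim=1)
    return (1.0 - cos).mean()                      # 0 when identities match perfectly

# Example with a toy embedding network standing in for a real recognition model:
embed_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
src = torch.randn(4, 3, 32, 32)
gen = torch.randn(4, 3, 32, 32, requires_grad=True)
loss = identity_preserving_loss(embed_net, src, gen)
loss.backward()                                    # gradients flow back toward the generator output
print(float(loss))
```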