SuperFront: From Low-resolution to High-resolution Frontal Face Synthesis
- URL: http://arxiv.org/abs/2012.04111v1
- Date: Mon, 7 Dec 2020 23:30:28 GMT
- Title: SuperFront: From Low-resolution to High-resolution Frontal Face Synthesis
- Authors: Yu Yin, Joseph P. Robinson, Songyao Jiang, Yue Bai, Can Qin, Yun Fu
- Abstract summary: We propose a generative adversarial network (GAN)-based model to generate high-quality, identity-preserving frontal faces.
Specifically, we propose SuperFront-GAN to synthesize a high-resolution (HR), frontal face from one-to-many LR faces with various poses.
We integrate a super-resolution side-view module into SF-GAN to preserve identity information and fine details of the side-views in HR space.
- Score: 65.35922024067551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in face rotation, along with other face-based generative tasks, have become
more frequent as deep learning progresses. Even as impressive milestones are achieved in face
synthesis, preserving identity remains essential in practice and should not be overlooked.
Moreover, methods should remain robust to data with obscured faces, heavier poses, and lower
quality. Existing methods tend to focus on samples with pose variation, but under the assumption
that the data is of high quality. We propose a generative adversarial network (GAN)-based model to
generate high-quality, identity-preserving frontal faces from one or multiple low-resolution (LR)
faces with extreme poses. Specifically, we propose SuperFront-GAN (SF-GAN) to synthesize a
high-resolution (HR), frontal face from one-to-many LR faces with various poses while preserving
identity. We integrate a super-resolution (SR) side-view module into SF-GAN to preserve identity
information and fine details of the side views in HR space, which helps the model reconstruct the
high-frequency information of faces (i.e., the periocular, nose, and mouth regions). Moreover,
SF-GAN accepts multiple LR faces as input, and performance improves with each added sample. We
squeeze additional gain in performance from an orthogonal constraint in the generator that
penalizes redundant latent representations and, hence, diversifies the learned feature space.
Quantitative and qualitative results demonstrate the superiority of SF-GAN over existing methods.
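The orthogonal constraint mentioned in the abstract lends itself to a short illustration. Its exact formulation is not given here, so the snippet below is only a minimal sketch of one common soft-orthogonality penalty on a batch of latent vectors, ||ZZ^T - I||_F^2 over row-normalized features; the `generator.encode` call, `lr_side_views`, and `lambda_orth` are hypothetical placeholder names, not identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def orthogonality_penalty(z: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality penalty on a batch of latent vectors z of shape (batch, dim).

    Rows are L2-normalized, so the Gram matrix holds pairwise cosine similarities;
    penalizing its distance to the identity pushes different latent codes toward
    mutual orthogonality, i.e., it discourages redundant representations.
    """
    z = F.normalize(z, dim=1)                      # unit-length latent vectors
    gram = z @ z.t()                               # (batch, batch) cosine similarities
    identity = torch.eye(z.size(0), device=z.device)
    return ((gram - identity) ** 2).sum()          # squared Frobenius norm

# Hypothetical usage inside a training step (placeholder names):
# z = generator.encode(lr_side_views)              # latent codes for the LR inputs
# loss = adv_loss + identity_loss + lambda_orth * orthogonality_penalty(z)
```

Under this reading, the penalty is a regularizer added to the generator's objective with a small weight, which matches the abstract's claim that it diversifies the learned feature space.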
Related papers
- CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664] (2024-02-08)
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to the inputs remains a challenging problem.
We introduce a diffusion-based prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
- Generalized Face Liveness Detection via De-spoofing Face Generator [58.7043386978171] (2024-01-17)
Previous Face Anti-spoofing (FAS) works face the challenge of generalizing to unseen domains.
We propose an Anomalous cue Guided FAS (AG-FAS) method, which leverages real faces to improve model generalization via a De-spoofing Face Generator (DFG).
We then propose an Anomalous cue Guided FAS feature extraction Network (AG-Net) to further improve FAS feature generalization via a cross-attention transformer.
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771] (2022-10-19)
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
- Joint Face Image Restoration and Frontalization for Recognition [79.78729632975744] (2021-05-12)
In real-world scenarios, many factors may harm face recognition performance, e.g., large pose, bad illumination, low resolution, blur, and noise.
Previous efforts usually first restore the low-quality faces to high-quality ones and then perform face recognition.
We propose a Multi-Degradation Face Restoration model to restore frontalized high-quality faces from the given low-quality ones.
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869] (2020-02-23)
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
This list is automatically generated from the titles and abstracts of the papers on this site.