Dual-Attention GAN for Large-Pose Face Frontalization
- URL: http://arxiv.org/abs/2002.07227v1
- Date: Mon, 17 Feb 2020 20:00:56 GMT
- Title: Dual-Attention GAN for Large-Pose Face Frontalization
- Authors: Yu Yin and Songyao Jiang and Joseph P. Robinson and Yun Fu
- Abstract summary: We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
- Score: 59.689836951934694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face frontalization provides an effective and efficient way for face data
augmentation and further improves face recognition performance in extreme
pose scenarios. Despite recent advances in deep learning-based face synthesis
approaches, this problem is still challenging due to significant pose and
illumination discrepancy. In this paper, we present a novel Dual-Attention
Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization
by capturing both contextual dependencies and local consistency during GAN
training. Specifically, a self-attention-based generator is introduced to
integrate local features with their long-range dependencies yielding better
feature representations, and hence generate faces that preserve identities
better, especially for larger pose angles. Moreover, a novel
face-attention-based discriminator is applied to emphasize local features of
face regions, and hence reinforce the realism of synthetic frontal faces.
Guided by semantic segmentation, four independent discriminators are used to
distinguish between different aspects of a face (i.e., skin, keypoints, hairline,
and frontalized face). By introducing these two complementary attention
mechanisms in generator and discriminator separately, we can learn a richer
feature representation and generate identity preserving inference of frontal
views with much finer details (i.e., more accurate facial appearance and
textures) compared to the state-of-the-art. Quantitative and qualitative
experimental results demonstrate the effectiveness and efficiency of our DA-GAN
approach.
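The generator's self-attention mechanism described above follows the general pattern of attention over convolutional feature maps, in which every spatial position attends to every other position. The following is a minimal PyTorch sketch of that pattern (in the style of SAGAN-like self-attention), not the paper's exact architecture; the channel reduction factor, layer names, and the learnable gate `gamma` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Sketch of self-attention over feature maps: integrates each local
    feature with its long-range dependencies across the whole map.
    Hypothetical configuration, not DA-GAN's published layer."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces;
        # the //8 reduction is a common choice, assumed here.
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        # Learnable gate initialized to zero: the layer starts as an
        # identity mapping and gradually mixes in attended features.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, h*w)
        attn = F.softmax(q @ k, dim=-1)                # (b, h*w, h*w)
        v = self.value(x).flatten(2)                   # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # gated residual
```

Because `gamma` starts at zero, inserting such a layer into an existing generator does not change its initial behavior; attention is learned only as far as it helps the adversarial objective.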
Related papers
- LAFS: Landmark-based Facial Self-supervised Learning for Face
Recognition [37.4550614524874]
We focus on learning facial representations that can be adapted to train effective face recognition models.
We explore the learning strategy of unlabeled facial images through self-supervised pretraining.
Our method achieves significant improvement over the state-of-the-art on multiple face recognition benchmarks.
arXiv Detail & Related papers (2024-03-13T01:07:55Z)
- Kinship Representation Learning with Face Componential Relation [19.175823975322356]
Kinship recognition aims to determine whether the subjects in two facial images are kin or non-kin.
Most previous methods focus on designs without considering the spatial correlation between face images.
We propose the Face Componential Relation Network, which learns the relationship between face components among images with a cross-attention mechanism.
The proposed FaCoRNet outperforms previous state-of-the-art methods by large margins for the largest public kinship recognition FIW benchmark.
arXiv Detail & Related papers (2023-04-10T12:37:26Z)
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory AdaptatiOn (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Heterogeneous Face Frontalization via Domain Agnostic Learning [74.86585699909459]
We propose a domain agnostic learning-based generative adversarial network (DAL-GAN) which can synthesize frontal views in the visible domain from thermal faces with pose variations.
DAL-GAN consists of a generator with an auxiliary classifier and two discriminators which capture both local and global texture discriminations for better synthesis.
arXiv Detail & Related papers (2021-07-17T20:41:41Z)
- Learning Oracle Attention for High-fidelity Face Completion [121.72704525675047]
We design a comprehensive framework for face completion based on the U-Net structure.
We propose a dual spatial attention module to efficiently learn the correlations between facial textures at multiple scales.
We take the location of the facial components as prior knowledge and impose a multi-discriminator on these regions.
arXiv Detail & Related papers (2020-03-31T01:37:10Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Domain Embedded Multi-model Generative Adversarial Networks for Image-based Face Inpainting [44.598234654270584]
We present a domain embedded multi-model generative adversarial model for inpainting of face images with large cropped regions.
Experiments on both the CelebA and CelebA-HQ face datasets demonstrate that our proposed approach achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-02-05T17:36:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.