One-shot Face Reenactment Using Appearance Adaptive Normalization
- URL: http://arxiv.org/abs/2102.03984v1
- Date: Mon, 8 Feb 2021 03:36:30 GMT
- Title: One-shot Face Reenactment Using Appearance Adaptive Normalization
- Authors: Guangming Yao, Yi Yuan, Tianjia Shao, Shuang Li, Shanqi Liu, Yong Liu,
Mengmeng Wang, Kun Zhou
- Abstract summary: The paper proposes a novel generative adversarial network for one-shot face reenactment.
It can animate a single face image to a different pose-and-expression while keeping its original appearance.
- Score: 30.615671641713945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper proposes a novel generative adversarial network for one-shot face
reenactment, which can animate a single face image to a different
pose-and-expression (provided by a driving image) while keeping its original
appearance. The core of our network is a novel mechanism called appearance
adaptive normalization, which can effectively integrate the appearance
information from the input image into our face generator by modulating the
feature maps of the generator using the learned adaptive parameters.
Furthermore, we specially design a local net to reenact the local facial
components (i.e., eyes, nose and mouth) first, which is a much easier task for
the network to learn and can in turn provide explicit anchors to guide our face
generator to learn the global appearance and pose-and-expression. Extensive
quantitative and qualitative experiments demonstrate the significant efficacy
of our model compared with prior one-shot methods.
Related papers
- One-shot Neural Face Reenactment via Finding Directions in GAN's Latent Space [37.357842761713705]
We present a framework for neural face/head reenactment whose goal is to transfer the 3D head orientation and expression of a target face to a source face.
Our method features several favorable properties including using a single source image (one-shot) and enabling cross-person reenactment.
arXiv Detail & Related papers (2024-02-05T22:12:42Z)
- Optimal-Landmark-Guided Image Blending for Face Morphing Attacks [8.024953195407502]
We propose a novel approach for conducting face morphing attacks, which utilizes optimal-landmark-guided image blending.
Our proposed method overcomes the limitations of previous approaches by optimizing the morphing landmarks and using Graph Convolutional Networks (GCNs) to combine landmark and appearance features.
arXiv Detail & Related papers (2024-01-30T03:45:06Z)
- Appearance Debiased Gaze Estimation via Stochastic Subject-Wise Adversarial Learning [33.55397868171977]
Appearance-based gaze estimation has been attracting attention in computer vision, and remarkable improvements have been achieved using various deep learning techniques.
We propose a novel framework: subject-wise gaZE learning (SAZE), which trains a network to generalize the appearance of subjects.
Our experimental results verify the robustness of the method in that it yields state-of-the-art performance, achieving angular errors of 3.89 and 4.42 degrees on the MPIIGaze and EyeDiap datasets, respectively.
arXiv Detail & Related papers (2024-01-25T00:23:21Z)
- Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, whether training models directly on degraded images or on their enhanced counterparts produced by face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z)
- DreamIdentity: Improved Editability for Efficient Face-identity Preserved Image Generation [69.16517915592063]
We propose a novel face-identity encoder to learn an accurate representation of human faces.
We also propose self-augmented editability learning to enhance the editability of models.
Our methods can generate identity-preserved images under different scenes at a much faster speed.
arXiv Detail & Related papers (2023-07-01T11:01:17Z)
- Finding Directions in GAN's Latent Space for Neural Face Reenactment [45.67273942952348]
This paper is on face/head reenactment where the goal is to transfer the facial pose (3D head orientation and expression) of a target face to a source face.
We take a different approach, bypassing the training of such networks, by using (fine-tuned) pre-trained GANs.
We show that by embedding real images in the GAN latent space, our method can be successfully used for the reenactment of real-world faces.
arXiv Detail & Related papers (2022-01-31T19:14:03Z)
- TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network [72.41798177302175]
We propose a novel paradigm based on the self-attention mechanism (i.e., the core of Transformer) to fully explore the representation capacity of the facial structure feature.
Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths, in which one path uses CNNs responsible for restoring fine-grained facial details.
By aggregating the features from the above two paths, the consistency of global facial structure and fidelity of local facial detail restoration are strengthened simultaneously.
arXiv Detail & Related papers (2021-09-16T18:15:07Z)
- S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method successfully outperforms state-of-the-art methods on attribute manipulation by exploiting greater control of attribute intensity.
arXiv Detail & Related papers (2020-11-30T13:42:39Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN)
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.