Facial Attribute Transformers for Precise and Robust Makeup Transfer
- URL: http://arxiv.org/abs/2104.02894v1
- Date: Wed, 7 Apr 2021 03:39:02 GMT
- Title: Facial Attribute Transformers for Precise and Robust Makeup Transfer
- Authors: Zhaoyi Wan, Haoran Chen, Jielei Zhang, Wentao Jiang, Cong Yao, Jiebo Luo
- Abstract summary: We propose a novel Facial Attribute Transformer (FAT) and its variant Spatial FAT for high-quality makeup transfer.
FAT is able to model the semantic correspondences and interactions between the source face and reference face, and then precisely estimate and transfer the facial attributes.
We also integrate thin plate splines (TPS) into FAT, thus creating Spatial FAT, which is the first method that can transfer geometric attributes in addition to color and texture.
- Score: 79.41060385695977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we address the problem of makeup transfer, which aims at
transplanting the makeup from the reference face to the source face while
preserving the identity of the source. Existing makeup transfer methods have
made notable progress in generating realistic makeup faces, but fall short in
color fidelity and spatial transformation. To tackle these
issues, we propose a novel Facial Attribute Transformer (FAT) and its variant
Spatial FAT for high-quality makeup transfer. Drawing inspiration from the
Transformer in NLP, FAT is able to model the semantic correspondences and
interactions between the source face and reference face, and then precisely
estimate and transfer the facial attributes. To further facilitate shape
deformation and transformation of facial parts, we also integrate thin plate
splines (TPS) into FAT, thus creating Spatial FAT, which is the first method
that can transfer geometric attributes in addition to color and texture.
Extensive qualitative and quantitative experiments demonstrate the
effectiveness and superiority of our proposed FATs in the following aspects:
(1) ensuring high-fidelity color transfer; (2) allowing for geometric
transformation of facial parts; (3) handling facial variations (such as poses
and shadows); and (4) supporting high-resolution face generation.
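To make the two core ideas concrete, here is a minimal PyTorch sketch of cross-attention between source and reference face features, in the spirit of the Transformer-based correspondence modeling the abstract describes. The module name `SourceRefAttention` and all shapes are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of cross-attention between source and reference faces,
# assuming FAT follows the standard Transformer attention formulation
# (the paper's exact architecture may differ).
import torch
import torch.nn as nn

class SourceRefAttention(nn.Module):
    """Hypothetical single-head cross-attention: source tokens attend to
    reference tokens to gather per-location makeup attributes."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # queries from the source face
        self.k = nn.Linear(dim, dim)   # keys from the reference face
        self.v = nn.Linear(dim, dim)   # values carry makeup attributes
        self.scale = dim ** -0.5

    def forward(self, src, ref):
        # src: (B, N, C) flattened source features; ref: (B, M, C)
        attn = (self.q(src) @ self.k(ref).transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)    # semantic correspondence weights
        return attn @ self.v(ref)      # attributes mapped onto the source

# Usage: 64x64 feature maps flattened to token sequences.
src = torch.randn(1, 64 * 64, 256)
ref = torch.randn(1, 64 * 64, 256)
out = SourceRefAttention(256)(src, ref)  # (1, 4096, 256)
```

Likewise, the geometric component can be illustrated with a generic textbook thin-plate-spline warp, the classical formulation that Spatial FAT reportedly integrates; `tps_warp` below is a hypothetical helper, not the paper's module.

```python
# A minimal sketch of a thin-plate-spline (TPS) image warp driven by
# control points, e.g. facial landmarks. Generic textbook TPS, not the
# authors' exact formulation.
import torch
import torch.nn.functional as F

def tps_radial(d2):
    # U(r) = r^2 * log(r^2); small epsilon guards against log(0)
    return d2 * torch.log(d2 + 1e-9)

def tps_warp(img, src_pts, dst_pts, out_hw):
    """Warp img (1,C,H,W) so content at src_pts appears at dst_pts.
    src_pts, dst_pts: (K,2) in normalized [-1,1] coordinates."""
    K = src_pts.shape[0]
    # Solve the TPS linear system [K_mat P; P^T 0] [w; a] = [src; 0]
    d2 = ((dst_pts[:, None] - dst_pts[None]) ** 2).sum(-1)   # (K,K)
    K_mat = tps_radial(d2)
    P = torch.cat([torch.ones(K, 1), dst_pts], 1)            # (K,3)
    L = torch.zeros(K + 3, K + 3)
    L[:K, :K], L[:K, K:], L[K:, :K] = K_mat, P, P.t()
    rhs = torch.cat([src_pts, torch.zeros(3, 2)], 0)         # (K+3,2)
    params = torch.linalg.solve(L, rhs)                      # (K+3,2)
    # Evaluate the mapping on a dense output grid.
    H, W = out_hw
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack([xs, ys], -1).reshape(-1, 2)          # (HW,2)
    d2g = ((grid[:, None] - dst_pts[None]) ** 2).sum(-1)     # (HW,K)
    basis = torch.cat([tps_radial(d2g),
                       torch.ones(H * W, 1), grid], 1)       # (HW,K+3)
    mapped = (basis @ params).reshape(1, H, W, 2)            # sample coords
    return F.grid_sample(img, mapped, align_corners=True)

# Usage: nudge 5 random control points and warp a random image.
img = torch.randn(1, 3, 128, 128)
pts_src = torch.rand(5, 2) * 2 - 1
pts_dst = pts_src + 0.05 * torch.randn(5, 2)
warped = tps_warp(img, pts_src, pts_dst, (128, 128))  # (1,3,128,128)
```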
Related papers
- DiffFAE: Advancing High-fidelity One-shot Facial Appearance Editing with Space-sensitive Customization and Semantic Preservation [84.0586749616249]
This paper presents DiffFAE, a one-stage and highly-efficient diffusion-based framework tailored for high-fidelity Facial Appearance Editing.
For high-fidelity transfer of query attributes, we adopt Space-sensitive Physical Customization (SPC), which ensures fidelity and generalization ability.
To preserve source attributes, we introduce the Region-responsive Semantic Composition (RSC).
This module is guided to learn decoupled source-regarding features, thereby better preserving identity and alleviating artifacts from non-facial attributes such as hair, clothes, and background.
arXiv Detail & Related papers (2024-03-26T12:53:10Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative for face morphing and demonstrate its superiority over StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- FaceFormer: Scale-aware Blind Face Restoration with Transformers [18.514630131883536]
We propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as scale-aware transformation.
Our method, trained on a synthetic dataset, generalizes better to natural low-quality images than the current state of the art.
arXiv Detail & Related papers (2022-07-20T10:08:34Z)
- FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation decomposing and encoding separately facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, which have been used successfully in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z)
- Disentangled Lifespan Face Synthesis [100.29058545878341]
A lifespan face synthesis (LFS) model aims to generate a set of photo-realistic face images of a person's whole life, given only one snapshot as reference.
The generated face image for a given target age code is expected to be age-sensitive, as reflected in biologically plausible transformations of shape and texture.
This is achieved by extracting shape, texture and identity features separately from an encoder.
arXiv Detail & Related papers (2021-08-05T22:33:14Z)
- Towards Real-World Blind Face Restoration with Generative Facial Prior [19.080349401153097]
Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details.
We propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration.
Our method achieves superior performance to prior art on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-01-11T17:54:38Z)
- Transforming Facial Weight of Real Images by Editing Latent Space of StyleGAN [9.097538101642192]
We present an invert-and-edit framework that transforms the facial weight of an input face image to look thinner or heavier by leveraging semantic facial attributes encoded in the latent space of Generative Adversarial Networks (GANs).
Our framework is empirically shown to produce high-quality, realistic facial-weight transformations without training GANs from scratch on large amounts of labeled face images.
Our framework can be utilized as part of an intervention to motivate individuals to make healthier food choices by visualizing the future impacts of their behavior on appearance.
arXiv Detail & Related papers (2020-11-05T01:45:18Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)