SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer
- URL: http://arxiv.org/abs/2104.10567v1
- Date: Wed, 21 Apr 2021 14:48:49 GMT
- Title: SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer
- Authors: Yueming Lyu, Jing Dong, Bo Peng, Wei Wang, Tieniu Tan
- Abstract summary: We propose a novel makeup transfer method called 3D-Aware Shadow and Occlusion Robust GAN (SOGAN).
We first fit a 3D face model and then disentangle the faces into shape and texture.
In the texture branch, we map the texture to the UV space and design a UV texture generator to transfer the makeup.
- Score: 68.38955698584758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, virtual makeup applications have become more and more
popular. However, it remains challenging to build a makeup transfer method that is
robust in real-world conditions. Current makeup transfer methods mostly work well
on clean, well-conditioned makeup images, but their results are unsatisfactory when
the makeup exhibits shadow or occlusion. To alleviate this, we propose a
novel makeup transfer method, called 3D-Aware Shadow and Occlusion Robust GAN
(SOGAN). Given the source and the reference faces, we first fit a 3D face model
and then disentangle the faces into shape and texture. In the texture branch,
we map the texture to the UV space and design a UV texture generator to
transfer the makeup. Since human faces are symmetrical in the UV space, we can
conveniently remove the undesired shadow and occlusion from the reference image
by carefully designing a Flip Attention Module (FAM). After obtaining cleaner
makeup features from the reference image, a Makeup Transfer Module (MTM) is
introduced to perform accurate makeup transfer. The qualitative and
quantitative experiments demonstrate that our SOGAN not only achieves superior
results in shadow and occlusion situations but also performs well under large pose
and expression variations.
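To make the flip-and-attend idea concrete, below is a minimal PyTorch sketch of how a FAM-style fusion over UV-space features could look. The module name, channel counts, and single-attention-map design are illustrative assumptions rather than the authors' implementation; the sketch only captures the core trick of blending a reference UV texture with its horizontal mirror using learned per-pixel weights, so that shadowed or occluded regions borrow information from the cleaner symmetric side.

```python
import torch
import torch.nn as nn


class FlipAttentionSketch(nn.Module):
    """Illustrative flip-attention fusion over UV-space features.

    Assumes the reference face has already been unwrapped into UV space,
    where it is roughly left-right symmetric, so a horizontal flip offers a
    plausible "clean" counterpart for shadowed or occluded pixels.
    (Hypothetical module, not the SOGAN implementation.)
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict a per-pixel blending weight from the original and flipped features.
        self.attention = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, ref_uv_feat: torch.Tensor) -> torch.Tensor:
        # Input is (B, C, H, W); flip along the width axis of the UV map.
        flipped = torch.flip(ref_uv_feat, dims=[3])
        # The attention map decides, per pixel, how much to trust the original side.
        alpha = self.attention(torch.cat([ref_uv_feat, flipped], dim=1))
        # Blend: pixels judged unreliable borrow from the mirrored side instead.
        return alpha * ref_uv_feat + (1.0 - alpha) * flipped


if __name__ == "__main__":
    # Toy usage: a batch of 64-channel reference features on a 256x256 UV grid.
    fam = FlipAttentionSketch(channels=64)
    ref = torch.randn(2, 64, 256, 256)
    clean = fam(ref)
    print(clean.shape)  # torch.Size([2, 64, 256, 256])
```

In the pipeline described above, such cleaned-up reference features would then feed the Makeup Transfer Module, which applies the extracted makeup to the source face's UV texture.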
Related papers
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, resulting in a more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
- Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition [4.726777092009553]
This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait.
We exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials.
Our method offers various applications for not only 3D facial models but also 2D portrait images.
arXiv Detail & Related papers (2023-02-26T09:48:57Z)
- BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction [2.741266294612776]
We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from the face image.
By incorporating 3D face reconstruction into the pipeline, we can easily obtain 3D geometry and coarse 3D textures.
In experiments, we show that BareSkinNet outperforms state-of-the-art makeup removal methods.
arXiv Detail & Related papers (2022-09-19T14:02:03Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information from with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z)
- Facial Attribute Transformers for Precise and Robust Makeup Transfer [79.41060385695977]
We propose a novel Facial Attribute Transformer (FAT) and its variant Spatial FAT for high-quality makeup transfer.
FAT is able to model the semantic correspondences and interactions between the source face and reference face, and then precisely estimate and transfer the facial attributes.
We also integrate thin plate splines (TPS) into FAT, thus creating Spatial FAT, which is the first method that can transfer geometric attributes in addition to color and texture.
arXiv Detail & Related papers (2021-04-07T03:39:02Z)
- Lipstick ain't enough: Beyond Color Matching for In-the-Wild Makeup Transfer [20.782984081934213]
We propose a holistic makeup transfer framework that can handle all the mentioned makeup components.
It consists of an improved color transfer branch and a novel pattern transfer branch to learn all makeup properties.
Our framework achieves state-of-the-art performance on both light and extreme makeup styles.
arXiv Detail & Related papers (2021-04-05T12:12:56Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- MakeupBag: Disentangling Makeup Extraction and Application [0.0]
MakeupBag is a novel method for automatic makeup style transfer.
It allows customization and pixel specific modification of the extracted makeup style.
In a comparative analysis, MakeupBag is shown to outperform current state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-03T18:44:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.