Makeup Extraction of 3D Representation via Illumination-Aware Image
Decomposition
- URL: http://arxiv.org/abs/2302.13279v1
- Date: Sun, 26 Feb 2023 09:48:57 GMT
- Title: Makeup Extraction of 3D Representation via Illumination-Aware Image
Decomposition
- Authors: Xingchao Yang, Takafumi Taketomi, Yoshihiro Kanamori
- Abstract summary: This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait.
We exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials.
Our method offers various applications for not only 3D facial models but also 2D portrait images.
- Score: 4.726777092009553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial makeup enriches the beauty of not only real humans but also virtual
characters; therefore, makeup for 3D facial models is highly in demand in
productions. However, painting directly on 3D faces and capturing real-world
makeup are costly, and extracting makeup from 2D images often struggles with
shading effects and occlusions. This paper presents the first method for
extracting makeup for 3D facial models from a single makeup portrait. Our
method consists of the following three steps. First, we exploit the strong
prior of 3D morphable models via regression-based inverse rendering to extract
coarse materials such as geometry and diffuse/specular albedos that are
represented in the UV space. Second, we refine the coarse materials, which may
have missing pixels due to occlusions, by applying inpainting and optimization.
Finally, we extract the bare skin, makeup, and an alpha matte from the diffuse
albedo. Our method offers various applications for not only 3D facial models
but also 2D portrait images. The extracted makeup is well-aligned in the UV
space, from which we build a large-scale makeup dataset and a parametric makeup
model for 3D faces. Our disentangled materials also yield robust makeup
transfer and illumination-aware makeup interpolation/removal without a
reference image.
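The final decomposition step can be illustrated with a minimal per-pixel alpha-compositing sketch. This is an illustrative assumption, not the paper's actual optimization: the function names and toy UV maps below are made up, but they follow the standard compositing model implied by the abstract, where the diffuse albedo A is recovered as A = (1 - α)·B + α·M for bare skin B, makeup M, and alpha matte α, which directly enables makeup interpolation and removal by rescaling α.

```python
import numpy as np

def composite(bare_skin, makeup, alpha):
    """Recompose the diffuse albedo: A = (1 - alpha) * B + alpha * M."""
    return (1.0 - alpha) * bare_skin + alpha * makeup

def adjust_makeup(bare_skin, makeup, alpha, strength):
    """Reference-free editing: strength=0 removes makeup, 1 keeps it,
    intermediate values interpolate by scaling the alpha matte."""
    return composite(bare_skin, makeup, np.clip(strength * alpha, 0.0, 1.0))

# Toy 2x2 single-channel UV maps (real UV maps would be H x W x 3).
B = np.full((2, 2), 0.8)  # bare-skin albedo
M = np.full((2, 2), 0.4)  # makeup color
a = np.full((2, 2), 0.5)  # alpha matte

A = composite(B, M, a)                 # 0.5 * 0.8 + 0.5 * 0.4 = 0.6
removed = adjust_makeup(B, M, a, 0.0)  # alpha scaled to 0: bare skin only
```

Because all materials live in a shared, well-aligned UV space, the same scaling applies uniformly across faces, which is what makes the parametric makeup model and reference-free removal possible.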
Related papers
- FitDiff: Robust monocular 3D facial shape and reflectance estimation using Diffusion Models [79.65289816077629]
We present FitDiff, a diffusion-based 3D facial avatar generative model.
Our model accurately generates relightable facial avatars, utilizing an identity embedding extracted from an "in-the-wild" 2D facial image.
Being the first 3D LDM conditioned on face recognition embeddings, FitDiff reconstructs relightable human avatars that can be used as-is in common rendering engines.
arXiv Detail & Related papers (2023-12-07T17:35:49Z)
- FaceLit: Neural 3D Relightable Faces [28.0806453092185]
FaceLit is capable of generating a 3D face that can be rendered at various user-defined lighting conditions and views.
We show state-of-the-art photorealism among 3D aware GANs on FFHQ dataset achieving an FID score of 3.5.
arXiv Detail & Related papers (2023-03-27T17:59:10Z)
- BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction [2.741266294612776]
We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from the face image.
By combining the process of 3D face reconstruction, we can easily obtain 3D geometry and coarse 3D textures.
In experiments, we show that BareSkinNet outperforms state-of-the-art makeup removal methods.
arXiv Detail & Related papers (2022-09-19T14:02:03Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- 3DFaceFill: An Analysis-By-Synthesis Approach to Face Completion [2.0305676256390934]
3DFaceFill is an analysis-by-synthesis approach for face completion that explicitly considers the image formation process.
It comprises three components, (1) an encoder that disentangles the face into its constituent 3D mesh, 3D pose, illumination and albedo factors, (2) an autoencoder that inpaints the UV representation of facial albedo, and (3) an autoencoder that resynthesizes the completed face.
arXiv Detail & Related papers (2021-10-20T06:31:47Z)
- PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information of with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z)
- SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer [68.38955698584758]
We propose a novel makeup transfer method called 3D-Aware Shadow and Occlusion Robust GAN (SOGAN).
We first fit a 3D face model and then disentangle the faces into shape and texture.
In the texture branch, we map the texture to the UV space and design a UV texture generator to transfer the makeup.
arXiv Detail & Related papers (2021-04-21T14:48:49Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms existing methods by a significant margin and reconstructs authentic 3D faces at 4K-by-6K resolution from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.