AvatarMakeup: Realistic Makeup Transfer for 3D Animatable Head Avatars
- URL: http://arxiv.org/abs/2507.02419v2
- Date: Mon, 07 Jul 2025 11:53:32 GMT
- Title: AvatarMakeup: Realistic Makeup Transfer for 3D Animatable Head Avatars
- Authors: Yiming Zhong, Xiaolin Zhang, Ligang Liu, Yao Zhao, Yunchao Wei
- Abstract summary: Coherent Duplication optimizes a global UV map by recording the averaged facial attributes among the generated makeup images. Experiments demonstrate that AvatarMakeup achieves state-of-the-art makeup transfer quality and consistency throughout animation.
- Score: 89.31582684550723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Similar to facial beautification in real life, 3D virtual avatars require personalized customization to enhance their visual appeal, yet this area remains insufficiently explored. Although current 3D Gaussian editing methods can be adapted for facial makeup purposes, these methods fail to meet the fundamental requirements for achieving realistic makeup effects: 1) ensuring a consistent appearance during drivable expressions, 2) preserving the identity throughout the makeup process, and 3) enabling precise control over fine details. To address these requirements, we propose a specialized 3D makeup method named AvatarMakeup, leveraging a pretrained diffusion model to transfer makeup patterns from a single reference photo of any individual. We adopt a coarse-to-fine strategy to first maintain the consistent appearance and identity, and then to refine the details. In particular, the diffusion model is employed to generate makeup images as supervision. Due to the uncertainties in the diffusion process, the generated images are inconsistent across different viewpoints and expressions. Therefore, we propose a Coherent Duplication method to coarsely apply makeup to the target while ensuring consistency across dynamic and multiview effects. Coherent Duplication optimizes a global UV map by recording the averaged facial attributes among the generated makeup images. By querying the global UV map, it easily synthesizes coherent makeup guidance from arbitrary views and expressions to optimize the target avatar. Given the coarse makeup avatar, we further enhance the makeup by incorporating a Refinement Module into the diffusion model to achieve high makeup quality. Experiments demonstrate that AvatarMakeup achieves state-of-the-art makeup transfer quality and consistency throughout animation.
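Read literally, the Coherent Duplication step amounts to scatter-averaging per-pixel facial attributes from many generated makeup images into one shared UV texture, then sampling that texture to produce consistent guidance for any new view or expression. The following is a minimal sketch of that idea only, assuming nearest-texel splatting and per-pixel UV coordinates supplied by the renderer; the function names and sampling scheme are illustrative, not the authors' implementation:

```python
import numpy as np

def accumulate_uv_map(images, uv_coords, uv_size=256):
    """Average per-pixel colors from multiple makeup renderings into a shared UV map.

    images    : list of (H, W, 3) arrays, the generated makeup images.
    uv_coords : list of (H, W, 2) arrays of UV coordinates in [0, 1) per pixel.
    """
    acc = np.zeros((uv_size, uv_size, 3), dtype=np.float64)
    count = np.zeros((uv_size, uv_size, 1), dtype=np.float64)
    for img, uv in zip(images, uv_coords):
        # Nearest-texel assignment: map continuous UVs to integer texel indices.
        u = np.clip((uv[..., 0] * uv_size).astype(int), 0, uv_size - 1)
        v = np.clip((uv[..., 1] * uv_size).astype(int), 0, uv_size - 1)
        # Unbuffered scatter-add so repeated texel hits all accumulate.
        np.add.at(acc, (v, u), img)
        np.add.at(count, (v, u), 1.0)
    # Average; untouched texels keep zero instead of dividing by zero.
    return acc / np.maximum(count, 1.0)

def query_uv_map(uv_map, uv):
    """Sample the global UV map to get coherent makeup guidance for a new view."""
    size = uv_map.shape[0]
    u = np.clip((uv[..., 0] * size).astype(int), 0, size - 1)
    v = np.clip((uv[..., 1] * size).astype(int), 0, size - 1)
    return uv_map[v, u]
```

Because every view writes into and reads from the same texture, texels seen from several viewpoints converge to their average color, which is what suppresses the view-to-view inconsistency of the diffusion outputs.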
Related papers
- FFHQ-Makeup: Paired Synthetic Makeup Dataset with Facial Consistency Across Multiple Styles [1.4680035572775534]
We present FFHQ-Makeup, a high-quality synthetic makeup dataset that pairs each identity with multiple makeup styles. To the best of our knowledge, this is the first work that focuses specifically on constructing a makeup dataset.
arXiv Detail & Related papers (2025-08-05T09:16:43Z) - BeautyBank: Encoding Facial Makeup in Latent Space [2.113770213797994]
We propose BeautyBank, a novel makeup encoder that disentangles pattern features of bare and makeup faces.
Our method encodes makeup features into a high-dimensional space, preserving essential details necessary for makeup reconstruction.
We also propose a Progressive Makeup Tuning (PMT) strategy, specifically designed to enhance the preservation of detailed makeup features.
arXiv Detail & Related papers (2024-11-18T01:52:31Z) - Gorgeous: Create Your Desired Character Facial Makeup from Any Ideas [9.604390113485834]
$Gorgeous$ is a novel diffusion-based makeup application method.
It does not require the presence of a face in the reference images.
$Gorgeous$ can effectively generate distinctive character facial makeup inspired by the chosen thematic reference images.
arXiv Detail & Related papers (2024-04-22T07:40:53Z) - Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model [15.380297080210559]
Current makeup transfer methods are limited to simple makeup styles, making them difficult to apply in real-world scenarios. We introduce Stable-Makeup, a novel diffusion-based makeup transfer method capable of robustly transferring a wide range of real-world makeup.
arXiv Detail & Related papers (2024-03-12T15:53:14Z) - FitMe: Deep Photorealistic 3D Morphable Model Avatars [119.03325450951074]
We introduce FitMe, a facial reflectance model and a differentiable rendering pipeline.
FitMe achieves state-of-the-art reflectance acquisition and identity preservation on single "in-the-wild" facial images.
In contrast with recent implicit avatar reconstructions, FitMe requires only one minute and produces relightable mesh and texture-based avatars.
arXiv Detail & Related papers (2023-05-16T17:42:45Z) - Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z) - Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition [4.726777092009553]
This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait.
We exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials.
Our method offers various applications for not only 3D facial models but also 2D portrait images.
arXiv Detail & Related papers (2023-02-26T09:48:57Z) - DRAN: Detailed Region-Adaptive Normalization for Conditional Image Synthesis [25.936764522125703]
We propose a novel normalization module, named Detailed Region-Adaptive Normalization (DRAN).
It adaptively learns both fine-grained and coarse-grained style representations.
We collect a new makeup dataset (Makeup-Complex dataset) that contains a wide range of complex makeup styles.
arXiv Detail & Related papers (2021-09-29T16:19:37Z) - PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information of with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z) - SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer [68.38955698584758]
We propose a novel makeup transfer method called 3D-Aware Shadow and Occlusion Robust GAN (SOGAN).
We first fit a 3D face model and then disentangle the faces into shape and texture.
In the texture branch, we map the texture to the UV space and design a UV texture generator to transfer the makeup.
arXiv Detail & Related papers (2021-04-21T14:48:49Z) - Cosmetic-Aware Makeup Cleanser [109.41917954315784]
Face verification aims at determining whether a pair of face images belongs to the same identity.
Recent studies have revealed the negative impact of facial makeup on the verification performance.
This paper proposes a semantic-aware makeup cleanser (SAMC) to remove facial makeup under different poses and expressions.
arXiv Detail & Related papers (2020-04-20T09:18:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.