EvoMakeup: High-Fidelity and Controllable Makeup Editing with MakeupQuad
- URL: http://arxiv.org/abs/2508.05994v1
- Date: Fri, 08 Aug 2025 04:00:45 GMT
- Title: EvoMakeup: High-Fidelity and Controllable Makeup Editing with MakeupQuad
- Authors: Huadong Wu, Yi Fu, Yunhao Li, Yuan Gao, Kang Du
- Abstract summary: We introduce MakeupQuad, a large-scale, high-quality dataset with non-makeup faces, references, edited results, and textual makeup descriptions. We propose EvoMakeup, a unified training framework that mitigates image degradation during multi-stage distillation. Although trained solely on synthetic data, EvoMakeup generalizes well and outperforms prior methods on real-world benchmarks.
- Score: 7.6522194221211235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial makeup editing aims to realistically transfer makeup from a reference to a target face. Existing methods often produce low-quality results with coarse makeup details and struggle to preserve both identity and makeup fidelity, mainly due to the lack of structured paired data -- where source and result share identity, and reference and result share identical makeup. To address this, we introduce MakeupQuad, a large-scale, high-quality dataset with non-makeup faces, references, edited results, and textual makeup descriptions. Building on this, we propose EvoMakeup, a unified training framework that mitigates image degradation during multi-stage distillation, enabling iterative improvement of both data and model quality. Although trained solely on synthetic data, EvoMakeup generalizes well and outperforms prior methods on real-world benchmarks. It supports high-fidelity, controllable, multi-task makeup editing -- including full-face and partial reference-based editing, as well as text-driven makeup editing -- within a single model. Experimental results demonstrate that our method achieves superior makeup fidelity and identity preservation, effectively balancing both aspects. Code and dataset will be released upon acceptance.
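The abstract's core data requirement is a structured quadruple: a non-makeup source, a makeup reference, an edited result, and a text description, where source and result share identity while reference and result share makeup. A minimal sketch of one such sample follows; the field names and paths are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MakeupQuadSample:
    """One hypothetical MakeupQuad training quadruple."""
    source_path: str      # non-makeup face; shares identity with the result
    reference_path: str   # makeup reference; shares makeup with the result
    result_path: str      # edited result combining the two constraints above
    description: str      # textual makeup description for text-driven editing

sample = MakeupQuadSample(
    source_path="faces/0001_bare.png",
    reference_path="refs/0420_ref.png",
    result_path="edits/0001_0420.png",
    description="smoky eye shadow with matte red lipstick",
)
```

Structuring each sample this way makes both supervision signals explicit: the (source, result) pair constrains identity, and the (reference, result) pair constrains makeup fidelity.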
Related papers
- FFHQ-Makeup: Paired Synthetic Makeup Dataset with Facial Consistency Across Multiple Styles [1.4680035572775534]
We present FFHQ-Makeup, a high-quality synthetic makeup dataset that pairs each identity with multiple makeup styles. To the best of our knowledge, this is the first work that focuses specifically on constructing a makeup dataset.
arXiv Detail & Related papers (2025-08-05T09:16:43Z) - AvatarMakeup: Realistic Makeup Transfer for 3D Animatable Head Avatars [89.31582684550723]
Coherent Duplication optimizes a global UV map by recoding the averaged facial attributes among the generated makeup images. Experiments demonstrate that AvatarMakeup achieves state-of-the-art makeup transfer quality and consistency throughout animation.
arXiv Detail & Related papers (2025-07-03T08:26:57Z) - BeautyBank: Encoding Facial Makeup in Latent Space [2.113770213797994]
We propose BeautyBank, a novel makeup encoder that disentangles pattern features of bare and makeup faces.
Our method encodes makeup features into a high-dimensional space, preserving essential details necessary for makeup reconstruction.
We also propose a Progressive Makeup Tuning (PMT) strategy, specifically designed to enhance the preservation of detailed makeup features.
arXiv Detail & Related papers (2024-11-18T01:52:31Z) - DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates with a gain of 12.98% under black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z) - Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model [15.380297080210559]
Current makeup transfer methods are limited to simple makeup styles, making them difficult to apply in real-world scenarios. We introduce Stable-Makeup, a novel diffusion-based makeup transfer method capable of robustly transferring a wide range of real-world makeup.
arXiv Detail & Related papers (2024-03-12T15:53:14Z) - BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer [73.39598356799974]
We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC).
A component-specific correspondence directly transfers the makeup style of a reference image to the corresponding components of the target face.
As an auxiliary, the long-range visual dependencies of the Transformer are introduced for effective global makeup transfer.
arXiv Detail & Related papers (2022-12-12T12:38:27Z) - DRAN: Detailed Region-Adaptive Normalization for Conditional Image Synthesis [25.936764522125703]
We propose a novel normalization module, named Detailed Region-Adaptive Normalization (DRAN).
It adaptively learns both fine-grained and coarse-grained style representations.
We collect a new makeup dataset (Makeup-Complex dataset) that contains a wide range of complex makeup styles.
arXiv Detail & Related papers (2021-09-29T16:19:37Z) - PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information from with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z) - SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer [68.38955698584758]
We propose a novel makeup transfer method called 3D-Aware Shadow and Occlusion Robust GAN (SOGAN).
We first fit a 3D face model and then disentangle the faces into shape and texture.
In the texture branch, we map the texture to the UV space and design a UV texture generator to transfer the makeup.
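Once both faces are unwrapped into a shared UV layout, transferring makeup can be viewed as blending per-texel colors where a makeup region mask is active. The toy function below illustrates that idea with plain alpha blending; the mask, blend weight, and tiny textures are stand-ins and not SOGAN's learned UV texture generator.

```python
def blend_uv_textures(target, reference, mask, alpha=0.8):
    """Blend reference makeup into the target texture where mask == 1.

    target, reference: HxW lists of (r, g, b) tuples in a shared UV layout.
    mask: HxW list of 0/1 flags marking makeup regions (e.g. lips, eyelids).
    alpha: makeup strength; 0 keeps the target, 1 copies the reference.
    """
    out = []
    for t_row, r_row, m_row in zip(target, reference, mask):
        row = []
        for t, r, m in zip(t_row, r_row, m_row):
            if m:  # inside a makeup region: mix toward the reference color
                row.append(tuple(round((1 - alpha) * tc + alpha * rc)
                                 for tc, rc in zip(t, r)))
            else:  # outside: keep the bare-face texel unchanged
                row.append(t)
        out.append(row)
    return out

bare = [[(200, 180, 170), (200, 180, 170)]]   # bare-skin texels
ref = [[(150, 30, 40), (150, 30, 40)]]        # red-lip reference texels
lip_mask = [[1, 0]]                           # only the first texel is lip
blended = blend_uv_textures(bare, ref, lip_mask, alpha=0.5)
```

Working in UV space is what makes the shadow/occlusion robustness plausible: the blend operates on a pose-normalized texture rather than on the rendered image, so occluded screen pixels do not corrupt the transfer.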
arXiv Detail & Related papers (2021-04-21T14:48:49Z) - Cosmetic-Aware Makeup Cleanser [109.41917954315784]
Face verification aims at determining whether a pair of face images belongs to the same identity.
Recent studies have revealed the negative impact of facial makeup on the verification performance.
This paper proposes a semantic-aware makeup cleanser (SAMC) to remove facial makeup under different poses and expressions.
arXiv Detail & Related papers (2020-04-20T09:18:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.