Towards High-Fidelity, Identity-Preserving Real-Time Makeup Transfer: Decoupling Style Generation
- URL: http://arxiv.org/abs/2509.02445v2
- Date: Thu, 04 Sep 2025 21:00:18 GMT
- Title: Towards High-Fidelity, Identity-Preserving Real-Time Makeup Transfer: Decoupling Style Generation
- Authors: Lydia Kin Ching Chau, Zhi Yu, Ruowei Jiang
- Abstract summary: We present a novel framework for real-time virtual makeup try-on. It achieves high-fidelity, identity-preserving cosmetic transfer with robust temporal consistency.
- Score: 10.030819778997836
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a novel framework for real-time virtual makeup try-on that achieves high-fidelity, identity-preserving cosmetic transfer with robust temporal consistency. In live makeup transfer applications, it is critical to synthesize temporally coherent results that accurately replicate fine-grained makeup and preserve the user's identity. However, existing methods often struggle to disentangle semitransparent cosmetics from skin tones and other identity features, causing identity shifts and raising fairness concerns. Furthermore, current methods lack real-time capabilities and fail to maintain temporal consistency, limiting practical adoption. To address these challenges, we decouple makeup transfer into two steps: transparent makeup mask extraction and graphics-based mask rendering. After the makeup extraction step, the makeup rendering can be performed in real time, enabling live makeup try-on. Our makeup extraction model is trained on pseudo-ground-truth data generated via two complementary methods: a graphics-based rendering pipeline and an unsupervised k-means clustering approach. To further enhance transparency estimation and color fidelity, we propose specialized training objectives, including alpha-weighted reconstruction and lip color losses. Our method achieves robust makeup transfer across diverse poses, expressions, and skin tones while preserving temporal smoothness. Extensive experiments demonstrate that our approach outperforms existing baselines in capturing fine details, maintaining temporal stability, and preserving identity integrity.
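The paper does not spell out its renderer here, but the graphics-based mask rendering step it describes reduces to per-pixel alpha compositing of an extracted RGBA makeup mask over each live frame, which is what makes the rendering stage real-time. A minimal NumPy sketch under that assumption (the function name and array layout are ours, not from the paper):

```python
import numpy as np

def composite_makeup(face: np.ndarray, makeup_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite a transparent makeup mask over a face frame.

    face:        H x W x 3 float array in [0, 1] (the live video frame)
    makeup_rgba: H x W x 4 float array in [0, 1] (RGB color + alpha),
                 e.g. the output of the one-time makeup extraction step
    """
    rgb = makeup_rgba[..., :3]
    alpha = makeup_rgba[..., 3:4]   # keep last axis for broadcasting
    # Standard "over" operator: out = alpha * src + (1 - alpha) * dst
    return alpha * rgb + (1.0 - alpha) * face

# Example: a half-opaque red lip tint composited over a mid-gray frame
face = np.full((2, 2, 3), 0.5)
tint = np.zeros((2, 2, 4))
tint[..., 0] = 1.0   # red channel
tint[..., 3] = 0.5   # 50% opacity
out = composite_makeup(face, tint)
print(out[0, 0])     # → [0.75 0.25 0.25]
```

Because the expensive extraction runs once per makeup style, only this cheap compositing runs per frame, which is consistent with the real-time claim in the abstract.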
Related papers
- Supervised makeup transfer with a curated dataset: Decoupling identity and makeup features for enhanced transformation [21.71636658071446]
Diffusion models have shown strong progress in generative tasks, offering a more stable alternative to GAN-based approaches for makeup transfer. Existing methods often suffer from limited datasets, poor disentanglement between identity and makeup features, and weak controllability. We construct a curated high-quality dataset using a train-generate-filter-retrain strategy that combines synthetic, realistic, and filtered samples to improve diversity and fidelity. We also propose a text-guided mechanism that allows fine-grained, region-specific control, enabling users to modify eye, lip, or face makeup with natural-language prompts.
arXiv Detail & Related papers (2026-01-31T13:46:38Z) - DreamMakeup: Face Makeup Customization using Latent Diffusion Models [42.98379243094055]
We introduce DreamMakeup, a novel training-free, diffusion-model-based makeup customization method. Our model demonstrates notable improvements over existing GAN-based and recent diffusion-based frameworks.
arXiv Detail & Related papers (2025-10-13T02:29:23Z) - FLUX-Makeup: High-Fidelity, Identity-Consistent, and Robust Makeup Transfer via Diffusion Transformer [20.199540657879037]
We propose FLUX-Makeup, a high-fidelity, identity-consistent, and robust makeup transfer framework. Our method directly leverages source-reference image pairs to achieve superior transfer performance. FLUX-Makeup achieves state-of-the-art performance, exhibiting strong robustness across diverse scenarios.
arXiv Detail & Related papers (2025-08-07T06:42:40Z) - FFHQ-Makeup: Paired Synthetic Makeup Dataset with Facial Consistency Across Multiple Styles [1.4680035572775534]
We present FFHQ-Makeup, a high-quality synthetic makeup dataset that pairs each identity with multiple makeup styles. To the best of our knowledge, this is the first work that focuses specifically on constructing a makeup dataset.
arXiv Detail & Related papers (2025-08-05T09:16:43Z) - AvatarMakeup: Realistic Makeup Transfer for 3D Animatable Head Avatars [89.31582684550723]
Coherent Duplication optimizes a global UV map by recording the averaged facial attributes among the generated makeup images. Experiments demonstrate that AvatarMakeup achieves state-of-the-art makeup transfer quality and consistency throughout animation.
arXiv Detail & Related papers (2025-07-03T08:26:57Z) - Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Ours is a relatively unified approach and so it is resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z) - CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based-prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
arXiv Detail & Related papers (2024-02-08T23:51:49Z) - Personalized Face Inpainting with Diffusion Models by Parallel Visual Attention [55.33017432880408]
This paper proposes the use of Parallel Visual Attention (PVA) in conjunction with diffusion models to improve inpainting results.
We train the added attention modules and identity encoder on CelebAHQ-IDI, a dataset proposed for identity-preserving face inpainting.
Experiments demonstrate that PVA attains unparalleled identity resemblance in both face inpainting and face inpainting with language guidance tasks.
arXiv Detail & Related papers (2023-12-06T15:39:03Z) - BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer [73.39598356799974]
We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC). A component-specific correspondence directly transfers the makeup style of a reference image to the corresponding facial components.
As an auxiliary, the long-range visual dependencies of Transformer are introduced for effective global makeup transfer.
arXiv Detail & Related papers (2022-12-12T12:38:27Z) - DRAN: Detailed Region-Adaptive Normalization for Conditional Image Synthesis [25.936764522125703]
We propose a novel normalization module, named Detailed Region-Adaptive Normalization (DRAN).
It adaptively learns both fine-grained and coarse-grained style representations.
We collect a new makeup dataset (Makeup-Complex dataset) that contains a wide range of complex makeup styles.
arXiv Detail & Related papers (2021-09-29T16:19:37Z) - Cosmetic-Aware Makeup Cleanser [109.41917954315784]
Face verification aims at determining whether a pair of face images belongs to the same identity.
Recent studies have revealed the negative impact of facial makeup on the verification performance.
This paper proposes a semantic-aware makeup cleanser (SAMC) to remove facial makeup under different poses and expressions.
arXiv Detail & Related papers (2020-04-20T09:18:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.