FLUX-Makeup: High-Fidelity, Identity-Consistent, and Robust Makeup Transfer via Diffusion Transformer
- URL: http://arxiv.org/abs/2508.05069v1
- Date: Thu, 07 Aug 2025 06:42:40 GMT
- Title: FLUX-Makeup: High-Fidelity, Identity-Consistent, and Robust Makeup Transfer via Diffusion Transformer
- Authors: Jian Zhu, Shanyuan Liu, Liuzhuozheng Li, Yue Gong, He Wang, Bo Cheng, Yuhang Ma, Liebucha Wu, Xiaoyu Wu, Dawei Leng, Yuhui Yin, Yang Xu
- Abstract summary: We propose FLUX-Makeup, a high-fidelity, identity-consistent, and robust makeup transfer framework. Our method directly leverages source-reference image pairs to achieve superior transfer performance. FLUX-Makeup achieves state-of-the-art performance, exhibiting strong robustness across diverse scenarios.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Makeup transfer aims to apply the makeup style from a reference face to a target face and has been increasingly adopted in practical applications. Existing GAN-based approaches typically rely on carefully designed loss functions to balance transfer quality and facial identity consistency, while diffusion-based methods often depend on additional face-control modules or algorithms to preserve identity. However, these auxiliary components tend to introduce extra errors, leading to suboptimal transfer results. To overcome these limitations, we propose FLUX-Makeup, a high-fidelity, identity-consistent, and robust makeup transfer framework that eliminates the need for any auxiliary face-control components. Instead, our method directly leverages source-reference image pairs to achieve superior transfer performance. Specifically, we build our framework upon FLUX-Kontext, using the source image as its native conditional input. Furthermore, we introduce RefLoRAInjector, a lightweight makeup feature injector that decouples the reference pathway from the backbone, enabling efficient and comprehensive extraction of makeup-related information. In parallel, we design a robust and scalable data generation pipeline to provide more accurate supervision during training. The paired makeup datasets produced by this pipeline significantly surpass the quality of all existing datasets. Extensive experiments demonstrate that FLUX-Makeup achieves state-of-the-art performance, exhibiting strong robustness across diverse scenarios.
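The paper itself ships no code, but the core RefLoRAInjector idea (a lightweight, LoRA-style low-rank branch that updates only reference-image tokens while the FLUX backbone stays frozen) can be sketched in PyTorch. All names below, such as `RefLoRALinear` and `ref_mask`, are illustrative assumptions rather than the authors' API:

```python
import torch
import torch.nn as nn

class RefLoRALinear(nn.Module):
    """A frozen linear layer plus a low-rank update applied only to reference tokens."""

    def __init__(self, base: nn.Linear, rank: int = 8, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # the backbone projection stays frozen
            p.requires_grad_(False)
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)     # the injector starts as a no-op
        self.scale = scale

    def forward(self, x: torch.Tensor, ref_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); ref_mask: (batch, tokens, 1), 1.0 on reference tokens
        lora = self.up(self.down(x)) * self.scale
        # Makeup features flow only through the reference pathway.
        return self.base(x) + ref_mask * lora

proj = nn.Linear(64, 64)
layer = RefLoRALinear(proj, rank=4)
tokens = torch.randn(2, 10, 64)
mask = torch.zeros(2, 10, 1)
mask[:, 5:] = 1.0                          # last five tokens come from the reference image
print(layer(tokens, mask).shape)           # torch.Size([2, 10, 64])
```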
Related papers
- DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion [94.46904504076124]
Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content.
Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations.
We introduce DiffusionFake, a novel framework that reverses the generative process of face forgeries to enhance the generalization of detection models.
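DiffusionFake actually guides detector features through a frozen Stable Diffusion model; the toy sketch below keeps only the spirit of that guidance, regularizing features with two heads that must recover source- and target-face embeddings. The simple MSE form and every name here are assumptions:

```python
import torch
import torch.nn as nn

class GuidedDetector(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim), nn.ReLU())
        self.source_head = nn.Linear(dim, dim)  # asked to recover source-face features
        self.target_head = nn.Linear(dim, dim)  # asked to recover target-face features
        self.classifier = nn.Linear(dim, 1)     # real / fake logit

    def forward(self, x):
        f = self.encoder(x)
        return self.classifier(f), self.source_head(f), self.target_head(f)

def training_loss(logit, src_pred, tgt_pred, label, src_emb, tgt_emb, w: float = 0.1):
    bce = nn.functional.binary_cross_entropy_with_logits(logit.squeeze(1), label)
    guide = nn.functional.mse_loss(src_pred, src_emb) + nn.functional.mse_loss(tgt_pred, tgt_emb)
    return bce + w * guide  # detection loss plus reconstruction guidance

model = GuidedDetector()
x = torch.randn(4, 3, 32, 32)
label = torch.tensor([0.0, 1.0, 1.0, 0.0])
src_emb, tgt_emb = torch.randn(4, 128), torch.randn(4, 128)
loss = training_loss(*model(x), label, src_emb, tgt_emb)
loss.backward()
```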
arXiv Detail & Related papers (2024-10-06T06:22:43Z)
- Face Forgery Detection with Elaborate Backbone [50.914676786151574]
Face Forgery Detection aims to determine whether a digital face is real or fake.
Previous FFD models directly employ existing backbones to represent and extract forgery cues.
We propose leveraging the ViT network with self-supervised learning on real-face datasets to pre-train a backbone.
We then build a competitive backbone fine-tuning framework that strengthens the backbone's ability to extract diverse forgery cues.
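A minimal sketch of that recipe using torchvision's ViT, assuming the self-supervised face pre-training has already produced suitable weights (the model is randomly initialized here so the snippet runs offline):

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# In the paper the backbone is pre-trained with self-supervision on real faces;
# load those weights in practice. Here we start from random initialization.
model = vit_b_16(weights=None)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # real vs. fake

# Fine-tune the whole backbone at a low learning rate to adapt it to forgery cues.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)   # stand-in batch of face crops
labels = torch.tensor([0, 1])          # 0 = real, 1 = fake
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```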
arXiv Detail & Related papers (2024-09-25T13:57:16Z)
- Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control [59.954322727683746]
Face-Adapter is designed for high-precision and high-fidelity face editing for pre-trained diffusion models.
Face-Adapter achieves comparable or even superior performance in terms of motion control precision, ID retention capability, and generation quality.
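Face-Adapter's actual architecture is more elaborate; the sketch below only shows the generic pattern of injecting identity tokens into a frozen diffusion backbone through a gated cross-attention residual, with all names being assumptions:

```python
import torch
import torch.nn as nn

class IDCrossAttentionAdapter(nn.Module):
    def __init__(self, dim: int, id_dim: int, heads: int = 4):
        super().__init__()
        self.to_kv = nn.Linear(id_dim, dim * 2)           # project ID embedding to keys/values
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))          # start with no influence

    def forward(self, hidden: torch.Tensor, id_emb: torch.Tensor) -> torch.Tensor:
        # hidden: (B, T, dim) backbone activations; id_emb: (B, N, id_dim) face tokens
        k, v = self.to_kv(id_emb).chunk(2, dim=-1)
        out, _ = self.attn(hidden, k, v)
        return hidden + self.gate.tanh() * out            # residual, gated identity injection

adapter = IDCrossAttentionAdapter(dim=64, id_dim=512)
h = torch.randn(1, 16, 64)
face = torch.randn(1, 4, 512)
print(adapter(h, face).shape)  # torch.Size([1, 16, 64])
```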
arXiv Detail & Related papers (2024-05-21T17:50:12Z)
- FaceCat: Enhancing Face Recognition Security with a Unified Diffusion Model [30.0523477092216]
Face anti-spoofing (FAS) and adversarial detection (FAD) have been regarded as critical technologies to ensure the safety of face recognition systems.
This paper aims to handle both tasks within a unified framework by breaking through two primary obstacles: 1) the suboptimal face feature representation and 2) the scarcity of training data.
arXiv Detail & Related papers (2024-04-14T09:01:26Z)
- DiffFAE: Advancing High-fidelity One-shot Facial Appearance Editing with Space-sensitive Customization and Semantic Preservation [84.0586749616249]
This paper presents DiffFAE, a one-stage and highly-efficient diffusion-based framework tailored for high-fidelity Facial Appearance Editing.
For high-fidelity transfer of query attributes, we adopt Space-sensitive Physical Customization (SPC), which ensures fidelity and generalization ability.
In order to preserve source attributes, we introduce the Region-responsive Semantic Composition (RSC).
This module is guided to learn decoupled source-regarding features, thereby better preserving the identity and alleviating artifacts from non-facial attributes such as hair, clothes, and background.
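A minimal sketch of such region-responsive composition, keeping source features wherever a face-parsing map marks non-facial labels (toy tensors; the function name and label choice are assumptions):

```python
import torch

def semantic_compose(gen_feat: torch.Tensor, src_feat: torch.Tensor,
                     seg: torch.Tensor, preserve_ids) -> torch.Tensor:
    """gen_feat/src_feat: (B, C, H, W); seg: (B, H, W) integer parsing labels."""
    keep = torch.zeros_like(seg, dtype=gen_feat.dtype)
    for i in preserve_ids:                   # e.g. hair, clothes, background labels
        keep = keep + (seg == i).to(gen_feat.dtype)
    keep = keep.unsqueeze(1)                 # (B, 1, H, W), broadcast over channels
    return keep * src_feat + (1.0 - keep) * gen_feat

gen = torch.randn(1, 8, 16, 16)
src = torch.randn(1, 8, 16, 16)
seg = torch.randint(0, 4, (1, 16, 16))       # toy parsing map with 4 classes
out = semantic_compose(gen, src, seg, preserve_ids=[0, 3])
print(out.shape)  # torch.Size([1, 8, 16, 16])
```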
arXiv Detail & Related papers (2024-03-26T12:53:10Z)
- Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model [15.380297080210559]
Current makeup transfer methods are limited to simple makeup styles, making them difficult to apply in real-world scenarios. We introduce Stable-Makeup, a novel diffusion-based makeup transfer method capable of robustly transferring a wide range of real-world makeup.
arXiv Detail & Related papers (2024-03-12T15:53:14Z)
- CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based-prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
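Schematically, the method encodes a degraded face into the VQGAN latent space, refines the latent with the learned diffusion prior, and decodes the result. The toy modules below illustrate only this control flow and are not the paper's networks:

```python
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 4, 8, stride=8)           # stand-in for a VQGAN encoder
decoder = nn.ConvTranspose2d(4, 3, 8, stride=8)  # stand-in for a VQGAN decoder
denoiser = nn.Conv2d(4, 4, 3, padding=1)         # stand-in for the latent diffusion prior

@torch.no_grad()
def restore(degraded: torch.Tensor, steps: int = 10, noise_level: float = 0.5):
    z = encoder(degraded)
    z = z + noise_level * torch.randn_like(z)    # perturb toward the prior
    for _ in range(steps):                       # iterative latent refinement
        z = z - 0.1 * denoiser(z)                # toy reverse-diffusion update
    return decoder(z)

img = torch.randn(1, 3, 64, 64)
print(restore(img).shape)  # torch.Size([1, 3, 64, 64])
```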
arXiv Detail & Related papers (2024-02-08T23:51:49Z)
- BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer [73.39598356799974]
We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC).
It uses a component-specific correspondence to directly transfer the makeup style of a reference image to the corresponding facial components.
As an auxiliary, the long-range visual dependencies of the Transformer are introduced for effective global makeup transfer.
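The component-specific correspondence can be sketched as cross-attention from source-component tokens to reference-component tokens (an illustrative toy, not the BeautyREC implementation):

```python
import torch
import torch.nn as nn

class ComponentTransfer(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, src_tokens, ref_tokens):
        # src_tokens, ref_tokens: (B, N, dim) features of one facial component
        out, _ = self.attn(src_tokens, ref_tokens, ref_tokens)
        return src_tokens + out  # source identity plus transferred makeup style

transfer = ComponentTransfer()
lips_src, lips_ref = torch.randn(1, 32, 64), torch.randn(1, 48, 64)
print(transfer(lips_src, lips_ref).shape)  # torch.Size([1, 32, 64])
```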
arXiv Detail & Related papers (2022-12-12T12:38:27Z)
- More comprehensive facial inversion for more effective expression recognition [8.102564078640274]
We propose a novel generative method based on the image inversion mechanism for the FER task, termed Inversion FER (IFER).
ASIT is equipped with an image inversion discriminator that measures the cosine similarity of semantic features between source and generated images, constrained by a distribution alignment loss.
We extensively evaluate ASIT on facial datasets such as FFHQ and CelebA-HQ, showing that our approach achieves state-of-the-art facial inversion performance.
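The two constraints on the inversion discriminator can be sketched as losses over a shared feature extractor; the toy network and the statistics-matching form chosen for the alignment term are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

def inversion_losses(source: torch.Tensor, generated: torch.Tensor):
    f_src = feature_net(source)
    f_gen = feature_net(generated)
    # Semantic similarity term: push generated features toward the source's.
    cos_loss = 1.0 - F.cosine_similarity(f_src, f_gen, dim=-1).mean()
    # Simple distribution alignment term: match batch feature statistics.
    align_loss = F.mse_loss(f_gen.mean(0), f_src.mean(0)) + \
                 F.mse_loss(f_gen.std(0), f_src.std(0))
    return cos_loss, align_loss

src = torch.randn(4, 3, 32, 32)
gen = torch.randn(4, 3, 32, 32)
cos_loss, align_loss = inversion_losses(src, gen)
print(cos_loss.item(), align_loss.item())
```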
arXiv Detail & Related papers (2022-11-24T12:31:46Z)
- EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer [13.304362849679391]
We propose an Exquisite and locally editable GAN for makeup transfer (EleGANt).
It encodes facial attributes into pyramidal feature maps to preserve high-frequency information.
EleGANt is the first to achieve customized local editing within arbitrary areas by performing corresponding edits on the feature maps.
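Mask-guided local editing on feature maps can be illustrated by blending plain and makeup feature maps under a user-drawn mask resized to the feature resolution (a toy sketch, not EleGANt's editing module):

```python
import torch
import torch.nn.functional as F

def local_edit(feat_plain: torch.Tensor, feat_makeup: torch.Tensor,
               user_mask: torch.Tensor) -> torch.Tensor:
    """feat_*: (B, C, H, W) feature maps; user_mask: (B, 1, h, w) edit region in [0, 1]."""
    mask = F.interpolate(user_mask, size=feat_plain.shape[-2:], mode="bilinear")
    return mask * feat_makeup + (1 - mask) * feat_plain  # makeup only inside the region

plain = torch.randn(1, 16, 32, 32)
makeup = torch.randn(1, 16, 32, 32)
mask = torch.zeros(1, 1, 128, 128)
mask[..., 40:80, 30:100] = 1.0       # e.g. an eyeshadow region drawn by the user
print(local_edit(plain, makeup, mask).shape)  # torch.Size([1, 16, 32, 32])
```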
arXiv Detail & Related papers (2022-07-20T11:52:07Z)
- DRAN: Detailed Region-Adaptive Normalization for Conditional Image Synthesis [25.936764522125703]
We propose a novel normalization module, named Detailed Region-Adaptive Normalization (DRAN).
It adaptively learns both fine-grained and coarse-grained style representations.
We collect a new makeup dataset (Makeup-Complex dataset) that contains a wide range of complex makeup styles.
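DRAN itself is more detailed, but a SPADE-like toy conveys the region-adaptive idea: per-region scale and shift parameters are predicted from a style code and scattered onto the spatial grid before modulating normalized features. Everything below is an assumption-level sketch:

```python
import torch
import torch.nn as nn

class RegionAdaptiveNorm(nn.Module):
    def __init__(self, channels: int, style_dim: int, num_regions: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # One (gamma, beta) pair per region, predicted from the style code.
        self.to_params = nn.Linear(style_dim, num_regions * channels * 2)
        self.num_regions, self.channels = num_regions, channels

    def forward(self, x, style, region_onehot):
        # x: (B, C, H, W); style: (B, style_dim); region_onehot: (B, R, H, W)
        B = x.size(0)
        params = self.to_params(style).view(B, self.num_regions, 2, self.channels)
        # Scatter per-region parameters onto the spatial grid via the region masks.
        gamma = torch.einsum("brhw,brc->bchw", region_onehot, params[:, :, 0])
        beta = torch.einsum("brhw,brc->bchw", region_onehot, params[:, :, 1])
        return self.norm(x) * (1 + gamma) + beta

norm = RegionAdaptiveNorm(channels=8, style_dim=16, num_regions=3)
x = torch.randn(2, 8, 4, 4)
style = torch.randn(2, 16)
regions = torch.zeros(2, 3, 4, 4)
regions[:, 0] = 1.0                  # toy map: every pixel belongs to region 0
print(norm(x, style, regions).shape)  # torch.Size([2, 8, 4, 4])
```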
arXiv Detail & Related papers (2021-09-29T16:19:37Z)