Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model
- URL: http://arxiv.org/abs/2403.07764v1
- Date: Tue, 12 Mar 2024 15:53:14 GMT
- Title: Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model
- Authors: Yuxuan Zhang, Lifu Wei, Qing Zhang, Yiren Song, Jiaming Liu, Huaxia
Li, Xu Tang, Yao Hu, Haibo Zhao
- Score: 35.01727715493926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current makeup transfer methods are limited to simple makeup styles, making
them difficult to apply in real-world scenarios. In this paper, we introduce
Stable-Makeup, a novel diffusion-based makeup transfer method capable of
robustly transferring a wide range of real-world makeup onto user-provided
faces. Stable-Makeup is based on a pre-trained diffusion model and utilizes a
Detail-Preserving (D-P) makeup encoder to encode makeup details. It also
employs content and structural control modules to preserve the content and
structural information of the source image. With the aid of our newly added
makeup cross-attention layers in the U-Net, we can accurately transfer the detailed
makeup to the corresponding position in the source image. After
content-structure decoupling training, Stable-Makeup can maintain the content and
facial structure of the source image. Moreover, our method demonstrates strong
robustness and generalizability, making it applicable to various tasks such as
cross-domain makeup transfer and makeup-guided text-to-image generation.
Extensive experiments demonstrate that our approach delivers state-of-the-art
(SOTA) results among existing makeup transfer methods and shows highly promising
potential for broad application in related fields.
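To make the core mechanism concrete, the sketch below shows one way encoder-produced makeup tokens could be injected into a U-Net attention block through an added cross-attention layer, as the abstract describes at a high level. This is a minimal PyTorch illustration, not the authors' implementation: the module name MakeupCrossAttentionBlock, the dimensions, and the block layout are all assumptions.
```python
# Minimal sketch: a transformer-style U-Net block with an extra cross-attention
# layer that lets spatial features attend to makeup tokens from a (hypothetical)
# detail-preserving makeup encoder. All names and sizes are illustrative.
import torch
import torch.nn as nn

class MakeupCrossAttentionBlock(nn.Module):
    def __init__(self, dim: int, makeup_dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Added cross-attention: queries from U-Net features,
        # keys/values from makeup tokens of a different width.
        self.makeup_attn = nn.MultiheadAttention(
            dim, num_heads, kdim=makeup_dim, vdim=makeup_dim, batch_first=True
        )
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, makeup_tokens: torch.Tensor) -> torch.Tensor:
        # x:             (batch, seq, dim)           flattened U-Net feature map
        # makeup_tokens: (batch, n_tokens, makeup_dim) from the makeup encoder
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        # Each spatial position pulls the makeup detail that belongs there.
        x = x + self.makeup_attn(h, makeup_tokens, makeup_tokens, need_weights=False)[0]
        x = x + self.mlp(self.norm3(x))
        return x

# Toy usage: a 32x32 feature map (1024 tokens) attending to 77 makeup tokens.
block = MakeupCrossAttentionBlock(dim=320, makeup_dim=768)
feats = torch.randn(1, 32 * 32, 320)
makeup = torch.randn(1, 77, 768)
print(block(feats, makeup).shape)  # torch.Size([1, 1024, 320])
```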
Related papers
- DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z)
- BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer [73.39598356799974]
We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC).
It uses component-specific correspondences to directly transfer the makeup style of a reference image to the corresponding components.
As an auxiliary, the long-range visual dependencies of Transformer are introduced for effective global makeup transfer.
arXiv Detail & Related papers (2022-12-12T12:38:27Z)
- EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer [13.304362849679391]
We propose an Exquisite and locally editable GAN for makeup transfer (EleGANt).
It encodes facial attributes into pyramidal feature maps to preserve high-frequency information.
EleGANt is the first to achieve customized local editing within arbitrary areas by corresponding editing on the feature maps.
arXiv Detail & Related papers (2022-07-20T11:52:07Z)
- PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information from with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z)
- SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer [68.38955698584758]
We propose a novel makeup transfer method called 3D-Aware Shadow and Occlusion Robust GAN (SOGAN).
We first fit a 3D face model and then disentangle the faces into shape and texture.
In the texture branch, we map the texture to the UV space and design a UV texture generator to transfer the makeup (see the sketch after this list).
arXiv Detail & Related papers (2021-04-21T14:48:49Z)
- Lipstick ain't enough: Beyond Color Matching for In-the-Wild Makeup Transfer [20.782984081934213]
We propose a holistic makeup transfer framework that can handle both color and pattern makeup components.
It consists of an improved color transfer branch and a novel pattern transfer branch to learn all makeup properties.
Our framework achieves state-of-the-art performance on both light and extreme makeup styles.
arXiv Detail & Related papers (2021-04-05T12:12:56Z)
- MakeupBag: Disentangling Makeup Extraction and Application [0.0]
MakeupBag is a novel method for automatic makeup style transfer.
It allows customization and pixel-specific modification of the extracted makeup style.
In a comparative analysis, MakeupBag is shown to outperform current state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-03T18:44:24Z)
- Cosmetic-Aware Makeup Cleanser [109.41917954315784]
Face verification aims at determining whether a pair of face images belongs to the same identity.
Recent studies have revealed the negative impact of facial makeup on the verification performance.
This paper proposes a semantic-aware makeup cleanser (SAMC) to remove facial makeup under different poses and expressions.
arXiv Detail & Related papers (2020-04-20T09:18:23Z)
- Local Facial Makeup Transfer via Disentangled Representation [18.326829657548025]
We propose a novel unified adversarial disentangling network to decompose face images into four independent components, i.e., personal identity, lips makeup style, eyes makeup style and face makeup style.
Our approach can produce more realistic and accurate makeup transfer results compared to the state-of-the-art methods.
arXiv Detail & Related papers (2020-03-27T00:25:13Z)
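As a companion to the SOGAN entry above, the toy sketch below illustrates the UV-space idea: once faces are unwrapped into a shared UV texture space, makeup can be blended per-texel and mapped back into image space through the source face's UV coordinates. The 3D fitting and unwrapping are assumed (faked here with random tensors), and the simple alpha blend stands in for SOGAN's learned UV texture generator.
```python
# Toy illustration (not SOGAN's implementation) of makeup transfer in UV space.
import torch
import torch.nn.functional as F

def transfer_in_uv(src_uv_tex, ref_uv_tex, alpha=0.7):
    """Blend reference makeup into the source texture, texel by texel.
    A real method would use a learned UV texture generator instead."""
    return (1 - alpha) * src_uv_tex + alpha * ref_uv_tex

def render_from_uv(uv_tex, uv_coords):
    """Sample the UV texture back into image space.
    uv_tex:    (1, 3, H_uv, W_uv) texture map
    uv_coords: (1, H, W, 2) per-pixel UV coordinates in [-1, 1]"""
    return F.grid_sample(uv_tex, uv_coords, align_corners=False)

# Fake data standing in for the outputs of 3D face fitting and unwrapping.
src_tex = torch.rand(1, 3, 256, 256)
ref_tex = torch.rand(1, 3, 256, 256)
uv = torch.rand(1, 512, 512, 2) * 2 - 1  # would come from the fitted 3D face

made_up_tex = transfer_in_uv(src_tex, ref_tex)
image = render_from_uv(made_up_tex, uv)
print(image.shape)  # torch.Size([1, 3, 512, 512])
```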
This list is automatically generated from the titles and abstracts of the papers on this site.