MakeupBag: Disentangling Makeup Extraction and Application
- URL: http://arxiv.org/abs/2012.02157v1
- Date: Thu, 3 Dec 2020 18:44:24 GMT
- Title: MakeupBag: Disentangling Makeup Extraction and Application
- Authors: Dokhyam Hoshen
- Abstract summary: MakeupBag is a novel method for automatic makeup style transfer.
It allows customization and pixel specific modification of the extracted makeup style.
In a comparative analysis, MakeupBag is shown to outperform current state-of-the-art approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces MakeupBag, a novel method for automatic makeup style
transfer. Our proposed technique can transfer a new makeup style from a
reference face image to another previously unseen facial photograph. We solve
makeup disentanglement and facial makeup application as separable objectives,
in contrast to other current deep methods that entangle the two tasks.
This separation presents a significant advantage for our approach, as it
allows customization and pixel-specific modification of the extracted makeup
style, which is not possible with current methods. Extensive experiments, both
qualitative and numerical, are conducted demonstrating the high quality and
accuracy of the images produced by our method. Furthermore, in contrast to most
other current methods, MakeupBag tackles both classical and extreme/costume
makeup transfer. In a comparative analysis, MakeupBag is shown to outperform
current state-of-the-art approaches.
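The abstract's central idea is a two-stage pipeline: first extract the makeup from a reference face into an explicit, editable representation, then apply that representation to a new face. A minimal sketch of this idea is below; the class names (`MakeupExtractor`, `MakeupApplier`) and the trivial arithmetic stand in for learned models and are purely illustrative, not the paper's actual architecture.

```python
import numpy as np

class MakeupExtractor:
    """Stage 1 (hypothetical): disentangle makeup from a reference face."""
    def extract(self, reference: np.ndarray) -> np.ndarray:
        # A real model would separate makeup from identity; here we just
        # return a per-pixel "makeup layer" of the same shape.
        return reference.astype(np.float32) * 0.5

class MakeupApplier:
    """Stage 2 (hypothetical): apply a makeup layer to a target face."""
    def apply(self, target: np.ndarray, makeup: np.ndarray) -> np.ndarray:
        return np.clip(target.astype(np.float32) + makeup, 0, 255)

# Because the makeup layer is an explicit intermediate, it can be edited
# pixel-wise before application -- the customization the abstract highlights.
reference = np.full((4, 4, 3), 100, dtype=np.uint8)
target = np.full((4, 4, 3), 200, dtype=np.uint8)

makeup = MakeupExtractor().extract(reference)
makeup[0, 0] = 0.0                      # pixel-specific edit
result = MakeupApplier().apply(target, makeup)
```

The point of the sketch is the interface, not the math: entangled methods map (reference, target) directly to an output image, while the separable formulation exposes `makeup` as a first-class object that can be inspected and modified between the two stages.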
Related papers
- DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates with a gain of 12.98% under black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z)
- Gorgeous: Create Your Desired Character Facial Makeup from Any Ideas [9.604390113485834]
$Gorgeous$ is a novel diffusion-based makeup application method.
It does not require the presence of a face in the reference images.
$Gorgeous$ can effectively generate distinctive character facial makeup inspired by the chosen thematic reference images.
arXiv Detail & Related papers (2024-04-22T07:40:53Z)
- Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model [35.01727715493926]
Current makeup transfer methods are limited to simple makeup styles, making them difficult to apply in real-world scenarios.
We introduce Stable-Makeup, a novel diffusion-based makeup transfer method capable of robustly transferring a wide range of real-world makeup.
arXiv Detail & Related papers (2024-03-12T15:53:14Z)
- BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer [73.39598356799974]
We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC).
It uses component-specific correspondence to directly transfer the makeup style of a reference image to the corresponding facial components.
As an auxiliary, the long-range visual dependencies of the Transformer are introduced for effective global makeup transfer.
arXiv Detail & Related papers (2022-12-12T12:38:27Z)
- PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information from with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z)
- SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer [68.38955698584758]
We propose a novel makeup transfer method called 3D-Aware Shadow and Occlusion Robust GAN (SOGAN).
We first fit a 3D face model and then disentangle the faces into shape and texture.
In the texture branch, we map the texture to the UV space and design a UV texture generator to transfer the makeup.
arXiv Detail & Related papers (2021-04-21T14:48:49Z)
- Lipstick ain't enough: Beyond Color Matching for In-the-Wild Makeup Transfer [20.782984081934213]
We propose a holistic makeup transfer framework that can handle all the mentioned makeup components.
It consists of an improved color transfer branch and a novel pattern transfer branch to learn all makeup properties.
Our framework achieves state-of-the-art performance on both light and extreme makeup styles.
arXiv Detail & Related papers (2021-04-05T12:12:56Z)
- SLGAN: Style- and Latent-guided Generative Adversarial Network for Desirable Makeup Transfer and Removal [44.290305928805836]
There are five features to consider when using generative adversarial networks to apply makeup to photos of the human face.
Several related works have been proposed, mainly using generative adversarial networks (GANs).
This paper closes the gap with an innovative style- and latent-guided GAN (SLGAN).
arXiv Detail & Related papers (2020-09-16T08:54:20Z)
- Cosmetic-Aware Makeup Cleanser [109.41917954315784]
Face verification aims at determining whether a pair of face images belongs to the same identity.
Recent studies have revealed the negative impact of facial makeup on the verification performance.
This paper proposes a semantic-aware makeup cleanser (SAMC) to remove facial makeup under different poses and expressions.
arXiv Detail & Related papers (2020-04-20T09:18:23Z)
- Local Facial Makeup Transfer via Disentangled Representation [18.326829657548025]
We propose a novel unified adversarial disentangling network to decompose face images into four independent components, i.e., personal identity, lips makeup style, eyes makeup style and face makeup style.
Our approach can produce more realistic and accurate makeup transfer results compared to the state-of-the-art methods.
arXiv Detail & Related papers (2020-03-27T00:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.