BeautyBank: Encoding Facial Makeup in Latent Space
- URL: http://arxiv.org/abs/2411.11231v2
- Date: Sun, 24 Nov 2024 15:25:34 GMT
- Title: BeautyBank: Encoding Facial Makeup in Latent Space
- Authors: Qianwen Lu, Xingchao Yang, Takafumi Taketomi
- Abstract summary: We propose BeautyBank, a novel makeup encoder that disentangles pattern features of bare and makeup faces.
Our method encodes makeup features into a high-dimensional space, preserving essential details necessary for makeup reconstruction.
We also propose a Progressive Makeup Tuning (PMT) strategy, specifically designed to enhance the preservation of detailed makeup features.
- Abstract: The advancement of makeup transfer, editing, and image encoding has demonstrated their effectiveness and superior quality. However, existing makeup works primarily focus on low-dimensional features such as color distributions and patterns, limiting their versatility across a wide range of makeup applications. Furthermore, existing high-dimensional latent encoding methods mainly target global features such as structure and style, and are less effective for tasks that require detailed attention to local color and pattern features of makeup. To overcome these limitations, we propose BeautyBank, a novel makeup encoder that disentangles pattern features of bare and makeup faces. Our method encodes makeup features into a high-dimensional space, preserving essential details necessary for makeup reconstruction and broadening the scope of potential makeup research applications. We also propose a Progressive Makeup Tuning (PMT) strategy, specifically designed to enhance the preservation of detailed makeup features while preventing the inclusion of irrelevant attributes. We further explore novel makeup applications, including facial image generation with makeup injection and makeup similarity measurement. Extensive empirical experiments validate that our method offers superior task adaptability and holds significant potential for widespread application in various makeup-related fields. Furthermore, to address the lack of large-scale, high-quality paired makeup datasets in the field, we constructed the Bare-Makeup Synthesis Dataset (BMS), comprising 324,000 pairs of 512x512 pixel images of bare and makeup-enhanced faces.
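The makeup similarity application mentioned in the abstract can be illustrated as a distance computation between latent makeup codes. The sketch below is a minimal, hypothetical example: the paper does not specify its similarity metric, so cosine similarity and random vectors standing in for encoded makeup codes are assumptions for illustration only.

```python
# Illustrative sketch of makeup similarity in latent space.
# The encoder is hypothetical; random 512-d vectors stand in for the
# high-dimensional makeup codes a model like BeautyBank would produce.
import numpy as np

def makeup_similarity(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Cosine similarity between two latent makeup codes, in [-1, 1]."""
    a = code_a / np.linalg.norm(code_a)
    b = code_b / np.linalg.norm(code_b)
    return float(np.dot(a, b))

rng = np.random.default_rng(0)
code = rng.normal(size=512)                  # stand-in for one makeup style
similar = code + 0.1 * rng.normal(size=512)  # slightly perturbed style
different = rng.normal(size=512)             # unrelated style

# A perturbed code scores higher than an unrelated one.
assert makeup_similarity(code, similar) > makeup_similarity(code, different)
```

In practice the codes would come from the trained encoder rather than random sampling; the metric itself is interchangeable (e.g., Euclidean distance on normalized codes gives an equivalent ranking).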
Related papers
- DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z) - Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model [35.01727715493926]
Current makeup transfer methods are limited to simple makeup styles, making them difficult to apply in real-world scenarios.
We introduce Stable-Makeup, a novel diffusion-based makeup transfer method capable of robustly transferring a wide range of real-world makeup.
arXiv Detail & Related papers (2024-03-12T15:53:14Z) - Automated Material Properties Extraction For Enhanced Beauty Product Discovery and Makeup Virtual Try-on [11.214610032800396]
Our work introduces an automated pipeline that utilizes multiple customized machine learning models to extract essential material attributes from makeup product images.
We demonstrate the applicability of our approach by successfully extending it to other makeup categories like lipstick and foundation.
Our proposed method showcases its effectiveness in cross-category product discovery, specifically in recommending makeup products that perfectly match a specified outfit.
arXiv Detail & Related papers (2023-12-01T18:41:22Z) - BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer [73.39598356799974]
We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC)
A component-specific correspondence directly transfers the makeup style of a reference image to the corresponding facial components.
As an auxiliary, the long-range visual dependencies of Transformer are introduced for effective global makeup transfer.
arXiv Detail & Related papers (2022-12-12T12:38:27Z) - PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information from with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z) - SOGAN: 3D-Aware Shadow and Occlusion Robust GAN for Makeup Transfer [68.38955698584758]
We propose a novel makeup transfer method called 3D-Aware Shadow and Occlusion Robust GAN (SOGAN)
We first fit a 3D face model and then disentangle the faces into shape and texture.
In the texture branch, we map the texture to the UV space and design a UV texture generator to transfer the makeup.
arXiv Detail & Related papers (2021-04-21T14:48:49Z) - Lipstick ain't enough: Beyond Color Matching for In-the-Wild Makeup Transfer [20.782984081934213]
We propose a holistic makeup transfer framework that can handle all the mentioned makeup components.
It consists of an improved color transfer branch and a novel pattern transfer branch to learn all makeup properties.
Our framework achieves state-of-the-art performance on both light and extreme makeup styles.
arXiv Detail & Related papers (2021-04-05T12:12:56Z) - MakeupBag: Disentangling Makeup Extraction and Application [0.0]
MakeupBag is a novel method for automatic makeup style transfer.
It allows customization and pixel specific modification of the extracted makeup style.
In a comparative analysis, MakeupBag is shown to outperform current state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-03T18:44:24Z) - Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z) - Cosmetic-Aware Makeup Cleanser [109.41917954315784]
Face verification aims at determining whether a pair of face images belongs to the same identity.
Recent studies have revealed the negative impact of facial makeup on the verification performance.
This paper proposes a semantic-aware makeup cleanser (SAMC) to remove facial makeup under different poses and expressions.
arXiv Detail & Related papers (2020-04-20T09:18:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.