BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer
- URL: http://arxiv.org/abs/2212.05855v1
- Date: Mon, 12 Dec 2022 12:38:27 GMT
- Title: BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer
- Authors: Qixin Yan and Chunle Guo and Jixin Zhao and Yuekun Dai and Chen Change Loy and Chongyi Li
- Abstract summary: We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC).
A component-specific correspondence directly transfers the makeup style of a reference image to the corresponding components of a source image.
As an auxiliary, the long-range visual dependencies of a Transformer are introduced for effective global makeup transfer.
- Score: 73.39598356799974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a Robust, Efficient, and Component-specific makeup
transfer method (abbreviated as BeautyREC). In a unique departure from prior
methods that leverage global attention, simply concatenate features, or
implicitly manipulate features in latent space, we propose a component-specific
correspondence to directly transfer the makeup style of a reference image to
the corresponding components (e.g., skin, lips, eyes) of a source image,
enabling elaborate and accurate local makeup transfer. As an auxiliary, the long-range
visual dependencies of Transformer are introduced for effective global makeup
transfer. Instead of the commonly used cycle structure that is complex and
unstable, we employ a content consistency loss coupled with a content encoder
to implement efficient single-path makeup transfer. The key insights of this
study are modeling component-specific correspondence for local makeup transfer,
capturing long-range dependencies for global makeup transfer, and enabling
efficient makeup transfer via a single-path structure. We also contribute
BeautyFace, a makeup transfer dataset to supplement existing datasets. This
dataset contains 3,000 faces, covering more diverse makeup styles, face poses,
and races. Each face has an annotated parsing map. Extensive experiments
demonstrate the effectiveness of our method against state-of-the-art methods.
Moreover, our method is appealing as it has only 1M parameters,
outperforming the state-of-the-art methods (BeautyGAN: 8.43M, PSGAN: 12.62M,
SCGAN: 15.30M, CPM: 9.24M, SSAT: 10.48M).
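
The component-specific correspondence can be pictured as cross-attention restricted to matching face-parsing regions: source pixels of a component attend only to reference pixels of the same component, so makeup is transferred region by region rather than through a single global attention map. Below is a minimal PyTorch sketch of this idea under that assumption; the function name, mask layout, and dot-product attention are illustrative and not taken from the paper's code.

```python
# Hypothetical sketch of component-specific correspondence: for each facial
# component (skin, lips, eyes), source pixels attend only to reference pixels
# of the same component, so makeup is transferred per region.
import torch
import torch.nn.functional as F


def component_correspondence(src_feat, ref_feat, src_masks, ref_masks):
    """src_feat, ref_feat: (B, C, H, W) feature maps.
    src_masks, ref_masks: (B, K, H, W) binary parsing masks, one per component.
    Returns makeup features warped onto the source layout, shape (B, C, H, W)."""
    B, C, H, W = src_feat.shape
    K = src_masks.shape[1]
    src = src_feat.flatten(2).transpose(1, 2)        # (B, HW, C)
    ref = ref_feat.flatten(2).transpose(1, 2)        # (B, HW, C)
    out = torch.zeros_like(src)
    for k in range(K):
        s_m = src_masks[:, k].flatten(1).bool()      # (B, HW) source region
        r_m = ref_masks[:, k].flatten(1).bool()      # (B, HW) reference region
        for b in range(B):
            if s_m[b].any() and r_m[b].any():
                q = src[b][s_m[b]]                   # (Ns, C) source queries
                kv = ref[b][r_m[b]]                  # (Nr, C) reference keys/values
                attn = F.softmax(q @ kv.t() / C ** 0.5, dim=-1)
                out[b][s_m[b]] = attn @ kv           # per-component transfer
    return out.transpose(1, 2).reshape(B, C, H, W)
```

In this reading, the Transformer's long-range attention acts as an auxiliary global path on top of these per-component transfers.

The single-path training idea (a content consistency loss coupled with a content encoder, in place of a cycle structure) can likewise be sketched: an encoder maps both the source image and the transfer result to content features, and their distance is penalized so identity and layout are preserved without a reverse generator. The following is a sketch under that assumption; `content_encoder`, `generator`, and `makeup_loss_fn` are hypothetical stand-ins, not the paper's architecture.

```python
# Hypothetical single-path training step with a content consistency loss.
# content_encoder is any feature extractor assumed to capture identity/layout;
# generator produces the made-up result from (source, reference) in one pass.
import torch
import torch.nn.functional as F


def content_consistency_loss(content_encoder, source, result):
    """L1 distance between content features of the source and the output."""
    with torch.no_grad():
        target = content_encoder(source)             # content of the source face
    return F.l1_loss(content_encoder(result), target)


def training_step(generator, content_encoder, makeup_loss_fn,
                  source, reference, lambda_content=1.0):
    result = generator(source, reference)            # single forward path, no cycle
    loss = makeup_loss_fn(result, reference) \
        + lambda_content * content_consistency_loss(content_encoder, source, result)
    return result, loss
```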
Related papers
- Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model [35.01727715493926]
Current makeup transfer methods are limited to simple makeup styles, making them difficult to apply in real-world scenarios.
We introduce Stable-Makeup, a novel diffusion-based makeup transfer method capable of robustly transferring a wide range of real-world makeup.
arXiv Detail & Related papers (2024-03-12T15:53:14Z)
- SARA: Controllable Makeup Transfer with Spatial Alignment and Region-Adaptive Normalization [67.90315365909244]
We propose a novel Spatial Alignment and Region-Adaptive normalization method (SARA) in this paper.
Our method generates detailed makeup transfer results that can handle large spatial misalignments and achieve part-specific and shade-controllable makeup transfer.
Experimental results show that our SARA method outperforms existing methods and achieves state-of-the-art performance on two public datasets.
arXiv Detail & Related papers (2023-11-28T14:46:51Z)
- EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer [13.304362849679391]
We propose an Exquisite and locally editable GAN for makeup transfer (EleGANt).
It encodes facial attributes into pyramidal feature maps to preserve high-frequency information.
EleGANt is the first to achieve customized local editing within arbitrary areas by corresponding editing on the feature maps.
arXiv Detail & Related papers (2022-07-20T11:52:07Z)
- CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer [58.020470877242865]
We devise a universally versatile style transfer method capable of performing artistic, photo-realistic, and video style transfer jointly.
We make a mild and reasonable assumption that global inconsistency is dominated by local inconsistencies and devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local patches.
CCPL can preserve the coherence of the content source during style transfer without degrading stylization.
arXiv Detail & Related papers (2022-07-11T12:09:41Z)
- SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal [17.512402192317992]
We propose a unified Symmetric Semantic-Aware Transformer (SSAT) network to realize makeup transfer and removal simultaneously.
A novel SSCFT module and a weakly supervised semantic loss are proposed to model and facilitate the establishment of accurate semantic correspondence.
Experiments show that our method obtains more visually accurate makeup transfer results.
arXiv Detail & Related papers (2021-12-07T11:08:12Z)
- PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal [176.47249346856393]
PSGAN++ is capable of performing both detail-preserving makeup transfer and effective makeup removal.
For makeup transfer, PSGAN++ uses a Makeup Distill Network to extract makeup information.
For makeup removal, PSGAN++ applies an Identity Distill Network to embed the identity information from with-makeup images into identity matrices.
arXiv Detail & Related papers (2021-05-26T04:37:57Z)
- Facial Attribute Transformers for Precise and Robust Makeup Transfer [79.41060385695977]
We propose a novel Facial Attribute Transformer (FAT) and its variant Spatial FAT for high-quality makeup transfer.
FAT is able to model the semantic correspondences and interactions between the source face and reference face, and then precisely estimate and transfer the facial attributes.
We also integrate thin plate splines (TPS) into FAT, thus creating Spatial FAT, which is the first method that can transfer geometric attributes in addition to color and texture.
arXiv Detail & Related papers (2021-04-07T03:39:02Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By employing existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)