SARA: Controllable Makeup Transfer with Spatial Alignment and Region-Adaptive Normalization
- URL: http://arxiv.org/abs/2311.16828v2
- Date: Tue, 21 May 2024 13:43:16 GMT
- Title: SARA: Controllable Makeup Transfer with Spatial Alignment and Region-Adaptive Normalization
- Authors: Xiaojing Zhong, Xinyi Huang, Zhonghua Wu, Guosheng Lin, Qingyao Wu
- Abstract summary: We propose a novel Spatial Alignment and Region-Adaptive normalization method (SARA) in this paper.
Our method generates detailed makeup transfer results that can handle large spatial misalignments and achieve part-specific and shade-controllable makeup transfer.
Experimental results show that our SARA method outperforms existing methods and achieves state-of-the-art performance on two public datasets.
- Score: 67.90315365909244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Makeup transfer is the process of transferring the makeup style from a reference image to a source image while preserving the source image's identity. This technique is highly desirable and has many applications. However, existing methods lack fine-level control of the makeup style, making it challenging to achieve high-quality results when dealing with large spatial misalignments. To address this problem, we propose a novel Spatial Alignment and Region-Adaptive normalization method (SARA). Our method generates detailed makeup transfer results that can handle large spatial misalignments and achieve part-specific and shade-controllable makeup transfer. Specifically, SARA comprises three modules. First, a spatial alignment module preserves the spatial context of makeup and provides a target semantic map for guiding the shape-independent style codes. Second, a region-adaptive normalization module decouples shape and makeup style using per-region encoding and normalization, which facilitates the elimination of spatial misalignments. Third, a makeup fusion module blends identity features and makeup style by injecting learned scale and bias parameters. Experimental results show that SARA outperforms existing methods and achieves state-of-the-art performance on two public datasets.
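The region-adaptive normalization and makeup fusion steps described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows the general idea of per-region normalization followed by injection of learned scale (gamma) and bias (beta) parameters, in the spirit of region-adaptive normalization layers. The function name, array shapes, and the use of a discrete semantic label map are assumptions for illustration.

```python
import numpy as np

def region_adaptive_norm(features, region_mask, gamma, beta, eps=1e-5):
    """Illustrative per-region normalization with style injection.

    features:    (C, H, W) identity feature map from the source image
    region_mask: (H, W) integer semantic map (e.g. lips, eyes, skin)
    gamma, beta: (num_regions, C) per-region style scale and bias,
                 which a real model would learn from the reference image
    """
    out = np.empty_like(features, dtype=np.float64)
    for r in np.unique(region_mask):
        sel = region_mask == r                  # pixels belonging to region r
        region = features[:, sel]               # (C, N) features in region r
        mu = region.mean(axis=1, keepdims=True)
        sigma = region.std(axis=1, keepdims=True)
        normed = (region - mu) / (sigma + eps)  # whiten within the region
        # inject the learned makeup style as a per-channel scale and shift
        out[:, sel] = gamma[r][:, None] * normed + beta[r][:, None]
    return out
```

With identity parameters (gamma of ones, beta of zeros) the output is simply the per-region whitened features; varying gamma and beta per region is what would allow part-specific, shade-controllable transfer.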
Related papers
- Towards Effective Image Manipulation Detection with Proposal Contrastive Learning [61.5469708038966]
We propose Proposal Contrastive Learning (PCL) for effective image manipulation detection.
Our PCL consists of a two-stream architecture by extracting two types of global features from RGB and noise views respectively.
Our PCL can be easily adapted to unlabeled data in practice, which can reduce manual labeling costs and promote more generalizable features.
arXiv Detail & Related papers (2022-10-16T13:30:13Z) - EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer [13.304362849679391]
We propose Exquisite and locally editable GAN for makeup transfer (EleGANt)
It encodes facial attributes into pyramidal feature maps to preserve high-frequency information.
EleGANt is the first to achieve customized local editing within arbitrary areas by corresponding editing on the feature maps.
arXiv Detail & Related papers (2022-07-20T11:52:07Z) - Towards Full-to-Empty Room Generation with Structure-Aware Feature Encoding and Soft Semantic Region-Adaptive Normalization [67.64622529651677]
We propose a simple yet effective adjusted fully differentiable soft semantic region-adaptive normalization module (softSEAN) block.
Besides mitigating training complexity and non-differentiability issues, our approach surpasses the compared methods both quantitatively and qualitatively.
Our softSEAN block can be used as a drop-in module for existing discriminative and generative models.
arXiv Detail & Related papers (2021-12-10T09:00:13Z) - SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal [17.512402192317992]
We propose a unified Symmetric Semantic-Aware Transformer (SSAT) network to realize makeup transfer and removal simultaneously.
A novel SSCFT module and a weakly supervised semantic loss are proposed to model and facilitate the establishment of accurate semantic correspondence.
Experiments show that our method obtains more visually accurate makeup transfer results.
arXiv Detail & Related papers (2021-12-07T11:08:12Z) - Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find the low-dimensional representation of attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z) - Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z) - SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance Normalization [71.85169368997738]
Artistic style transfer aims to transfer the style characteristics of one image onto another image while retaining its content.
Self-attention-based approaches have achieved partial success at this but suffer from unwanted artifacts.
This paper aims to combine the best of both worlds: self-attention and normalization.
arXiv Detail & Related papers (2021-05-13T08:01:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.