SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal
- URL: http://arxiv.org/abs/2112.03631v1
- Date: Tue, 7 Dec 2021 11:08:12 GMT
- Title: SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal
- Authors: Zhaoyang Sun and Yaxiong Chen and Shengwu Xiong
- Abstract summary: We propose a unified Symmetric Semantic-Aware Transformer (SSAT) network to realize makeup transfer and removal simultaneously.
A novel SSCFT module and a weakly supervised semantic loss are proposed to model and facilitate the establishment of accurate semantic correspondence.
Experiments show that our method obtains more visually accurate makeup transfer results.
- Score: 17.512402192317992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Makeup transfer aims not only to extract the makeup style of the reference
image, but also to render that style at the semantically corresponding positions of the
target image. However, most existing methods focus on the former and ignore the latter,
and therefore fail to achieve the desired results. To solve these problems, we propose a
unified Symmetric Semantic-Aware Transformer (SSAT) network, which incorporates semantic
correspondence learning to realize makeup transfer and removal simultaneously.
In SSAT, a novel Symmetric Semantic Corresponding Feature Transfer (SSCFT)
module and a weakly supervised semantic loss are proposed to model and
facilitate the establishment of accurate semantic correspondence. In the
generation process, the extracted makeup features are spatially warped by
SSCFT to achieve semantic alignment with the target image; the warped
makeup features are then combined with the unmodified makeup-irrelevant features to
produce the final result. Experiments show that our method obtains more
visually accurate makeup transfer results, and a user study comparing it with
other state-of-the-art makeup transfer methods confirms its superiority.
Besides, we verify the robustness of the proposed method under differences in
expression and pose and in object-occlusion scenes, and extend it to
video makeup transfer. Code will be available at
https://gitee.com/sunzhaoyang0304/ssat-msp.
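The feature-warping step described in the abstract can be sketched as attention-style soft correspondence: per-pixel semantic features of the target and reference are compared, and the reference makeup features are resampled through the resulting correspondence. This is an illustrative reading of the abstract only, not the authors' implementation; the dot-product comparison, the softmax, and the function name `sscft_warp` are assumptions.

```python
import numpy as np

def sscft_warp(sem_tgt, sem_ref, makeup_ref):
    """Warp reference makeup features into the target's spatial layout.

    sem_tgt:    (N, C) semantic features of the target image (N pixels)
    sem_ref:    (M, C) semantic features of the reference image (M pixels)
    makeup_ref: (M, D) makeup features extracted from the reference image
    Returns:    (N, D) makeup features semantically aligned with the target.
    """
    # dot-product correlation between every target/reference pixel pair
    corr = sem_tgt @ sem_ref.T                        # (N, M)
    # softmax over reference pixels gives a soft semantic correspondence
    attn = np.exp(corr - corr.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    # resample the reference makeup features through the correspondence
    return attn @ makeup_ref

# toy example: 4 pixels with sharply one-hot semantics, so the
# correspondence is near-identity and the makeup passes through unchanged
sem = 10 * np.eye(4)
makeup = np.arange(8.0).reshape(4, 2)
warped = sscft_warp(sem, sem, makeup)
```

With distinct, sharply separated semantics the soft correspondence collapses to a near-identity matching, so `warped` closely reproduces `makeup`; in the real network the semantic features would come from learned encoders rather than one-hot codes.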
Related papers
- Semantic Image Synthesis with Unconditional Generator [8.65146533481257]
We propose to employ a pre-trained unconditional generator and rearrange its feature maps according to proxy masks.
The proxy masks are prepared from the feature maps of random samples in the generator by simple clustering.
Our method is versatile across various applications such as free-form spatial editing of real images, sketch-to-photo, and even scribble-to-photo.
arXiv Detail & Related papers (2024-02-22T09:10:28Z)
- SARA: Controllable Makeup Transfer with Spatial Alignment and Region-Adaptive Normalization [67.90315365909244]
We propose a novel Spatial Alignment and Region-Adaptive normalization method (SARA) in this paper.
Our method generates detailed makeup transfer results that can handle large spatial misalignments and achieve part-specific and shade-controllable makeup transfer.
Experimental results show that our SARA method outperforms existing methods and achieves state-of-the-art performance on two public datasets.
arXiv Detail & Related papers (2023-11-28T14:46:51Z)
- SemST: Semantically Consistent Multi-Scale Image Translation via Structure-Texture Alignment [32.41465452443824]
Unsupervised image-to-image (I2I) translation learns cross-domain image mapping that transfers input from the source domain to output in the target domain.
Different semantic statistics in source and target domains result in content discrepancy known as semantic distortion.
This work proposes SemST, a novel I2I method that maintains semantic consistency during translation.
arXiv Detail & Related papers (2023-10-08T03:44:58Z)
- BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer [73.39598356799974]
We propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC).
A component-specific correspondence directly transfers the makeup style of a reference image to the corresponding components.
As an auxiliary, the long-range visual dependencies of Transformer are introduced for effective global makeup transfer.
arXiv Detail & Related papers (2022-12-12T12:38:27Z)
- More comprehensive facial inversion for more effective expression recognition [8.102564078640274]
We propose a novel generative method based on the image inversion mechanism for the FER task, termed Inversion FER (IFER).
ASIT is equipped with an image inversion discriminator that measures the cosine similarity of semantic features between source and generated images, constrained by a distribution alignment loss.
We extensively evaluate ASIT on facial datasets such as FFHQ and CelebA-HQ, showing that our approach achieves state-of-the-art facial inversion performance.
arXiv Detail & Related papers (2022-11-24T12:31:46Z)
- Diffusion-based Image Translation using Disentangled Style and Content Representation [51.188396199083336]
Diffusion-based image translation guided by semantic texts or a single target image has enabled flexible style transfer.
It is often difficult to maintain the original content of the image during the reverse diffusion.
We present a novel diffusion-based unsupervised image translation method using disentangled style and content representation.
Our experimental results show that the proposed method outperforms state-of-the-art baseline models in both text-guided and image-guided translation tasks.
arXiv Detail & Related papers (2022-09-30T06:44:37Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present a simple yet efficient method named FaceSwapper for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art performance on benchmarks with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- Semi-parametric Makeup Transfer via Semantic-aware Correspondence [99.02329132102098]
A large discrepancy between the source non-makeup image and the reference makeup image is one of the key challenges in makeup transfer.
Non-parametric techniques have a high potential for addressing the pose, expression, and occlusion discrepancies.
We propose a Semi-parametric Makeup Transfer (SpMT) method, which combines the reciprocal strengths of non-parametric and parametric mechanisms.
arXiv Detail & Related papers (2022-03-04T12:54:19Z)
- SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance Normalization [71.85169368997738]
Artistic style transfer aims to transfer the style characteristics of one image onto another image while retaining its content.
Self-Attention-based approaches have tackled this issue with partial success but suffer from unwanted artifacts.
This paper aims to combine the best of both worlds: self-attention and normalization.
arXiv Detail & Related papers (2021-05-13T08:01:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.