SLGAN: Style- and Latent-guided Generative Adversarial Network for
Desirable Makeup Transfer and Removal
- URL: http://arxiv.org/abs/2009.07557v3
- Date: Thu, 24 Sep 2020 13:08:51 GMT
- Title: SLGAN: Style- and Latent-guided Generative Adversarial Network for
Desirable Makeup Transfer and Removal
- Authors: Daichi Horita and Kiyoharu Aizawa
- Abstract summary: There are five features to consider when using generative adversarial networks to apply makeup to photos of the human face.
Several related works have been proposed, mainly using generative adversarial networks (GANs).
This paper closes the gap with an innovative style- and latent-guided GAN (SLGAN).
- Score: 44.290305928805836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are five features to consider when using generative adversarial
networks to apply makeup to photos of the human face. These features include
(1) facial components, (2) interactive color adjustments, (3) makeup
variations, (4) robustness to poses and expressions, and (5) the use of
multiple reference images. Several related works have been proposed, mainly
using generative adversarial networks (GANs). Unfortunately, none of them have
addressed all five features simultaneously. This paper closes the gap with an
innovative style- and latent-guided GAN (SLGAN). We provide a novel perceptual
makeup loss and a style-invariant decoder that can transfer makeup styles based
on histogram matching to avoid the identity-shift problem. In our experiments,
we show that our SLGAN is better than or comparable to state-of-the-art
methods. Furthermore, we show that our proposal can interpolate facial makeup
images to determine the unique features, compare existing methods, and help
users find desirable makeup configurations.
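The abstract's mention of transferring makeup styles via histogram matching can be made concrete with a small sketch. The snippet below is a minimal illustration of per-region, per-channel histogram matching between a source face and a reference makeup image; it is not the authors' SLGAN implementation, and the function names, the uint8 RGB assumption, and the use of face-parsing masks to select regions such as lips or eye shadow are illustrative assumptions.

```python
import numpy as np


def match_channel_histogram(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Remap the 1-D pixel array `src` so its empirical CDF matches that of `ref`."""
    src_vals, src_idx, src_counts = np.unique(
        src, return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref, return_counts=True)

    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size

    # For each source quantile, look up the reference value at the same quantile.
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx]


def transfer_makeup_color(source: np.ndarray, reference: np.ndarray,
                          src_mask: np.ndarray, ref_mask: np.ndarray) -> np.ndarray:
    """Copy the color statistics of `reference[ref_mask]` onto `source[src_mask]`.

    `source` and `reference` are (H, W, 3) uint8 RGB images; the masks are
    boolean (H, W) arrays selecting one facial region (e.g. lips from a
    face-parsing model). Only the masked source pixels are modified.
    """
    out = source.astype(np.float64)
    for c in range(3):  # match each color channel independently
        channel = out[..., c]  # view into `out`
        channel[src_mask] = match_channel_histogram(
            source[..., c][src_mask], reference[..., c][ref_mask])
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied region by region (lips, skin, eye shadow), this kind of histogram-matched image is a common pseudo-ground-truth in GAN-based makeup transfer; per the abstract, SLGAN pairs such a target with a perceptual makeup loss and a style-invariant decoder to avoid the identity-shift problem.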
Related papers
- Gorgeous: Create Your Desired Character Facial Makeup from Any Ideas [9.604390113485834]
$Gorgeous$ is a novel diffusion-based makeup application method.
It does not require the presence of a face in the reference images.
$Gorgeous$ can effectively generate distinctive character facial makeup inspired by the chosen thematic reference images.
arXiv Detail & Related papers (2024-04-22T07:40:53Z)
- When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based approach to face morphing and demonstrate its superiority over StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- UnGANable: Defending Against GAN-based Face Manipulation [69.90981797810348]
Deepfakes pose severe threats of visual misinformation to our society.
One representative deepfake application is face manipulation that modifies a victim's facial attributes in an image.
We propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation.
arXiv Detail & Related papers (2022-10-03T14:20:01Z)
- Semantics-Guided Object Removal for Facial Images: with Broad Applicability and Robust Style Preservation [29.162655333387452]
Object removal and inpainting in facial images is the task of specifically targeting objects that occlude a face, removing them, and replacing them with a properly reconstructed facial region.
Two approaches, based respectively on a U-Net and on a modulated generator, have been widely adopted for this task, each with its own advantages and inherent drawbacks.
Here, we propose the Semantics-Guided Inpainting Network (SGIN), a modification of the modulated generator that aims to exploit its strong generative capability while preserving the high-fidelity details of the original image.
arXiv Detail & Related papers (2022-09-29T00:09:12Z)
- StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
arXiv Detail & Related papers (2022-09-27T16:35:16Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- One-shot Face Reenactment Using Appearance Adaptive Normalization [30.615671641713945]
The paper proposes a novel generative adversarial network for one-shot face reenactment.
It can animate a single face image to a different pose-and-expression while keeping its original appearance.
arXiv Detail & Related papers (2021-02-08T03:36:30Z)
- StyleGAN2 Distillation for Feed-forward Image Manipulation [5.5080625617632]
StyleGAN2 is a state-of-the-art network in generating realistic images.
We propose a way to distill a particular image manipulation of StyleGAN2 into an image-to-image network trained in a paired fashion.
arXiv Detail & Related papers (2020-03-07T14:02:06Z)