FacialGAN: Style Transfer and Attribute Manipulation on Synthetic Faces
- URL: http://arxiv.org/abs/2110.09425v1
- Date: Mon, 18 Oct 2021 15:53:38 GMT
- Title: FacialGAN: Style Transfer and Attribute Manipulation on Synthetic Faces
- Authors: Ricard Durall, Jireh Jam, Dominik Strassel, Moi Hoon Yap, Janis Keuper
- Abstract summary: FacialGAN is a novel framework enabling simultaneous rich style transfer and interactive facial attribute manipulation.
We show our model's capacity in producing visually compelling results in style transfer, attribute manipulation, diversity and face verification.
- Score: 9.664892091493586
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Facial image manipulation is a generation task where the output face is
shifted towards an intended target direction in terms of facial attributes and
styles. Recent works have achieved great success in various editing techniques
such as style transfer and attribute translation. However, current approaches
either focus on pure style transfer or on the translation of predefined sets of
attributes with restricted interactivity. To address this issue, we propose
FacialGAN, a novel framework enabling simultaneous rich style transfer and
interactive facial attribute manipulation. While preserving the identity of a
source image, we transfer the diverse styles of a target image to the source
image. We then incorporate the geometry information of a segmentation mask to
provide fine-grained manipulation of facial attributes. Finally, a
multi-objective learning strategy is introduced to optimize the loss of each
specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-HQ as
semantic mask labels, show our model's capacity to produce visually compelling
results in style transfer, attribute manipulation, diversity, and face
verification. For reproducibility, we provide an interactive open-source tool
to perform facial manipulations, and the PyTorch implementation of the model.
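The abstract does not spell out the exact loss formulation, but a multi-objective strategy of this kind is commonly implemented as a weighted sum of task-specific losses. The sketch below is a minimal, hypothetical PyTorch illustration of combining a style-reconstruction term, a segmentation-guided attribute term, and an adversarial term; the function names, loss terms, and weights are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of a multi-objective training loss (illustrative assumption,
# not the authors' released implementation).
import torch
import torch.nn.functional as F

def multi_objective_loss(style_feat, target_style_feat,
                         mask_logits, gt_mask, disc_fake_logits,
                         w_style=1.0, w_attr=1.0, w_adv=1.0):
    """Weighted sum of style, attribute (segmentation), and adversarial losses."""
    # Style transfer term: pull the generated style features toward the target's.
    style_loss = F.l1_loss(style_feat, target_style_feat)
    # Attribute term: per-pixel cross-entropy against the semantic mask,
    # injecting the geometry used for fine-grained attribute manipulation.
    attr_loss = F.cross_entropy(mask_logits, gt_mask)
    # Adversarial term: non-saturating generator loss on discriminator logits.
    adv_loss = F.softplus(-disc_fake_logits).mean()
    # Multi-objective combination as a fixed weighted sum.
    return w_style * style_loss + w_attr * attr_loss + w_adv * adv_loss

# Example call with dummy tensors (batch of 4, 19 CelebAMask-HQ classes, 64x64 masks).
loss = multi_objective_loss(
    style_feat=torch.randn(4, 512, requires_grad=True),
    target_style_feat=torch.randn(4, 512),
    mask_logits=torch.randn(4, 19, 64, 64, requires_grad=True),
    gt_mask=torch.randint(0, 19, (4, 64, 64)),
    disc_fake_logits=torch.randn(4, 1, requires_grad=True),
)
loss.backward()  # gradients flow back through all three objectives
```

In practice the weights w_style, w_attr, and w_adv would be tuned per task; the point of the sketch is only that each objective contributes its own loss to a single backward pass.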
Related papers
- When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present FaceSwapper, a simple yet efficient method for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art results on benchmark datasets with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By employing existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors, and separate texture and colors using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- GuidedStyle: Attribute Knowledge Guided Style Manipulation for Semantic Face Editing [39.57994147985615]
We propose a novel learning framework, called GuidedStyle, to achieve semantic face editing on StyleGAN.
Our method is able to perform disentangled and controllable edits along various attributes, including smiling, eyeglasses, gender, mustache and hair color.
arXiv Detail & Related papers (2020-12-22T06:53:31Z)
- S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method outperforms state-of-the-art methods on attribute manipulation by offering finer control of attribute intensity.
arXiv Detail & Related papers (2020-11-30T13:42:39Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, whether guided by random noise or by reference images, while using only the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim to transform an image of a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)