Local Facial Attribute Transfer through Inpainting
- URL: http://arxiv.org/abs/2002.03040v2
- Date: Mon, 12 Oct 2020 09:07:54 GMT
- Title: Local Facial Attribute Transfer through Inpainting
- Authors: Ricard Durall, Franz-Josef Pfreundt, Janis Keuper
- Abstract summary: The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction.
Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
We present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes.
- Score: 3.4376560669160394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of the image, our Attribute Transfer Inpainting Generative Adversarial Network (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, which yields visually convincing results.
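Reduced to its simplest form, the inpainting formulation is a masked composite: erase the region carrying the attribute, let a generator repaint it under a target attribute, and keep every other pixel. The following minimal sketch illustrates this idea; it is not the authors' code, and the generator G, its call signature, and the tensor conventions are assumptions.

```python
import torch

# Illustrative sketch only: ATI-GAN's actual architecture and losses differ.
# Assumption: G is a trained inpainting generator that fills masked pixels
# conditioned on the surrounding face and a target attribute vector.

def transfer_attribute(image, mask, target_attr, G):
    """image: (B, 3, H, W) in [-1, 1]; mask: (B, 1, H, W), 1 = region to redraw;
    target_attr: (B, num_attrs) binary attribute vector (e.g. mustache = 0)."""
    masked = image * (1.0 - mask)          # erase the local region, keep the context
    filled = G(masked, mask, target_attr)  # regenerate it under the target attribute
    # Composite: generated pixels only inside the mask, original pixels outside,
    # so the background is untouched by construction.
    return mask * filled + (1.0 - mask) * image
```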
Related papers
- When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z)
- FacialGAN: Style Transfer and Attribute Manipulation on Synthetic Faces [9.664892091493586]
FacialGAN is a novel framework enabling simultaneous rich style transfer and interactive manipulation of facial attributes.
We show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity, and face verification.
arXiv Detail & Related papers (2021-10-18T15:53:38Z)
- Context-Aware Image Inpainting with Learned Semantic Priors [100.99543516733341]
We introduce pretext tasks that are semantically meaningful for estimating the missing contents.
We propose a context-aware image inpainting model, which adaptively integrates global semantics and local features.
arXiv Detail & Related papers (2021-06-14T08:09:43Z)
- Unsupervised Image Transformation Learning via Generative Adversarial Networks [40.84518581293321]
We study the image transformation problem by learning the underlying transformations from a collection of images using Generative Adversarial Networks (GANs).
We propose an unsupervised learning framework, termed TrGAN, to project images onto a transformation space that is shared by the generator and the discriminator.
arXiv Detail & Related papers (2021-03-13T17:08:19Z)
- Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation [136.53288628437355]
Controllable semantic image editing enables a user to change entire image attributes with few clicks.
Current approaches often suffer from attribute edits that are entangled, global image identity changes, and diminished photo-realism.
We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work which primarily focuses on qualitative evaluation. (A minimal sketch of editing via latent-space navigation appears after this list.)
arXiv Detail & Related papers (2021-02-01T21:38:36Z)
- CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature [31.425326840578098]
We propose a novel GAN model which is designed to edit only the parts of a face pertinent to the target attributes.
CAFE identifies the facial regions to be transformed by considering both target attributes and complementary attributes.
arXiv Detail & Related papers (2020-11-24T05:21:03Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- Semantic Image Manipulation Using Scene Graphs [105.03614132953285]
We introduce a semantic scene graph network that does not require direct supervision for constellation changes or image edits.
This makes it possible to train the system from existing real-world datasets with no additional annotation effort.
arXiv Detail & Related papers (2020-04-07T20:02:49Z)
- Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics [60.92229707497999]
We introduce a novel principle for self-supervised feature learning based on the discrimination of specific transformations of an image.
We demonstrate experimentally that learning to discriminate transformations such as LCI, image warping and rotations yields features with state-of-the-art generalization capabilities. (A rotation-prediction sketch of this pretext-task idea appears after this list.)
arXiv Detail & Related papers (2020-04-05T22:09:08Z)
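To make the latent-navigation mechanism behind "Enjoy Your Editing" concrete, here is a minimal sketch of the generic technique: a learned, attribute-specific direction is added to a latent code before decoding. The generator G, the direction vector, and the function name are illustrative assumptions, not the paper's API.

```python
import torch

# Hypothetical sketch of semantic editing via latent-space navigation.
# Assumption: G is a pretrained GAN generator and `direction` is a learned
# unit vector in latent space associated with one attribute (e.g. "smile").

def edit_attribute(G, w, direction, strength):
    """w: (B, latent_dim) latent codes; strength: scalar slider value.

    Moving w along `direction` shifts the decoded image's attribute while,
    ideally, leaving identity and other attributes untouched."""
    w_edited = w + strength * direction.unsqueeze(0)  # broadcast over the batch
    return G(w_edited)                                # decode the edited images
```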
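Similarly, the transformation-discrimination idea in "Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics" can be illustrated with its simplest instance, rotation prediction. The `encoder` and `head` modules and the training step below are assumptions for the sketch, not the authors' code; the paper additionally discriminates image warping and LCI.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: rotation prediction as a transformation-discrimination
# pretext task. Features are trained to reveal which transformation was applied.

def rotation_batch(images):
    """Apply all four 90-degree rotations; return (rotated images, labels 0-3)."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

def pretext_step(encoder, head, images, optimizer):
    """One self-supervised step: classify the rotation applied to each image."""
    x, y = rotation_batch(images)
    logits = head(encoder(x))          # 4-way transformation classifier
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```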