CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature
- URL: http://arxiv.org/abs/2011.11900v1
- Date: Tue, 24 Nov 2020 05:21:03 GMT
- Title: CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature
- Authors: Jeong-gi Kwak, David K. Han, Hanseok Ko
- Abstract summary: We propose a novel GAN model which is designed to edit only the parts of a face pertinent to the target attributes.
CAFE identifies the facial regions to be transformed by considering both target attributes as well as complementary attributes.
- Score: 31.425326840578098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of face attribute editing is altering a facial image according to
given target attributes such as hair color, mustache, gender, etc. It belongs
to the image-to-image domain transfer problem with a set of attributes
considered as a distinctive domain. There have been some works in multi-domain
transfer problem focusing on facial attribute editing employing Generative
Adversarial Network (GAN). These methods have reported some successes but they
also result in unintended changes in facial regions - meaning the generator
alters regions unrelated to the specified attributes. To address this
unintended altering problem, we propose a novel GAN model which is designed to
edit only the parts of a face pertinent to the target attributes by the concept
of Complementary Attention Feature (CAFE). CAFE identifies the facial regions
to be transformed by considering both target attributes as well as
complementary attributes, which we define as those attributes absent in the
input facial image. In addition, we introduce a complementary feature matching
scheme to help train the generator to utilize the spatial information of
attributes. The effectiveness of the proposed method is demonstrated through
analysis and a comparative study against state-of-the-art methods.
Related papers
- ManiCLIP: Multi-Attribute Face Manipulation from Text [104.30600573306991]
We present a novel multi-attribute face manipulation method based on textual descriptions.
Our method generates natural manipulated faces with minimal text-irrelevant attribute editing.
arXiv Detail & Related papers (2022-10-02T07:22:55Z)
- TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose a novel transformer-based representation for attribute evaluation method (TransFA).
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z)
- Attributes Aware Face Generation with Generative Adversarial Networks [133.44359317633686]
We propose a novel attributes aware face image generator method with generative adversarial networks called AFGAN.
Three stacked generators generate 64×64, 128×128 and 256×256 resolution face images, respectively.
In addition, an image-attribute matching loss is proposed to enhance the correlation between the generated images and input attributes.
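An image-attribute matching loss of this kind is typically a similarity score between an embedding of the generated image and an embedding of its conditioning attribute vector. A hedged numpy sketch of one common formulation (negative cosine similarity; the function name and embeddings are illustrative, not AFGAN's actual loss):

```python
import numpy as np

def matching_loss(img_emb, attr_emb):
    """1 - cosine similarity between an image embedding and the
    embedding of its input attribute vector; minimizing it pulls
    generated images toward the attributes they were conditioned on."""
    img = img_emb / np.linalg.norm(img_emb)
    attr = attr_emb / np.linalg.norm(attr_emb)
    return 1.0 - float(img @ attr)
```

The loss is 0 for perfectly aligned embeddings and grows to 2 for opposed ones, giving the generator a smooth correlation signal to minimize.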
arXiv Detail & Related papers (2020-12-03T09:25:50Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs)
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network [145.4591079418917]
MagGAN learns to only edit the facial parts that are relevant to the desired attribute changes.
A novel mask-guided conditioning strategy is introduced to incorporate the influence region of each attribute change into the generator.
A multi-level patch-wise discriminator structure is proposed to scale our model for high-resolution (1024×1024) face editing.
arXiv Detail & Related papers (2020-10-03T20:56:16Z)
- MU-GAN: Facial Attribute Editing based on Multi-attention Mechanism [12.762892831902349]
We propose a Multi-attention U-Net-based Generative Adversarial Network (MU-GAN)
First, we replace a classic convolutional encoder-decoder with a symmetric U-Net-like structure in a generator.
Second, a self-attention mechanism is incorporated into convolutional layers for modeling long-range and multi-level dependencies.
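Self-attention over convolutional features, as popularized by SAGAN, lets every spatial position attend to every other, which is how such layers capture the long-range dependencies mentioned above. A compact numpy sketch of the core computation (weights and shapes are illustrative, not MU-GAN's actual layer):

```python
import numpy as np

def self_attention(feat, wq, wk, wv):
    """Single-head self-attention over a flattened feature map.
    feat: (N, C) with N = H*W spatial positions. Each position's
    output is a similarity-weighted mix of values from all positions."""
    q, k, v = feat @ wq, feat @ wk, feat @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    return attn @ v
```

In a generator this output is usually added back to the input features with a learned scale, so the layer starts as an identity and gradually mixes in global context during training.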
arXiv Detail & Related papers (2020-09-09T09:25:04Z)
- PA-GAN: Progressive Attention Generative Adversarial Network for Facial Attribute Editing [67.94255549416548]
We propose a progressive attention GAN (PA-GAN) for facial attribute editing.
Our approach achieves correct attribute editing with irrelevant details preserved much better than state-of-the-art methods.
arXiv Detail & Related papers (2020-07-12T03:04:12Z)
- Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network [4.654937118111992]
The proposed approach is based on two generators and two discriminators that utilize the binary as well as the real representation of the attributes.
Experiments on the CelebA dataset show that our method effectively performs multiple attribute editing while keeping other facial details intact.
arXiv Detail & Related papers (2020-04-24T13:38:05Z)
- Local Facial Attribute Transfer through Inpainting [3.4376560669160394]
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted in an intended direction.
Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
We present a novel method for the common sub-task of local attribute transfers, where only parts of a face have to be altered in order to achieve semantic changes.
arXiv Detail & Related papers (2020-02-07T22:57:01Z) - MulGAN: Facial Attribute Editing by Exemplar [2.272764591035106]
Existing methods encode attribute-related information from images into a predefined region of the latent feature space by training on pairs of images with opposite attributes.
They suffer from three limitations: (1) the model must be trained on pairs of images with opposite attributes; (2) weak capability to edit multiple attributes via exemplars; and (3) poor quality of the generated images.
arXiv Detail & Related papers (2019-12-28T04:02:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.