Attributes Aware Face Generation with Generative Adversarial Networks
- URL: http://arxiv.org/abs/2012.01782v1
- Date: Thu, 3 Dec 2020 09:25:50 GMT
- Title: Attributes Aware Face Generation with Generative Adversarial Networks
- Authors: Zheng Yuan, Jie Zhang, Shiguang Shan, Xilin Chen
- Abstract summary: We propose a novel attribute-aware face image generation method with generative adversarial networks, called AFGAN.
Three stacked generators generate $64 \times 64$, $128 \times 128$ and $256 \times 256$ resolution face images respectively.
In addition, an image-attribute matching loss is proposed to enhance the correlation between the generated images and input attributes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown remarkable success in face image generations.
However, most existing methods only generate face images from random
noise and cannot generate face images according to specific attributes. In
this paper, we focus on the problem of face synthesis from attributes, which
aims at generating faces with specific characteristics corresponding to the
given attributes. To this end, we propose a novel attribute-aware face image
generation method with generative adversarial networks, called AFGAN.
Specifically, we first propose a two-path embedding layer and a self-attention
mechanism to convert a binary attribute vector into rich attribute features. Then
three stacked generators generate $64 \times 64$, $128 \times 128$ and $256
\times 256$ resolution face images respectively by taking the attribute
features as input. In addition, an image-attribute matching loss is proposed to
enhance the correlation between the generated images and input attributes.
Extensive experiments on CelebA demonstrate the superiority of our AFGAN in
terms of both qualitative and quantitative evaluations.
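The image-attribute matching loss is described above only at a high level. The following is a minimal, hypothetical sketch of one plausible contrastive-style formulation, assuming cosine similarity between image and attribute features with matched pairs on the batch diagonal; the paper's exact loss may differ.

```python
import numpy as np

def image_attribute_matching_loss(img_feats, attr_feats, temperature=0.1):
    """Contrastive-style matching loss between image and attribute features.

    Illustrative sketch only: each row of `img_feats` is assumed to be
    matched with the same row of `attr_feats`, so the targets are the
    diagonal of the pairwise similarity matrix.
    """
    # L2-normalise so the dot product below is cosine similarity
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    att = attr_feats / np.linalg.norm(attr_feats, axis=1, keepdims=True)
    logits = img @ att.T / temperature           # (batch, batch) similarities

    # Softmax cross-entropy with diagonal (matched-pair) targets
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Under this formulation, correctly matched image-attribute pairs yield a low loss, while mismatched pairings are penalised, which is the sense in which such a term "enhances the correlation" between generated images and their input attributes.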
Related papers
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G$^2$Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- PrefGen: Preference Guided Image Generation with Relative Attributes [5.0741409008225755]
Deep generative models have the capacity to render high fidelity images of content like human faces.
We develop the PrefGen system, which allows users to control the relative attributes of generated images.
We demonstrate the success of this approach using a StyleGAN2 generator on the task of human face editing.
arXiv Detail & Related papers (2023-04-01T00:41:51Z)
- Attribute Controllable Beautiful Caucasian Face Generation by Aesthetics Driven Reinforcement Learning [21.329906392100884]
We build the techniques of reinforcement learning into the generator of EigenGAN.
The agent tries to figure out how to alter the semantic attributes of the generated human faces towards more preferable ones.
We present a new variant incorporating the ingredients emerging in the reinforcement learning communities in recent years.
arXiv Detail & Related papers (2022-08-09T03:04:10Z)
- Identity and Attribute Preserving Thumbnail Upscaling [93.38607559281601]
We consider the task of upscaling a low resolution thumbnail image of a person, to a higher resolution image, which preserves the person's identity and other attributes.
Our results indicate improvements in face similarity recognition and lookalike generation, as well as in the ability to generate higher-resolution images that preserve the identity, race, and other attributes of the input thumbnail.
arXiv Detail & Related papers (2021-05-30T19:32:27Z)
- Multimodal Face Synthesis from Visual Attributes [85.87796260802223]
We propose a novel generative adversarial network that simultaneously synthesizes identity preserving multimodal face images.
Multimodal stretch-in modules are introduced in the discriminator, which discriminates between real and fake images.
arXiv Detail & Related papers (2021-04-09T13:47:23Z)
- CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature [31.425326840578098]
We propose a novel GAN model which is designed to edit only the parts of a face pertinent to the target attributes.
CAFE identifies the facial regions to be transformed by considering both target attributes as well as complementary attributes.
arXiv Detail & Related papers (2020-11-24T05:21:03Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network [4.654937118111992]
The proposed approach is based on two generators and two discriminators that utilize the binary as well as the real representation of the attributes.
Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while keeping other facial details intact.
arXiv Detail & Related papers (2020-04-24T13:38:05Z)
- Facial Attribute Capsules for Noise Face Super Resolution [86.55076473929965]
Existing face super-resolution (SR) methods mainly assume the input image to be noise-free.
We propose a Facial Attribute Capsules Network (FACN) to deal with the problem of high-scale super-resolution of noisy face images.
Our method achieves superior hallucination results and outperforms the state of the art on very low-resolution (LR) noisy face image super-resolution.
arXiv Detail & Related papers (2020-02-16T06:22:28Z)
- MulGAN: Facial Attribute Editing by Exemplar [2.272764591035106]
Existing methods encode attribute-related information in images into a predefined region of the latent feature space by employing a pair of images with opposite attributes as input to train the model.
They suffer from three limitations: (1) the model must be trained on pairs of images with opposite attributes; (2) weak capability of editing multiple attributes by exemplars; and (3) poor quality of the generated images.
arXiv Detail & Related papers (2019-12-28T04:02:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.