MagGAN: High-Resolution Face Attribute Editing with Mask-Guided
Generative Adversarial Network
- URL: http://arxiv.org/abs/2010.01424v1
- Date: Sat, 3 Oct 2020 20:56:16 GMT
- Authors: Yi Wei, Zhe Gan, Wenbo Li, Siwei Lyu, Ming-Ching Chang, Lei Zhang,
Jianfeng Gao, Pengchuan Zhang
- Abstract summary: MagGAN learns to only edit the facial parts that are relevant to the desired attribute changes.
A novel mask-guided conditioning strategy is introduced to incorporate the influence region of each attribute change into the generator.
A multi-level patch-wise discriminator structure is proposed to scale our model for high-resolution ($1024 \times 1024$) face editing.
- Score: 145.4591079418917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Mask-guided Generative Adversarial Network (MagGAN) for
high-resolution face attribute editing, in which semantic facial masks from a
pre-trained face parser are used to guide the fine-grained image editing
process. With the introduction of a mask-guided reconstruction loss, MagGAN
learns to only edit the facial parts that are relevant to the desired attribute
changes, while preserving the attribute-irrelevant regions (e.g., hat, scarf
for modification "To Bald"). Further, a novel mask-guided conditioning strategy
is introduced to incorporate the influence region of each attribute change into
the generator. In addition, a multi-level patch-wise discriminator structure is
proposed to scale our model for high-resolution ($1024 \times 1024$) face
editing. Experiments on the CelebA benchmark show that the proposed method
significantly outperforms prior state-of-the-art approaches in terms of both
image quality and editing performance.
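The mask-guided reconstruction loss can be sketched as follows. This is an illustrative formulation, not the authors' implementation: it assumes the loss weights a pixel-wise L1 error by a binary mask (derived from the face parser) that is 1 on attribute-irrelevant regions, so the generator is penalized for changing pixels outside the attribute's influence region.

```python
# Illustrative sketch of a mask-guided reconstruction loss:
# pixels in the preserved region (keep_mask == 1) are penalized
# for changing, so edits stay local to the attribute's region.

def mask_guided_recon_loss(original, edited, keep_mask):
    """L1 reconstruction error restricted to attribute-irrelevant pixels.

    original, edited: 2D grids of pixel intensities (lists of lists).
    keep_mask: 2D grid, 1.0 where the region must be preserved, else 0.0.
    """
    total, count = 0.0, 0
    for row_o, row_e, row_m in zip(original, edited, keep_mask):
        for o, e, m in zip(row_o, row_e, row_m):
            total += m * abs(e - o)  # masked pixel-wise L1 term
            count += 1
    return total / count  # mean over all pixels

# An edit inside the editable (mask == 0) region costs nothing, while
# the same change in a preserved region incurs a penalty.
orig = [[0.5, 0.5], [0.5, 0.5]]
edit = [[0.9, 0.5], [0.5, 0.5]]  # only the top-left pixel changed
keep = [[0.0, 1.0], [1.0, 1.0]]  # top-left pixel is editable
print(mask_guided_recon_loss(orig, edit, keep))   # -> 0.0
keep_all = [[1.0, 1.0], [1.0, 1.0]]  # everything must be preserved
print(mask_guided_recon_loss(orig, edit, keep_all))  # -> 0.1
```

In practice such a loss would operate on image tensors and soft parser masks rather than nested lists, but the weighting principle is the same: the mask zeroes out the reconstruction penalty exactly where the attribute edit is allowed to act.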
Related papers
- Mitigating the Impact of Attribute Editing on Face Recognition [14.138965856511387]
We show that facial attribute editing using modern generative AI models can severely degrade automated face recognition systems.
We propose two novel techniques for local and global attribute editing.
arXiv Detail & Related papers (2024-03-12T22:03:19Z)
- Variance-insensitive and Target-preserving Mask Refinement for Interactive Image Segmentation [68.16510297109872]
Point-based interactive image segmentation can ease the burden of mask annotation in applications such as semantic segmentation and image editing.
We introduce a novel method, Variance-Insensitive and Target-Preserving Mask Refinement to enhance segmentation quality with fewer user inputs.
Experiments on GrabCut, Berkeley, SBD, and DAVIS datasets demonstrate our method's state-of-the-art performance in interactive image segmentation.
arXiv Detail & Related papers (2023-12-22T02:31:31Z)
- ManiCLIP: Multi-Attribute Face Manipulation from Text [104.30600573306991]
We present a novel multi-attribute face manipulation method based on textual descriptions.
Our method generates natural manipulated faces with minimal text-irrelevant attribute editing.
arXiv Detail & Related papers (2022-10-02T07:22:55Z)
- Image Inpainting by End-to-End Cascaded Refinement with Mask Awareness [66.55719330810547]
Inpainting arbitrary missing regions is challenging because learning valid features for various masked regions is nontrivial.
We propose a novel mask-aware inpainting solution that learns multi-scale features for missing regions in the encoding phase.
Our framework is validated both quantitatively and qualitatively via extensive experiments on three public datasets.
arXiv Detail & Related papers (2021-04-28T13:17:47Z)
- High Resolution Face Editing with Masked GAN Latent Code Optimization [0.0]
Face editing is a popular research topic in the computer vision community.
Recently proposed methods are based either on training a conditional encoder-decoder Generative Adversarial Network (GAN) end-to-end, or on defining an operation in the latent space of a pre-trained vanilla GAN generator.
We propose a GAN embedding optimization procedure with spatial and semantic constraints.
arXiv Detail & Related papers (2021-03-20T08:39:41Z)
- S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method successfully outperforms state-of-the-art methods on attribute manipulation by exploiting greater control of attribute intensity.
arXiv Detail & Related papers (2020-11-30T13:42:39Z)
- CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature [31.425326840578098]
We propose a novel GAN model which is designed to edit only the parts of a face pertinent to the target attributes.
CAFE identifies the facial regions to be transformed by considering both target attributes as well as complementary attributes.
arXiv Detail & Related papers (2020-11-24T05:21:03Z)
- PA-GAN: Progressive Attention Generative Adversarial Network for Facial Attribute Editing [67.94255549416548]
We propose a progressive attention GAN (PA-GAN) for facial attribute editing.
Our approach achieves correct attribute editing while preserving irrelevant details much better than state-of-the-art methods.
arXiv Detail & Related papers (2020-07-12T03:04:12Z)
- Reference-guided Face Component Editing [51.29105560090321]
We propose a novel framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable face component editing.
Specifically, r-FACE takes an image inpainting model as the backbone, utilizing reference images as conditions for controlling the shape of face components.
In order to encourage the framework to concentrate on the target face components, an example-guided attention module is designed to fuse attention features and the target face component features extracted from the reference image.
arXiv Detail & Related papers (2020-06-03T05:34:54Z)
- Exemplar-based Generative Facial Editing [2.272764591035106]
We propose a novel generative approach to exemplar-based facial editing in the form of region inpainting.
Experimental results demonstrate our method can produce diverse and personalized face editing results.
arXiv Detail & Related papers (2020-05-31T09:15:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.