Prominent Attribute Modification using Attribute Dependent Generative
Adversarial Network
- URL: http://arxiv.org/abs/2005.02122v1
- Date: Fri, 24 Apr 2020 13:38:05 GMT
- Title: Prominent Attribute Modification using Attribute Dependent Generative
Adversarial Network
- Authors: Naeem Ul Islam, Sungmin Lee, and Jaebyung Park
- Abstract summary: The proposed approach is based on two generators and two discriminators that utilize binary as well as real-valued representations of the attributes.
Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while preserving other facial details intact.
- Score: 4.654937118111992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modifying facial images with desired attributes is an important yet
challenging task in computer vision, which aims to modify single or multiple
attributes of a face image. Existing methods are either attribute-independent
approaches, where the modification is done in the latent representation, or
attribute-dependent approaches. The attribute-independent methods are limited
in performance as they require paired data for changing the desired attributes.
Moreover, the attribute-independent constraint may result in a loss of
information and, hence, failure to generate the required attributes in the face
image. In contrast, attribute-dependent approaches are effective, as they are
capable of modifying the required features while preserving the information in
the given image. However, attribute-dependent approaches are sensitive and
require careful model design to generate high-quality results. To address this
problem, we propose an attribute-dependent face modification approach. The
proposed approach is based on two generators and two discriminators that
utilize binary as well as real-valued representations of the attributes and, in
turn, generate high-quality attribute modification results. Experiments on the
CelebA dataset show that our method effectively performs multiple-attribute
editing while preserving other facial details intact.
Related papers
- Exploring Attribute Variations in Style-based GANs using Diffusion
Models [48.98081892627042]
We formulate the task of diverse attribute editing by modeling the multidimensional nature of attribute edits.
We capitalize on disentangled latent spaces of pretrained GANs and train a Denoising Diffusion Probabilistic Model (DDPM) to learn the latent distribution for diverse edits.
arXiv Detail & Related papers (2023-11-27T18:14:03Z)
- A Solution to Co-occurrence Bias: Attributes Disentanglement via Mutual Information Minimization for Pedestrian Attribute Recognition [10.821982414387525]
We show that current methods can struggle to generalize such fitted attribute interdependencies to scenes or identities outside the dataset distribution.
To make models robust in realistic scenes, we propose attributes-disentangled feature learning to ensure that recognizing one attribute does not rely on the existence of others.
arXiv Detail & Related papers (2023-07-28T01:34:55Z)
- ManiCLIP: Multi-Attribute Face Manipulation from Text [104.30600573306991]
We present a novel multi-attribute face manipulation method based on textual descriptions.
Our method generates natural manipulated faces with minimal text-irrelevant attribute editing.
arXiv Detail & Related papers (2022-10-02T07:22:55Z)
- Supervised Attribute Information Removal and Reconstruction for Image Manipulation [15.559224431459551]
We propose an Attribute Information Removal and Reconstruction (AIRR) network that prevents such information hiding.
We evaluate our approach on four diverse datasets with a variety of attributes including DeepFashion Synthesis, DeepFashion Fine-grained Attribute, CelebA and CelebA-HQ.
arXiv Detail & Related papers (2022-07-13T23:30:44Z)
- TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose a novel transformer-based representation for attribute evaluation (TransFA).
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z)
- Attributes Aware Face Generation with Generative Adversarial Networks [133.44359317633686]
We propose a novel attributes aware face image generator method with generative adversarial networks called AFGAN.
Three stacked generators generate $64 \times 64$, $128 \times 128$ and $256 \times 256$ resolution face images, respectively.
In addition, an image-attribute matching loss is proposed to enhance the correlation between the generated images and input attributes.
arXiv Detail & Related papers (2020-12-03T09:25:50Z)
- CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature [31.425326840578098]
We propose a novel GAN model which is designed to edit only the parts of a face pertinent to the target attributes.
CAFE identifies the facial regions to be transformed by considering both target attributes as well as complementary attributes.
arXiv Detail & Related papers (2020-11-24T05:21:03Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- MU-GAN: Facial Attribute Editing based on Multi-attention Mechanism [12.762892831902349]
We propose a Multi-attention U-Net-based Generative Adversarial Network (MU-GAN).
First, we replace a classic convolutional encoder-decoder with a symmetric U-Net-like structure in a generator.
Second, a self-attention mechanism is incorporated into convolutional layers for modeling long-range and multi-level dependencies.
arXiv Detail & Related papers (2020-09-09T09:25:04Z)
- Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition [102.45926816660665]
We propose Attribute Mix, a data augmentation strategy at attribute level to expand the fine-grained samples.
The principle lies in that attribute features are shared among fine-grained sub-categories, and can be seamlessly transferred among images.
arXiv Detail & Related papers (2020-04-06T14:06:47Z)
- MulGAN: Facial Attribute Editing by Exemplar [2.272764591035106]
Existing methods encode attribute-related information into a predefined region of the latent feature space by using pairs of images with opposite attributes as input to train the model.
They suffer from three limitations: (1) the model must be trained on pairs of images with opposite attributes; (2) weak capability of editing multiple attributes by exemplars; and (3) poor quality of the generated images.
arXiv Detail & Related papers (2019-12-28T04:02:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.