MulGAN: Facial Attribute Editing by Exemplar
- URL: http://arxiv.org/abs/1912.12396v1
- Date: Sat, 28 Dec 2019 04:02:15 GMT
- Title: MulGAN: Facial Attribute Editing by Exemplar
- Authors: Jingtao Guo, Zhenzhen Qian, Zuowei Zhou and Yi Liu
- Abstract summary: Existing methods encode attribute-related image information into a predefined region of the latent feature space by training on pairs of images with opposite attributes.
They suffer from three limitations: (1) the model must be trained on pairs of images with opposite attributes; (2) weak capability for editing multiple attributes by exemplar; and (3) poor quality of the generated images.
- Score: 2.272764591035106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies on face attribute editing by exemplars have achieved promising
results due to the increasing power of deep convolutional networks and
generative adversarial networks. These methods encode attribute-related image
information into a predefined region of the latent feature space by training
the model on pairs of images with opposite attributes; face attribute transfer
between the input image and the exemplar is then achieved by exchanging their
attribute-related latent feature regions. However, they suffer from three
limitations: (1) the model must be trained on pairs of images with opposite
attributes; (2) weak capability for editing multiple attributes by exemplar;
(3) poor quality of the generated images. Instead of imposing
opposite-attribute constraints on the input images so that attribute
information is encoded into the predefined region of the latent feature space,
in this work we apply the attribute-label constraint directly to that region.
Meanwhile, an attribute classification loss is employed to make the model
learn to extract the attribute-related information of each image into the
latent feature region predefined for the corresponding attribute, which
enables our method to transfer multiple attributes of the exemplar
simultaneously. In addition, a novel model structure is designed to enhance
attribute transfer by exemplar while improving the quality of the generated
images. Experiments on the CelebA dataset demonstrate that our model
overcomes the above three limitations in comparison with other methods.
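A minimal, hypothetical PyTorch sketch of the mechanism the abstract describes, not the authors' implementation: the encoder output reserves a fixed slice of channels per attribute, an attribute classification loss ties each slice to its label, and editing swaps the exemplar's attribute slices into the input's latent code. All names, channel counts, and layer choices below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MulGANSketch(nn.Module):
    """Toy stand-in for the paper's model: 13 CelebA-style attributes,
    each assigned a fixed 8-channel slice of the latent feature map."""

    def __init__(self, n_attrs: int = 13, attr_ch: int = 8, id_ch: int = 256):
        super().__init__()
        self.n_attrs, self.attr_ch = n_attrs, attr_ch
        feat_ch = n_attrs * attr_ch + id_ch
        # Shallow placeholder encoder/decoder; the paper uses deeper stacks.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, feat_ch, 4, 2, 1),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )
        # One logit per attribute, read off the pooled attribute region.
        self.attr_cls = nn.Linear(n_attrs * attr_ch, n_attrs)

    def split(self, feat):
        k = self.n_attrs * self.attr_ch
        return feat[:, :k], feat[:, k:]  # (attribute region, remainder)

    def forward(self, x, exemplar):
        attr_x, rest_x = self.split(self.enc(x))
        attr_e, _ = self.split(self.enc(exemplar))
        # Transfer: keep x's non-attribute region, adopt the exemplar's
        # attribute region, then decode the recombined latent code.
        edited = self.dec(torch.cat([attr_e, rest_x], dim=1))
        # Classification logits used for the attribute classification loss.
        logits = self.attr_cls(attr_x.mean(dim=(2, 3)))
        return edited, logits

model = MulGANSketch()
x, exemplar = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
edited, logits = model(x, exemplar)
labels = torch.randint(0, 2, (2, 13)).float()  # binary attribute labels
cls_loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
```

The full objective would also include at least an adversarial term (the model is a GAN); only the region-swap and classification mechanics are sketched here.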
Related papers
- SAT3D: Image-driven Semantic Attribute Transfer in 3D [31.087615253643975]
We propose an image-driven Semantic Attribute Transfer method in 3D (SAT3D) by editing semantic attributes from a reference image.
For guidance, we associate each attribute with a set of phrase-based descriptor groups, and develop a Quantitative Measurement Module (QMM).
We present our 3D-aware attribute transfer results across multiple domains and also conduct comparisons with classical 2D image editing methods.
arXiv Detail & Related papers (2024-08-03T04:41:46Z)
- Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale Fine-Grained Image Retrieval [65.43522019468976]
We propose attribute-aware hashing networks with self-consistency for generating attribute-aware hash codes.
We develop an encoder-decoder network trained with a reconstruction task to distill high-level attribute-specific vectors in an unsupervised manner.
Our models are equipped with a feature decorrelation constraint on these attribute vectors to strengthen their representational ability (a sketch of such a constraint follows this entry).
arXiv Detail & Related papers (2023-11-21T08:20:38Z)
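A minimal sketch of a feature decorrelation constraint of the kind mentioned above, assuming it penalizes off-diagonal correlations between attribute-specific vectors so that each vector captures a distinct attribute; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def decorrelation_loss(attr_vecs: torch.Tensor) -> torch.Tensor:
    """attr_vecs: (batch, n_attrs, dim) attribute-specific vectors."""
    v = attr_vecs - attr_vecs.mean(dim=0, keepdim=True)  # center over batch
    v = F.normalize(v, dim=2)                            # unit-length vectors
    # Average pairwise correlation between attribute slots, (n_attrs, n_attrs).
    corr = torch.einsum('bad,bcd->ac', v, v) / v.shape[0]
    off_diag = corr - torch.diag_embed(torch.diagonal(corr))
    return off_diag.pow(2).sum()  # push cross-attribute correlation to zero

loss = decorrelation_loss(torch.randn(32, 6, 128))
```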
- Leveraging Off-the-shelf Diffusion Model for Multi-attribute Fashion Image Manipulation [27.587905673112473]
Fashion attribute editing is a task that aims to convert the semantic attributes of a given fashion image while preserving the irrelevant regions.
Previous works typically employ conditional GANs where the generator explicitly learns the target attributes and directly executes the conversion.
We explore classifier-guided diffusion, which leverages an off-the-shelf diffusion model pretrained on general visual semantics such as ImageNet.
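As a rough illustration of classifier-guided diffusion, the sketch below shifts a reverse-process mean by the gradient of an attribute classifier's log-probability, in the style of standard classifier guidance; the function names, the multi-attribute treatment, and the guidance scale are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

@torch.enable_grad()
def guided_mean(x_t, t, mean, var, classifier, target, scale=1.0):
    """Shift the reverse-process mean toward samples whose attribute
    logits match `target` (one 0/1 entry per attribute)."""
    x = x_t.detach().requires_grad_(True)
    logits = classifier(x, t)  # noisy-image attribute classifier
    # Log-probability of the desired multi-attribute configuration.
    log_p = (target * F.logsigmoid(logits)
             + (1 - target) * F.logsigmoid(-logits)).sum()
    grad = torch.autograd.grad(log_p, x)[0]
    return mean + scale * var * grad  # guided mean for the next step

# Dummy classifier stub, only to shape-check the sketch.
clf = lambda x, t: x.mean(dim=(2, 3))[:, :3]
x = torch.randn(2, 3, 32, 32)
new_mean = guided_mean(x, torch.zeros(2), x, 0.01, clf, torch.ones(2, 3))
```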
arXiv Detail & Related papers (2022-10-12T02:21:18Z)
- Attribute Prototype Network for Any-Shot Learning [113.50220968583353]
We argue that an image representation with integrated attribute localization ability would be beneficial for any-shot, i.e. zero-shot and few-shot, image classification tasks.
We propose a novel representation learning framework that jointly learns global and local features using only class-level attributes.
arXiv Detail & Related papers (2022-04-04T02:25:40Z)
- Attributes Aware Face Generation with Generative Adversarial Networks [133.44359317633686]
We propose AFGAN, a novel attribute-aware face image generation method based on generative adversarial networks.
Three stacked generators generate $64 \times 64$, $128 \times 128$ and $256 \times 256$ resolution face images respectively.
In addition, an image-attribute matching loss is proposed to enhance the correlation between the generated images and input attributes (a sketch of such a loss follows this entry).
arXiv Detail & Related papers (2020-12-03T09:25:50Z)
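A hedged sketch of an image-attribute matching loss in the spirit of the AFGAN summary above, written here as a contrastive objective over matched image/attribute embeddings; AFGAN's actual loss may be formulated differently.

```python
import torch
import torch.nn.functional as F

def matching_loss(img_emb: torch.Tensor, attr_emb: torch.Tensor) -> torch.Tensor:
    """img_emb, attr_emb: (batch, dim); row i of each forms a matched pair."""
    sim = F.normalize(img_emb, dim=1) @ F.normalize(attr_emb, dim=1).T
    targets = torch.arange(sim.shape[0])  # the diagonal holds matched pairs
    # Matched image/attribute pairs should out-score all mismatched ones.
    return F.cross_entropy(sim * 10.0, targets)  # 10.0: assumed temperature

loss = matching_loss(torch.randn(8, 256), torch.randn(8, 256))
```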
- CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature [31.425326840578098]
We propose a novel GAN model which is designed to edit only the parts of a face pertinent to the target attributes.
CAFE identifies the facial regions to be transformed by considering both target attributes as well as complementary attributes.
arXiv Detail & Related papers (2020-11-24T05:21:03Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- MU-GAN: Facial Attribute Editing based on Multi-attention Mechanism [12.762892831902349]
We propose a Multi-attention U-Net-based Generative Adversarial Network (MU-GAN).
First, we replace a classic convolutional encoder-decoder with a symmetric U-Net-like structure in the generator.
Second, a self-attention mechanism is incorporated into the convolutional layers to model long-range and multi-level dependencies (a sketch of such a block follows this entry).
arXiv Detail & Related papers (2020-09-09T09:25:04Z)
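A minimal sketch of a SAGAN-style self-attention block of the kind described above for modeling long-range dependencies in convolutional feature maps; MU-GAN's exact variant may differ.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Self-attention over all spatial positions of a conv feature map."""

    def __init__(self, ch: int):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)  # query projection
        self.k = nn.Conv2d(ch, ch // 8, 1)  # key projection
        self.v = nn.Conv2d(ch, ch, 1)       # value projection
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.k(x).flatten(2)                  # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)       # (b, hw, hw) position weights
        v = self.v(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x               # residual connection

y = SelfAttention2d(64)(torch.randn(2, 64, 16, 16))
```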
- Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network [4.654937118111992]
The proposed approach is based on two generators and two discriminators that utilize both binary and real-valued representations of the attributes.
Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while preserving other facial details intact.
arXiv Detail & Related papers (2020-04-24T13:38:05Z)
- Attribute-based Regularization of Latent Spaces for Variational Auto-Encoders [79.68916470119743]
We present a novel method to structure the latent space of a Variational Auto-Encoder (VAE) to encode different continuous-valued attributes explicitly.
This is accomplished by using an attribute regularization loss which enforces a monotonic relationship between the attribute values and the latent code of the dimension along which the attribute is to be encoded.
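A minimal sketch of such a monotonicity-enforcing regularizer: within a batch, pairwise orderings of the chosen latent dimension are pushed to match the pairwise orderings of the attribute values. The tanh/sign form below follows the paper's description only loosely.

```python
import torch

def attr_reg_loss(z_vals: torch.Tensor, attrs: torch.Tensor,
                  delta: float = 1.0) -> torch.Tensor:
    """z_vals: (batch,) latent values along the regularized dimension;
    attrs: (batch,) corresponding continuous attribute values."""
    dz = z_vals[:, None] - z_vals[None, :]  # pairwise latent differences
    da = attrs[:, None] - attrs[None, :]    # pairwise attribute differences
    # Penalize pairs whose latent ordering disagrees with the attribute ordering.
    return torch.abs(torch.tanh(delta * dz) - torch.sign(da)).mean()

loss = attr_reg_loss(torch.randn(16), torch.rand(16))
```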
arXiv Detail & Related papers (2020-04-11T20:53:13Z)
- Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition [102.45926816660665]
We propose Attribute Mix, an attribute-level data augmentation strategy to expand fine-grained training samples.
The principle is that attribute features are shared among fine-grained sub-categories and can be seamlessly transferred among images (a sketch of such mixing follows this entry).
arXiv Detail & Related papers (2020-04-06T14:06:47Z)
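A hedged sketch of attribute-level mixing in the spirit of the Attribute Mix summary above: transplant a masked attribute region from one image into another and mix the labels in proportion to the pasted area. The paper's region selection is more involved; the fixed mask here is a stand-in.

```python
import torch

def attribute_mix(x1, y1, x2, y2, mask):
    """x1, x2: (c, h, w) images; y1, y2: (k,) one-hot labels;
    mask: (h, w) binary mask marking the region transplanted from x2."""
    mixed = x1 * (1 - mask) + x2 * mask       # paste the attribute region
    lam = mask.float().mean()                 # fraction of pasted pixels
    return mixed, (1 - lam) * y1 + lam * y2   # proportionally mixed label

x1, x2 = torch.randn(3, 64, 64), torch.randn(3, 64, 64)
y1, y2 = torch.eye(10)[3], torch.eye(10)[7]
mask = torch.zeros(64, 64)
mask[20:40, 20:40] = 1.0  # stand-in for a learned attribute region
mixed_img, mixed_label = attribute_mix(x1, y1, x2, y2, mask)
```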
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.