Shape-aware Generative Adversarial Networks for Attribute Transfer
- URL: http://arxiv.org/abs/2010.05259v1
- Date: Sun, 11 Oct 2020 14:52:32 GMT
- Title: Shape-aware Generative Adversarial Networks for Attribute Transfer
- Authors: Lei Luo, William Hsu, and Shangxian Wang
- Abstract summary: We introduce a shape-aware GAN model that is able to preserve shape when transferring attributes.
Compared to other state-of-the-art GAN-based image-to-image translation models, the proposed model generates more visually appealing results.
- Score: 10.786099674296986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have been successfully applied to
transfer visual attributes in many domains, including that of human face
images. This success is partly attributable to the fact that human faces have
similar shapes and that the positions of eyes, noses, and mouths are largely
consistent across different people. Attribute transfer is more challenging when
the source and target domains have different shapes. In this paper, we introduce a shape-aware
GAN model that is able to preserve shape when transferring attributes, and
propose its application to several real-world domains. Compared to other
state-of-the-art GAN-based image-to-image translation models, the proposed model
generates more visually appealing results while maintaining the
quality of results from transfer learning.
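The abstract does not spell out how shape is preserved, so the following is only a minimal sketch of one common way to make a translation GAN shape-aware: adding a shape-consistency penalty that compares an edge map of the translated image against that of the source. The Sobel-based shape proxy and the loss weight lambda_shape are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Cheap differentiable shape proxy: Sobel edge magnitude of a
    grayscale version of the image (N, C, H, W, values in [0, 1])."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def generator_loss(d_fake_logits, source, translated, lambda_shape=10.0):
    # Standard non-saturating adversarial term for the generator ...
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # ... plus a penalty tying the translated image's edge map to the
    # source's, discouraging the generator from deforming object shape
    # while it changes appearance attributes.
    shape = F.l1_loss(sobel_edges(translated), sobel_edges(source))
    return adv + lambda_shape * shape
```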
Related papers
- Face Identity-Aware Disentanglement in StyleGAN [15.753131748318335]
We introduce PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles face attributes from a person's identity.
Our experiments demonstrate that the attribute modifications performed by PluGeN4Faces are significantly less invasive to the remaining characteristics of the image than those of existing state-of-the-art models.
arXiv Detail & Related papers (2023-09-21T12:54:09Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- Polymorphic-GAN: Generating Aligned Samples across Multiple Domains with Learned Morph Maps [94.10535575563092]
We introduce a generative adversarial network that can simultaneously generate aligned image samples from multiple related domains.
We propose Polymorphic-GAN, which learns shared features across all domains together with a per-domain morph layer that morphs the shared features according to each domain (a minimal sketch of such a morph layer appears after this list).
arXiv Detail & Related papers (2022-06-06T21:03:02Z)
- Transferring Knowledge with Attention Distillation for Multi-Domain Image-to-Image Translation [28.272982411879845]
We show how gradient-based attentions can be used as knowledge to be conveyed in a teacher-student paradigm for image-to-image translation tasks.
It is also demonstrated how "pseudo"-attentions can be employed during training when teacher and student networks are trained on different domains (a minimal sketch of gradient-based attention distillation appears after this list).
arXiv Detail & Related papers (2021-08-17T06:47:04Z)
- Generating Furry Cars: Disentangling Object Shape & Appearance across Multiple Domains [46.55517346455773]
We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains.
We train a generative model that learns an intermediate distribution, which borrows a subset of properties from each domain.
This challenge requires an accurate disentanglement of object shape, appearance, and background from each domain.
arXiv Detail & Related papers (2021-04-05T17:59:15Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, whether guided by random noise or by example images, while using only the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim to transform an image from a fine-grained category into new synthesized images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
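Below is a minimal sketch of the per-domain morph-layer idea referenced in the Polymorphic-GAN entry above: a small module that predicts a domain-specific warp field and applies it to shared features. The layer sizes and the flow-based formulation are illustrative assumptions based only on the summary, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MorphLayer(nn.Module):
    """One instance per domain: warps shared features so their spatial
    layout matches that domain's shape."""
    def __init__(self, channels: int):
        super().__init__()
        # Predicts a 2-channel offset field (dx, dy) from shared features.
        self.flow = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, shared: torch.Tensor) -> torch.Tensor:
        n, _, h, w = shared.shape
        # Identity sampling grid in [-1, 1], offset by the predicted flow.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=shared.device),
            torch.linspace(-1, 1, w, device=shared.device),
            indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
        offset = self.flow(shared).permute(0, 2, 3, 1)  # (N, H, W, 2)
        return F.grid_sample(shared, grid + offset, align_corners=True)
```

A generator in this style would share its encoder across domains and route the shared features through the morph layer of the requested domain before decoding.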
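And a minimal sketch of the gradient-based attention distillation referenced in the attention-distillation entry above, using a Grad-CAM-style attention map; the paper's exact formulation may differ, so treat the construction and the loss weight as assumptions.

```python
import torch
import torch.nn.functional as F

def grad_attention(features: torch.Tensor, loss: torch.Tensor) -> torch.Tensor:
    """Gradient-weighted attention over a feature map (N, C, H, W):
    channel-averaged gradients act as importance weights, as in Grad-CAM."""
    grads = torch.autograd.grad(loss, features, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)      # (N, C, 1, 1)
    attn = F.relu((weights * features).sum(dim=1))      # (N, H, W)
    # Normalize per sample so teacher and student maps are comparable.
    peak = attn.flatten(1).max(dim=1).values.view(-1, 1, 1)
    return attn / (peak + 1e-8)

def attention_distill_loss(teacher_attn, student_attn, lambda_attn=1.0):
    # The student is pushed to attend where the (frozen) teacher attends.
    return lambda_attn * F.mse_loss(student_attn, teacher_attn.detach())
```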
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.