HumanGAN: A Generative Model of Human Images
- URL: http://arxiv.org/abs/2103.06902v1
- Date: Thu, 11 Mar 2021 19:00:38 GMT
- Title: HumanGAN: A Generative Model of Human Images
- Authors: Kripasindhu Sarkar, Lingjie Liu, Vladislav Golyanik, and Christian Theobalt
- Abstract summary: We present a generative model for images of dressed humans offering control over pose, local body part appearance and garment style.
As our model encodes part-based latent appearance vectors in a normalized pose-independent space and warps them to different poses, it preserves body and clothing appearance under varying posture.
- Score: 78.6284090004218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks achieve great performance in photorealistic
image synthesis in various domains, including human images. However, they
usually employ latent vectors that encode the sampled outputs globally. This
does not allow convenient control of semantically-relevant individual parts of
the image, and is not able to draw samples that only differ in partial aspects,
such as clothing style. We address these limitations and present a generative
model for images of dressed humans offering control over pose, local body part
appearance and garment style. This is the first method to solve various aspects
of human image generation such as global appearance sampling, pose transfer,
parts and garment transfer, and parts sampling jointly in a unified framework.
As our model encodes part-based latent appearance vectors in a normalized
pose-independent space and warps them to different poses, it preserves body and
clothing appearance under varying posture. Experiments show that our flexible
and general generative method outperforms task-specific baselines for
pose-conditioned image generation, pose transfer and part sampling in terms of
realism and output resolution.
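A minimal PyTorch-style sketch of this encode-then-warp idea follows; the module names, shapes, and the flow-field interface are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartAppearanceEncoder(nn.Module):
    """Encodes each body-part crop into its own latent appearance vector."""
    def __init__(self, num_parts=8, latent_dim=64):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(16, latent_dim))
            for _ in range(num_parts))

    def forward(self, part_crops):
        # part_crops: list of (B, 3, H, W) crops, one per body part
        return torch.stack(
            [enc(c) for enc, c in zip(self.encoders, part_crops)], dim=1)

def warp_to_pose(normalized_feat, flow):
    """Warp pose-independent features to a target pose with a dense flow field.
    normalized_feat: (B, C, H, W); flow: (B, H, W, 2) sampling grid in [-1, 1]."""
    return F.grid_sample(normalized_feat, flow, align_corners=False)

enc = PartAppearanceEncoder()
z = enc([torch.randn(2, 3, 64, 64) for _ in range(8)])  # (2, 8, 64)
```

Because each part's latent lives in the normalized space, swapping a single latent (e.g., the torso) changes only that garment while the pose and remaining parts stay fixed, which is what enables parts and garment transfer.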
Related papers
- From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation [19.096741614175524]
Parts2Whole is a novel framework designed for generating customized portraits from multiple reference images.
First, we develop a semantic-aware appearance encoder to retain the details of different human parts.
Second, our framework supports multi-image conditioned generation through a shared self-attention mechanism.
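A rough sketch of what such a shared self-attention step could look like; the single-head form and token shapes are assumptions (requires PyTorch 2.x for scaled_dot_product_attention), not the Parts2Whole code:

```python
import torch
import torch.nn.functional as F

def shared_self_attention(target_tokens, ref_tokens):
    """target_tokens: (B, Nt, C) from the image being generated;
    ref_tokens: (B, Nr, C) concatenated features of all reference images."""
    kv = torch.cat([target_tokens, ref_tokens], dim=1)  # references join keys/values
    return F.scaled_dot_product_attention(target_tokens, kv, kv)

out = shared_self_attention(torch.randn(1, 256, 64), torch.randn(1, 512, 64))
```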
arXiv Detail & Related papers (2024-04-23T17:56:08Z)
- InsetGAN for Full-Body Image Generation [90.71033704904629]
We propose a novel method to combine multiple pretrained GANs.
One GAN generates a global canvas (e.g., human body) and a set of specialized GANs, or insets, focus on different parts.
We demonstrate the setup by combining a full body GAN with a dedicated high-quality face GAN to produce plausible-looking humans.
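The inset idea can be illustrated with a toy paste-and-seam-loss sketch; the box coordinates and loss form are assumptions, not the InsetGAN implementation:

```python
import torch
import torch.nn.functional as F

def paste_inset(canvas, inset, box):
    """canvas: (B, 3, H, W); inset: (B, 3, h', w'); box = (top, left, h, w)."""
    t, l, h, w = box
    out = canvas.clone()
    out[:, :, t:t+h, l:l+w] = F.interpolate(
        inset, size=(h, w), mode='bilinear', align_corners=False)
    return out

def seam_loss(canvas, inset, box, border=4):
    """L1 difference in a thin band around the inset boundary, encouraging the
    body GAN and the face GAN to agree where their outputs meet."""
    t, l, h, w = box
    region = canvas[:, :, t:t+h, l:l+w]
    inset = F.interpolate(inset, size=(h, w), mode='bilinear', align_corners=False)
    mask = torch.ones_like(region)
    mask[:, :, border:h-border, border:w-border] = 0  # keep only the border band
    return (mask * (region - inset).abs()).mean()
```

In practice both generators' latents would be optimized against such a seam objective so the pasted face blends with the body canvas.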
arXiv Detail & Related papers (2022-03-14T17:01:46Z)
- PISE: Person Image Synthesis and Editing with Decoupled GAN [64.70360318367943]
We propose PISE, a novel two-stage generative model for Person Image Synthesis and Editing.
For human pose transfer, we first synthesize a human parsing map aligned with the target pose to represent the shape of clothing.
To decouple the shape and style of clothing, we propose joint global and local per-region encoding and normalization.
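A compact sketch of region-wise normalization under these assumptions (each parsing region receives its own style vector, broadcast as a per-region scale and shift); the names and shapes are illustrative, not PISE's API:

```python
import torch
import torch.nn as nn

class PerRegionNorm(nn.Module):
    """Normalizes features, then re-styles each parsing region independently."""
    def __init__(self, channels, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_gamma = nn.Linear(style_dim, channels)
        self.to_beta = nn.Linear(style_dim, channels)

    def forward(self, feat, region_styles, parsing):
        # feat: (B, C, H, W); region_styles: (B, R, style_dim)
        # parsing: (B, R, H, W) one-hot region masks
        x = self.norm(feat)
        gamma = self.to_gamma(region_styles)  # (B, R, C)
        beta = self.to_beta(region_styles)    # (B, R, C)
        # broadcast each region's scale/shift over its own mask
        gamma_map = torch.einsum('brc,brhw->bchw', gamma, parsing)
        beta_map = torch.einsum('brc,brhw->bchw', beta, parsing)
        return x * (1 + gamma_map) + beta_map
```

Keeping the parsing map (shape) separate from the per-region styles is what decouples clothing shape from clothing style.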
arXiv Detail & Related papers (2021-03-06T04:32:06Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
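A rough sketch of separate pose/appearance conditioning (assumed structure, not the StylePoseGAN release): the pose enters the generator as a spatial tensor while the appearance enters as a global code that modulates the features:

```python
import torch
import torch.nn as nn

class ConditionedGenerator(nn.Module):
    def __init__(self, pose_ch=3, style_dim=128, ch=64):
        super().__init__()
        self.from_pose = nn.Conv2d(pose_ch, ch, 3, padding=1)  # spatial pose input
        self.mod = nn.Linear(style_dim, ch)                    # appearance -> scale
        self.to_rgb = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, pose_map, appearance_code):
        x = self.from_pose(pose_map)
        scale = self.mod(appearance_code).unsqueeze(-1).unsqueeze(-1)
        return torch.tanh(self.to_rgb(x * (1 + scale)))  # pose fixed, style swappable
```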
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- Pose-Guided Human Animation from a Single Image in the Wild [83.86903892201656]
We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.
Existing pose transfer methods exhibit significant visual artifacts when applied to a novel scene.
We design a compositional neural network that predicts the silhouette, garment labels, and textures.
We are able to synthesize human animations that can preserve the identity and appearance of the person in a temporally coherent way without any fine-tuning of the network on the testing scene.
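An illustrative multi-head sketch (assumed, not the paper's network) of how one backbone can feed separate silhouette, garment-label, and texture heads whose outputs are composited over the scene:

```python
import torch
import torch.nn as nn

class CompositionalHuman(nn.Module):
    def __init__(self, pose_ch=3, ch=32, num_labels=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(pose_ch, ch, 3, padding=1), nn.ReLU())
        self.silhouette = nn.Conv2d(ch, 1, 1)       # foreground-mask logits
        self.labels = nn.Conv2d(ch, num_labels, 1)  # garment-label logits
        self.texture = nn.Conv2d(ch, 3, 1)          # RGB texture

    def forward(self, pose_map, background):
        h = self.backbone(pose_map)
        mask = torch.sigmoid(self.silhouette(h))
        rgb = torch.tanh(self.texture(h))
        # composite the predicted person over the scene background
        return mask * rgb + (1 - mask) * background, self.labels(h)
```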
arXiv Detail & Related papers (2020-12-07T15:38:29Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
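A schematic sketch of progressive coupling (an assumption about the idea, not the released APS code): a pose feature is refined in stages, with the appearance code re-modulating the features at each stage:

```python
import torch
import torch.nn as nn

class ProgressiveStylizer(nn.Module):
    def __init__(self, ch=64, style_dim=128, stages=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=1) for _ in range(stages))
        self.mods = nn.ModuleList(
            nn.Linear(style_dim, ch) for _ in range(stages))

    def forward(self, pose_feat, appearance_code):
        x = pose_feat  # (B, ch, H, W)
        for block, mod in zip(self.blocks, self.mods):
            s = mod(appearance_code).unsqueeze(-1).unsqueeze(-1)
            x = torch.relu(block(x)) * (1 + s)  # re-inject appearance each stage
        return x
```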
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.