Human Pose Transfer by Adaptive Hierarchical Deformation
- URL: http://arxiv.org/abs/2012.06940v1
- Date: Sun, 13 Dec 2020 01:49:26 GMT
- Title: Human Pose Transfer by Adaptive Hierarchical Deformation
- Authors: Jinsong Zhang, Xingzi Liu, Kun Li
- Abstract summary: We propose an adaptive human pose transfer network with two hierarchical deformation levels.
The first level generates human semantic parsing aligned with the target pose.
The second level generates the final textured person image in the target pose under this semantic guidance.
- Score: 24.70009597455219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human pose transfer, as a misaligned image generation task, is very
challenging. Existing methods cannot effectively utilize the input information
and often fail to preserve the style and shape of hair and clothes. In this
paper, we propose an adaptive human pose transfer network with two hierarchical
deformation levels. The first level generates human semantic parsing aligned
with the target pose, and the second level generates the final textured person
image in the target pose under this semantic guidance. To avoid the drawback of
vanilla convolution, which treats all pixels as valid information, we use
gated convolution at both levels to dynamically select the important
features and adaptively deform the image layer by layer. Our model has very few
parameters and converges quickly. Experimental results demonstrate that our
model achieves better performance, with more consistent hair, face, and clothes,
than state-of-the-art methods while using fewer parameters. Furthermore, our method
can be applied to clothing texture transfer.
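To make the mechanism concrete: a gated convolution pairs the usual feature convolution with a second convolution that predicts a per-pixel soft mask, so the network can suppress uninformative pixels instead of treating them all as valid. Below is a minimal PyTorch sketch of a generic gated convolution layer; the class name and hyperparameters are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a feature branch modulated by a learned soft mask.

    Unlike vanilla convolution, which weights every pixel equally, the
    sigmoid gate lets the network down-weight uninformative pixels.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        # Soft mask in (0, 1) selects which features to pass through.
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

# Example: gate a 64-channel feature map.
x = torch.randn(1, 64, 32, 32)
y = GatedConv2d(64, 64)(x)  # shape: (1, 64, 32, 32)
```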
Related papers
- Lifting by Image -- Leveraging Image Cues for Accurate 3D Human Pose Estimation [10.374944534302234]
"lifting from 2D pose" method has been the dominant approach to 3D Human Pose Estimation (3DHPE)
Rich semantic and texture information in images can contribute to a more accurate "lifting" procedure.
In this paper, we give new insight into the cause of poor generalization problems and the effectiveness of image features.
arXiv Detail & Related papers (2023-12-25T07:50:58Z)
- Pose Guided Human Image Synthesis with Partially Decoupled GAN [25.800174118151638]
Pose Guided Human Image Synthesis (PGHIS) is a challenging task of transforming a human image from the reference pose to a target pose.
We propose a method that decouples the human body into several parts to guide the synthesis of a realistic image of the person.
In addition, we design a multi-head attention-based module for PGHIS.
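Multi-head attention here is the standard transformer primitive. The sketch below shows generic PyTorch cross-attention between pose tokens and body-part tokens; the shapes, token roles, and dimensions are assumptions for illustration, not the PGHIS module itself:

```python
import torch
import torch.nn as nn

# Generic cross-attention between pose queries and body-part features.
# Shapes and roles are illustrative, not the paper's exact design.
attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)

pose_feat = torch.randn(1, 64, 256)  # 64 pose tokens (e.g., a flattened map)
part_feat = torch.randn(1, 6, 256)   # 6 body-part tokens (hair, face, ...)

# Each pose location attends over the body-part features.
out, weights = attn(query=pose_feat, key=part_feat, value=part_feat)
print(out.shape)      # torch.Size([1, 64, 256])
print(weights.shape)  # torch.Size([1, 64, 6])
```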
arXiv Detail & Related papers (2022-10-07T15:31:37Z)
- Neural Human Deformation Transfer [26.60034186410921]
We consider the problem of human deformation transfer, where the goal is to retarget poses between different characters.
We take a different approach and transform the identity of a character into a new identity without modifying the character's pose.
We show experimentally that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-09-03T15:51:30Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
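The warping idea can be illustrated generically: given per-pixel modulation parameters and a learned flow field, the parameters are resampled along the flow with torch.nn.functional.grid_sample. The sketch below is a simplified, hypothetical version of this operation, not the SAWN implementation:

```python
import torch
import torch.nn.functional as F

def warp_with_flow(params, flow):
    """Warp spatial modulation parameters by a dense flow field.

    params: (N, C, H, W) modulation maps (e.g., scale/shift for normalization).
    flow:   (N, 2, H, W) per-pixel offsets in pixels (dx, dy).
    """
    n, _, h, w = params.shape
    # Base sampling grid in [-1, 1] (grid_sample convention).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel offsets to normalized coordinates and add to the grid.
    norm_flow = torch.stack(
        (flow[:, 0] / ((w - 1) / 2), flow[:, 1] / ((h - 1) / 2)), dim=-1)
    return F.grid_sample(params, base + norm_flow, align_corners=True)

# Example: warp a 64-channel scale map with a small random flow.
scale = torch.randn(1, 64, 32, 32)
flow = torch.randn(1, 2, 32, 32)
warped = warp_with_flow(scale, flow)  # shape: (1, 64, 32, 32)
```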
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Learning Semantic Person Image Generation by Region-Adaptive Normalization [81.52223606284443]
We propose a new two-stage framework to handle the pose and appearance translation.
In the first stage, we predict the target semantic parsing maps to eliminate the difficulties of pose transfer.
In the second stage, we suggest a new person image generation method by incorporating the region-adaptive normalization.
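Region-adaptive normalization belongs to the same family as SPADE-style spatially-adaptive normalization: the scale and shift applied after normalization are predicted per pixel from the semantic parsing map. A minimal PyTorch sketch of this general pattern follows; the layer sizes, the choice of instance normalization, and all names are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class RegionAdaptiveNorm(nn.Module):
    """SPADE-style normalization: per-pixel scale/shift from a parsing map."""

    def __init__(self, feat_ch, parsing_ch, hidden_ch=128):
        super().__init__()
        # Parameter-free normalization of the image features.
        # (Instance norm is one possible choice; SPADE uses batch norm.)
        self.norm = nn.InstanceNorm2d(feat_ch, affine=False)
        # Predict spatial gamma/beta from the semantic parsing map.
        self.shared = nn.Sequential(
            nn.Conv2d(parsing_ch, hidden_ch, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden_ch, feat_ch, 3, padding=1)
        self.beta = nn.Conv2d(hidden_ch, feat_ch, 3, padding=1)

    def forward(self, feat, parsing):
        # Parsing map is assumed resized to the feature resolution.
        h = self.shared(parsing)
        return self.norm(feat) * (1 + self.gamma(h)) + self.beta(h)

# Example: modulate 64-channel features with a 20-class parsing map.
feat = torch.randn(1, 64, 32, 32)
parsing = torch.randn(1, 20, 32, 32)
out = RegionAdaptiveNorm(64, 20)(feat, parsing)  # shape: (1, 64, 32, 32)
```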
arXiv Detail & Related papers (2021-04-14T06:51:37Z)
- Progressive and Aligned Pose Attention Transfer for Person Image Generation [59.87492938953545]
This paper proposes a new generative adversarial network for pose transfer, i.e., transferring the pose of a given person to a target pose.
We use two types of blocks, namely the Pose-Attentional Transfer Block (PATB) and the Aligned Pose-Attentional Transfer Block (APATB).
We verify the efficacy of the model on the Market-1501 and DeepFashion datasets, using quantitative and qualitative measures.
arXiv Detail & Related papers (2021-03-22T07:24:57Z)
- PISE: Person Image Synthesis and Editing with Decoupled GAN [64.70360318367943]
We propose PISE, a novel two-stage generative model for Person Image Synthesis and Editing.
For human pose transfer, we first synthesize a human parsing map aligned with the target pose to represent the shape of clothing.
To decouple the shape and style of clothing, we propose joint global and local per-region encoding and normalization.
arXiv Detail & Related papers (2021-03-06T04:32:06Z)
- PoNA: Pose-guided Non-local Attention for Human Pose Transfer [105.14398322129024]
We propose a new human pose transfer method using a generative adversarial network (GAN) with simplified cascaded blocks.
Our model generates sharper and more realistic images with rich details, while having fewer parameters and faster speed.
arXiv Detail & Related papers (2020-12-13T12:38:29Z)
- Human Motion Transfer from Poses in the Wild [61.6016458288803]
We tackle the problem of human motion transfer, where we synthesize novel motion video for a target person that imitates the movement from a reference video.
It is a video-to-video translation task in which the estimated poses are used to bridge two domains.
We introduce a novel pose-to-video translation framework for generating high-quality videos that are temporally coherent even for in-the-wild pose sequences unseen during training.
arXiv Detail & Related papers (2020-04-07T05:59:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.