A 3D Mesh-based Lifting-and-Projection Network for Human Pose Transfer
- URL: http://arxiv.org/abs/2109.11719v1
- Date: Fri, 24 Sep 2021 03:03:02 GMT
- Title: A 3D Mesh-based Lifting-and-Projection Network for Human Pose Transfer
- Authors: Jinxiang Liu, Yangheng Zhao, Siheng Chen and Ya Zhang
- Abstract summary: We propose a lifting-and-projection framework to perform pose transfer in the 3D mesh space.
To leverage the human body shape prior, LPNet exploits the topological information of the body mesh.
To preserve texture details, ADCNet is introduced to enhance the feature produced by LPNet with the source foreground image.
- Score: 25.681557081096805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human pose transfer has typically been modeled as a 2D image-to-image
translation problem. This formulation ignores the human body shape prior in 3D
space and inevitably causes implausible artifacts, especially when facing
occlusion. To address this issue, we propose a lifting-and-projection framework
to perform pose transfer in the 3D mesh space. The core of our framework is a
foreground generation module, which consists of two novel networks: a
lifting-and-projection network (LPNet) and an appearance detail compensating
network (ADCNet). To leverage the human body shape prior, LPNet exploits the
topological information of the body mesh to learn an expressive visual
representation for the target person in the 3D mesh space. To preserve texture
details, ADCNet is further introduced to enhance the feature produced by LPNet
with the source foreground image. Such design of the foreground generation
module enables the model to better handle difficult cases such as those with
occlusions. Experiments on the iPER and Fashion datasets empirically
demonstrate that the proposed lifting-and-projection framework is effective and
outperforms the existing image-to-image-based and mesh-based methods on the
human pose transfer task in both self-transfer and cross-transfer settings.
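To make the described pipeline concrete, below is a minimal PyTorch sketch of a lifting-and-projection forward pass. Only the module names LPNet and ADCNet come from the abstract; the internals (grid-sampled per-vertex features as the "lifting" step, a plain graph convolution over the mesh adjacency, point scattering as the "projection" step, and a SPADE-style scale/shift modulation for appearance compensation) are assumptions made for illustration, not the authors' implementation.
```python
# Minimal sketch of the lifting-and-projection idea. "LPNet" and "ADCNet"
# are the paper's names; every internal detail below (layer sizes, the
# simple graph convolution, the scatter-based projection, the SPADE-like
# modulation) is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MeshGraphConv(nn.Module):
    """One graph convolution over the body mesh: each vertex mixes its own
    feature with its neighbours' (adjacency given as a dense,
    row-normalised (N, N) matrix for simplicity)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.lin_self = nn.Linear(in_ch, out_ch)
        self.lin_nbr = nn.Linear(in_ch, out_ch)

    def forward(self, x, adj):                     # x: (B, N, C)
        return F.relu(self.lin_self(x) + self.lin_nbr(adj @ x))


class LPNet(nn.Module):
    """Lift 2D source features onto mesh vertices, transform them with
    topology-aware convolutions, then project them to the target view."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.gconv1 = MeshGraphConv(feat_ch, feat_ch)
        self.gconv2 = MeshGraphConv(feat_ch, feat_ch)

    def lift(self, feat_2d, src_uv):
        # Sample one feature per vertex at its projected source-image
        # location; grid_sample expects coordinates in [-1, 1].
        grid = src_uv.unsqueeze(2)                 # (B, N, 1, 2)
        v = F.grid_sample(feat_2d, grid, align_corners=False)
        return v.squeeze(-1).transpose(1, 2)       # (B, N, C)

    def project(self, vert_feat, tgt_uv, hw):
        # Scatter per-vertex features onto the target-pose image plane;
        # a real implementation would rasterise the mesh faces instead.
        B, N, C = vert_feat.shape
        H, W = hw
        canvas = vert_feat.new_zeros(B, C, H * W)
        xy = ((tgt_uv + 1) / 2 * tgt_uv.new_tensor([W - 1, H - 1])).long()
        idx = (xy[..., 1] * W + xy[..., 0]).clamp(0, H * W - 1)
        canvas.scatter_(2, idx.unsqueeze(1).expand(B, C, N),
                        vert_feat.transpose(1, 2))
        return canvas.view(B, C, H, W)

    def forward(self, feat_2d, src_uv, tgt_uv, adj, hw):
        v = self.lift(feat_2d, src_uv)
        v = self.gconv2(self.gconv1(v, adj), adj)
        return self.project(v, tgt_uv, hw)


class ADCNet(nn.Module):
    """Compensate appearance detail: modulate LPNet's projected feature map
    with scale/shift maps predicted from the source foreground image."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.to_gamma = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.to_beta = nn.Conv2d(3, feat_ch, 3, padding=1)

    def forward(self, lp_feat, src_fg):
        src_fg = F.interpolate(src_fg, size=lp_feat.shape[-2:])
        return lp_feat * (1 + self.to_gamma(src_fg)) + self.to_beta(src_fg)


if __name__ == "__main__":
    # Toy run with 128 vertices (the SMPL body mesh commonly used in this
    # line of work has 6,890) and random UV coordinates in [-1, 1].
    B, C, N = 1, 64, 128
    lp, adc = LPNet(C), ADCNet(C)
    feat = lp(torch.randn(B, C, 32, 32),
              torch.rand(B, N, 2) * 2 - 1,   # source UVs
              torch.rand(B, N, 2) * 2 - 1,   # target UVs
              torch.eye(N), (16, 16))
    print(adc(feat, torch.rand(B, 3, 64, 64)).shape)  # (1, 64, 16, 16)
```
A faithful implementation would rasterise mesh faces during projection rather than scatter isolated vertices, so this should be read as a structural outline of the framework rather than a reproduction of the method.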
Related papers
- Unsupervised 3D Pose Transfer with Cross Consistency and Dual Reconstruction [50.94171353583328]
The goal of 3D pose transfer is to transfer the pose from the source mesh to the target mesh while preserving the identity information.
Deep learning-based methods have improved the efficiency and performance of 3D pose transfer.
We present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer.
arXiv Detail & Related papers (2022-11-18T15:09:56Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To deal with the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- 3D Pose Transfer with Correspondence Learning and Mesh Refinement [41.92922228475176]
3D pose transfer is one of the most challenging 3D generation tasks.
We propose a correspondence-refinement network to help the 3D pose transfer for both human and animal meshes.
arXiv Detail & Related papers (2021-09-30T11:49:03Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Pose-Guided High-Resolution Appearance Transfer via Progressive Training [65.92031716146865]
We propose a pose-guided appearance transfer network for transferring a given reference appearance to a target pose at unprecedented image resolution.
Our network utilizes dense local descriptors, together with local perceptual losses and local discriminators, to refine details.
Our model produces high-quality images, which can be further utilized in useful applications such as garment transfer between people.
arXiv Detail & Related papers (2020-08-27T03:18:44Z)
- SMPLpix: Neural Avatars from 3D Human Models [56.85115800735619]
We bridge the gap between classic rendering and the latest generative networks operating in pixel space.
We train a network that directly converts a sparse set of 3D mesh vertices into photorealistic images.
We show the advantage over conventional differentiable renderers both in terms of the level of photorealism and rendering efficiency.
arXiv Detail & Related papers (2020-08-16T10:22:00Z)
- Learning 3D Human Shape and Pose from Dense Body Parts [117.46290013548533]
We propose a Decompose-and-aggregate Network (DaNet) to learn 3D human shape and pose from dense correspondences of body parts.
Messages from local streams are aggregated to enhance the robustness of the rotation-based pose predictions.
Our method is validated on both indoor and real-world datasets including Human3.6M, UP3D, COCO, and 3DPW.
arXiv Detail & Related papers (2019-12-31T15:09:51Z)