Neural Human Deformation Transfer
- URL: http://arxiv.org/abs/2109.01588v1
- Date: Fri, 3 Sep 2021 15:51:30 GMT
- Title: Neural Human Deformation Transfer
- Authors: Jean Basset and Adnane Boukhayma and Stefanie Wuhrer and Franck Multon and Edmond Boyer
- Abstract summary: We consider the problem of human deformation transfer, where the goal is to retarget poses between different characters.
We take a different approach and transform the identity of a character into a new identity without modifying the character's pose.
We show experimentally that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
- Score: 26.60034186410921
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of human deformation transfer, where the goal is to
retarget poses between different characters. Traditional methods that tackle
this problem require a clear definition of the pose, and use this definition to
transfer poses between characters. In this work, we take a different approach
and transform the identity of a character into a new identity without modifying
the character's pose. This offers the advantage of not having to define
equivalences between 3D human poses, which is not straightforward as poses tend
to change depending on the identity of the character performing them, and as
their meaning is highly contextual. To achieve the deformation transfer, we
propose a neural encoder-decoder architecture where only identity information
is encoded and where the decoder is conditioned on the pose. We use pose
independent representations, such as isometry-invariant shape characteristics,
to represent identity features. Our model uses these features to supervise the
prediction of offsets from the deformed pose to the result of the transfer. We
show experimentally that our method outperforms state-of-the-art methods both
quantitatively and qualitatively, and generalises better to poses not seen
during training. We also introduce a fine-tuning step that yields
competitive results for extreme identities and makes it possible to transfer
simple clothing.
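The architecture described above, an encoder that sees only pose-independent identity features and a decoder conditioned on the posed source mesh that predicts per-vertex offsets, can be illustrated with a toy sketch. All names, dimensions, and weights below are hypothetical (untrained random parameters standing in for a learned model), not the authors' actual implementation:

```python
# Toy sketch of an identity-encoding, pose-conditioned decoder that
# predicts per-vertex offsets from the posed source mesh to the result.
import random

random.seed(0)
FEAT_DIM, LATENT_DIM, N_VERTS = 8, 4, 5  # illustrative toy sizes

def linear(x, w):
    """Plain matrix-vector product; w has shape (out_dim, len(x))."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

# Hypothetical random weights standing in for trained parameters.
W_enc = [[random.uniform(-1, 1) for _ in range(FEAT_DIM)]
         for _ in range(LATENT_DIM)]
W_dec = [[random.uniform(-1, 1) for _ in range(LATENT_DIM + 3)]
         for _ in range(3)]

def encode_identity(shape_features):
    """Embed only identity information, e.g. pose-independent
    isometry-invariant shape characteristics."""
    return linear(shape_features, W_enc)

def decode_offsets(identity_code, posed_vertices):
    """Decoder conditioned on the pose: for each vertex of the posed
    source mesh, predict a 3D offset toward the target identity."""
    return [linear(identity_code + list(v), W_dec) for v in posed_vertices]

def transfer(target_identity_features, source_posed_vertices):
    z = encode_identity(target_identity_features)
    offsets = decode_offsets(z, source_posed_vertices)
    # Result = posed source mesh + predicted per-vertex offsets.
    return [[vi + oi for vi, oi in zip(v, o)]
            for v, o in zip(source_posed_vertices, offsets)]

identity_feats = [0.1] * FEAT_DIM
posed_mesh = [[float(i), 0.0, 1.0] for i in range(N_VERTS)]
result = transfer(identity_feats, posed_mesh)
```

Supervising the offsets, rather than regressing absolute vertex positions, keeps the output anchored to the input pose, which is the key to transferring identity without redefining the pose.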
Related papers
- Neural Pose Representation Learning for Generating and Transferring Non-Rigid Object Poses [11.614034196935899]
We propose a novel method for learning representations of poses for 3D deformable objects.
It specializes in 1) disentangling pose information from the object's identity, 2) facilitating the learning of pose variations, and 3) transferring pose information to other object identities.
Based on these properties, our method enables the generation of 3D deformable objects with diversity in both identities and poses.
arXiv Detail & Related papers (2024-06-14T05:33:01Z) - Disentangling Identity and Pose for Facial Expression Recognition [54.50747989860957]
We propose an identity and pose disentangled facial expression recognition (IPD-FER) model to learn more discriminative feature representation.
For identity encoder, a well pre-trained face recognition model is utilized and fixed during training, which alleviates the restriction on specific expression training data.
By comparing the difference between synthesized neutral and expressional images of the same individual, the expression component is further disentangled from identity and pose.
arXiv Detail & Related papers (2022-08-17T06:48:13Z) - Skeleton-free Pose Transfer for Stylized 3D Characters [53.33996932633865]
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
We propose a novel pose transfer network that predicts the character skinning weights and deformation transformations jointly to articulate the target character to match the desired pose.
Our method is trained in a semi-supervised manner, absorbing all existing character data with paired/unpaired poses and stylized shapes.
arXiv Detail & Related papers (2022-07-28T20:05:57Z) - Pose-driven Attention-guided Image Generation for Person Re-Identification [39.605062525247135]
We propose an end-to-end pose-driven generative adversarial network to generate multiple poses of a person.
A semantic-consistency loss is proposed to preserve the semantic information of the person during pose transfer.
We show that by incorporating the proposed approach in a person re-identification framework, realistic pose transferred images and state-of-the-art re-identification results can be achieved.
arXiv Detail & Related papers (2021-04-28T14:02:24Z) - Pose Invariant Person Re-Identification using Robust Pose-transformation GAN [11.338815177557645]
Person re-identification (re-ID) aims to retrieve a person's images from an image gallery, given a single instance of the person of interest.
Despite several advancements, learning discriminative identity-sensitive and viewpoint invariant features for robust Person Re-identification is a major challenge owing to large pose variation of humans.
This paper proposes a re-ID pipeline that utilizes the image generation capability of Generative Adversarial Networks combined with pose regression and feature fusion to achieve pose invariant feature learning.
arXiv Detail & Related papers (2021-04-11T15:47:03Z) - Progressive and Aligned Pose Attention Transfer for Person Image Generation [59.87492938953545]
This paper proposes a new generative adversarial network for pose transfer, i.e., transferring the pose of a given person to a target pose.
We use two types of blocks, namely the Pose-Attentional Transfer Block (PATB) and the Aligned Pose-Attentional Transfer Block (APATB).
We verify the efficacy of the model on the Market-1501 and DeepFashion datasets, using quantitative and qualitative measures.
arXiv Detail & Related papers (2021-03-22T07:24:57Z) - PoNA: Pose-guided Non-local Attention for Human Pose Transfer [105.14398322129024]
We propose a new human pose transfer method using a generative adversarial network (GAN) with simplified cascaded blocks.
Our model generates sharper and more realistic images with rich details, while having fewer parameters and faster speed.
arXiv Detail & Related papers (2020-12-13T12:38:29Z) - Human Pose Transfer by Adaptive Hierarchical Deformation [24.70009597455219]
We propose an adaptive human pose transfer network with two hierarchical deformation levels.
The first level generates human semantic parsing aligned with the target pose.
The second level generates the final textured person image in the target pose with the semantic guidance.
arXiv Detail & Related papers (2020-12-13T01:49:26Z) - Pose-Guided Human Animation from a Single Image in the Wild [83.86903892201656]
We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.
Existing pose transfer methods exhibit significant visual artifacts when applied to a novel scene.
We design a compositional neural network that predicts the silhouette, garment labels, and textures.
We are able to synthesize human animations that can preserve the identity and appearance of the person in a temporally coherent way without any fine-tuning of the network on the testing scene.
arXiv Detail & Related papers (2020-12-07T15:38:29Z) - Neural Pose Transfer by Spatially Adaptive Instance Normalization [73.04483812364127]
We propose the first neural pose transfer model that solves pose transfer using recent techniques from image style transfer.
Our model does not require any correspondences between the source and target meshes.
Experiments show that the proposed model can effectively transfer deformation from source to target meshes, and has good generalization ability to deal with unseen identities or poses of meshes.
arXiv Detail & Related papers (2020-03-16T14:33:59Z)
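The last entry's spatially adaptive instance normalization builds on adaptive instance normalization (AdaIN), which re-scales content features to match the style features' channel statistics. A toy 1-D version of that operation (the paper applies a spatially adaptive variant on mesh features, not this exact form):

```python
# AdaIN: normalize content features, then re-scale them to the
# style features' mean and standard deviation.
import math

def adain(content, style, eps=1e-5):
    def stats(xs):
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, math.sqrt(var + eps)
    mc, sc = stats(content)   # content mean / std
    ms, ss = stats(style)     # style mean / std
    return [ss * (x - mc) / sc + ms for x in content]
```

After the transform, the output carries the content's relative structure but the style's first- and second-order statistics, which is how style (or here, identity/pose conditioning) is injected without explicit correspondences.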
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.