Hierarchical Neural Implicit Pose Network for Animation and Motion
Retargeting
- URL: http://arxiv.org/abs/2112.00958v1
- Date: Thu, 2 Dec 2021 03:25:46 GMT
- Title: Hierarchical Neural Implicit Pose Network for Animation and Motion
Retargeting
- Authors: Sourav Biswas, Kangxue Yin, Maria Shugrina, Sanja Fidler, Sameh Khamis
- Abstract summary: HIPNet is a neural implicit pose network trained on multiple subjects across many poses.
We employ a hierarchical skeleton-based representation to learn a signed distance function on a canonical unposed space.
We achieve state-of-the-art results on various single-subject and multi-subject benchmarks.
- Score: 66.69067601079706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present HIPNet, a neural implicit pose network trained on multiple
subjects across many poses. HIPNet can disentangle subject-specific details
from pose-specific details, effectively enabling us to retarget motion from one
subject to another or to animate between keyframes through latent space
interpolation. To this end, we employ a hierarchical skeleton-based
representation to learn a signed distance function on a canonical unposed
space. This joint-based decomposition enables us to represent subtle details
that are local to the space around each body joint. Unlike previous neural
implicit methods that require a ground-truth SDF for training, our model needs
only a posed skeleton and a point cloud, with no dependency on traditional
parametric models or skinning approaches. We achieve state-of-the-art results
on various single-subject and multi-subject benchmarks.
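The joint-based decomposition can be pictured as follows: each joint carries a local implicit function evaluated in that joint's canonical (unposed) frame, and the per-joint signed distances are fused into one global SDF. The sketch below is an illustrative assumption, not the authors' exact formulation: analytic sphere SDFs stand in for learned per-joint networks, and a soft-minimum stands in for the fusion step.

```python
import numpy as np

def world_to_joint(p, joint_rot, joint_pos):
    """Map a world-space point into a joint's canonical (unposed) frame."""
    return joint_rot.T @ (p - joint_pos)

def local_sdf(p_local, radius):
    """Stand-in for a learned per-joint SDF: a sphere in the canonical frame."""
    return np.linalg.norm(p_local) - radius

def fused_sdf(p, joints, beta=32.0):
    """Soft-minimum fusion of per-joint SDFs (a smooth union of parts)."""
    d = np.array([local_sdf(world_to_joint(p, R, t), r) for R, t, r in joints])
    # Smooth approximation of min(d): -logsumexp(-beta * d) / beta
    return -np.log(np.sum(np.exp(-beta * d))) / beta

# Two "joints": spheres of radius 0.5 centered at x=0 and x=1.
I = np.eye(3)
joints = [(I, np.array([0.0, 0.0, 0.0]), 0.5),
          (I, np.array([1.0, 0.0, 0.0]), 0.5)]
print(fused_sdf(np.array([0.0, 0.0, 0.0]), joints))  # negative: point is inside
```

Because each local SDF is evaluated in its joint's own frame, posing the skeleton only changes the rigid transforms fed to `world_to_joint`, which is what allows pose-specific detail to stay local to each joint.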
Related papers
- Self Supervised Networks for Learning Latent Space Representations of Human Body Scans and Motions [6.165163123577484]
This paper introduces self-supervised neural network models to tackle several fundamental problems in the field of 3D human body analysis and processing.
First, we propose VariShaPE, a novel architecture for the retrieval of latent space representations of body shapes and poses.
Second, we complement the estimation of latent codes with MoGeN, a framework that learns the geometry on the latent space itself.
arXiv Detail & Related papers (2024-11-05T19:59:40Z)
- Improving Video Violence Recognition with Human Interaction Learning on 3D Skeleton Point Clouds [88.87985219999764]
We develop a method for video violence recognition from a new perspective of skeleton points.
We first formulate 3D skeleton point clouds from human sequences extracted from videos.
We then perform interaction learning on these 3D skeleton point clouds.
arXiv Detail & Related papers (2023-08-26T12:55:18Z)
- Pose Modulated Avatars from Video [22.395774558845336]
We develop a two-branch neural network that is adaptive and explicit in the frequency domain.
The first branch is a graph neural network that models correlations among body parts locally.
The second branch combines these correlation features with a set of global frequencies and then modulates the feature encoding.
arXiv Detail & Related papers (2023-08-23T06:49:07Z)
- Pose-aware Attention Network for Flexible Motion Retargeting by Body Part [17.637846838499737]
Motion retargeting is a fundamental problem in computer graphics and computer vision.
Existing approaches usually have many strict requirements.
We propose a novel, flexible motion retargeting framework.
arXiv Detail & Related papers (2023-06-13T08:49:29Z)
- HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z)
- Neural Rendering of Humans in Novel View and Pose from Monocular Video [68.37767099240236]
We introduce a new method that generates photo-realistic humans under novel views and poses given a monocular video as input.
Our method significantly outperforms existing approaches under unseen poses and novel views given monocular videos as input.
arXiv Detail & Related papers (2022-04-04T03:09:20Z)
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
- Skeleton-Aware Networks for Deep Motion Retargeting [83.65593033474384]
We introduce a novel deep learning framework for data-driven motion retargeting between skeletons.
Our approach learns how to retarget without requiring any explicit pairing between the motions in the training set.
arXiv Detail & Related papers (2020-05-12T12:51:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.