H4D: Human 4D Modeling by Learning Neural Compositional Representation
- URL: http://arxiv.org/abs/2203.01247v1
- Date: Wed, 2 Mar 2022 17:10:49 GMT
- Title: H4D: Human 4D Modeling by Learning Neural Compositional Representation
- Authors: Boyan Jiang, Yinda Zhang, Xingkui Wei, Xiangyang Xue, Yanwei Fu
- Abstract summary: This work presents a novel framework that can effectively learn a compact and compositional representation for dynamic humans.
A simple yet effective linear motion model is proposed to provide a rough but regularized motion estimate.
Experiments demonstrate that our method is not only effective in recovering dynamic humans with accurate motion and detailed geometry, but also amenable to various 4D human-related tasks.
- Score: 75.34798886466311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the impressive results achieved by deep-learning-based 3D
reconstruction, techniques for directly learning to model 4D human captures
with detailed geometry have been less studied. This work presents a novel
framework that effectively learns a compact and compositional representation
for dynamic humans by exploiting the human body prior from the widely used
SMPL parametric model. In particular, our representation, named H4D, encodes a
dynamic 3D human over a temporal span into latent spaces for shape, initial
pose, motion, and auxiliary information. A simple yet effective linear motion
model is proposed to provide a rough but regularized motion estimate, followed
by per-frame compensation for pose and geometry details, with the residual
encoded in the auxiliary code. Technically, we introduce novel GRU-based
architectures to facilitate learning and improve representation capability.
Extensive experiments demonstrate that our method is not only effective in
recovering dynamic humans with accurate motion and detailed geometry, but also
amenable to various 4D human-related tasks, including motion retargeting,
motion completion, and future prediction.
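To make the compositional decomposition above concrete, here is a minimal PyTorch sketch of the pipeline the abstract describes: a linear motion model turns a motion code into a rough pose trajectory, a GRU conditioned on the auxiliary code adds per-frame residuals, and a mesh decoder maps shape and pose to vertices. All module names, code dimensions, and the linear stand-in for the SMPL decoder are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an H4D-style compositional representation.
# Hypothetical code sizes; a plain linear layer stands in for SMPL.
import torch
import torch.nn as nn

class H4DSketch(nn.Module):
    def __init__(self, n_verts=6890, shape_dim=10, pose_dim=72,
                 motion_dim=128, aux_dim=128, seq_len=30):
        super().__init__()
        self.seq_len = seq_len
        # Linear motion model: the motion code maps to a pose velocity,
        # giving a rough but regularized pose trajectory.
        self.motion_to_velocity = nn.Linear(motion_dim, pose_dim)
        # GRU-based compensation: the auxiliary code seeds a recurrent
        # network that predicts per-frame pose residuals.
        self.comp_gru = nn.GRU(pose_dim, aux_dim, batch_first=True)
        self.aux_to_h0 = nn.Linear(aux_dim, aux_dim)
        self.residual_head = nn.Linear(aux_dim, pose_dim)
        # Stand-in for the SMPL decoder: maps (shape, pose) to vertices.
        self.mesh_decoder = nn.Linear(shape_dim + pose_dim, n_verts * 3)

    def forward(self, shape_code, init_pose, motion_code, aux_code):
        B = shape_code.shape[0]
        t = torch.linspace(0.0, 1.0, self.seq_len, device=shape_code.device)
        # Rough motion: pose(t) = init_pose + t * velocity (linear model).
        velocity = self.motion_to_velocity(motion_code)           # (B, pose_dim)
        coarse = init_pose[:, None] + t[None, :, None] * velocity[:, None]
        # Per-frame compensation conditioned on the auxiliary code.
        h0 = self.aux_to_h0(aux_code)[None]                       # (1, B, aux_dim)
        feats, _ = self.comp_gru(coarse, h0)                      # (B, T, aux_dim)
        poses = coarse + self.residual_head(feats)                # refined poses
        # Decode a mesh per frame from shape + refined pose.
        shape_rep = shape_code[:, None].expand(-1, self.seq_len, -1)
        verts = self.mesh_decoder(torch.cat([shape_rep, poses], dim=-1))
        return verts.view(B, self.seq_len, -1, 3), poses
```

Calling the module with random codes of the stated sizes returns a (batch, 30, 6890, 3) vertex sequence together with the refined per-frame poses; in the actual method the mesh decoder would be SMPL and the codes would come from learned encoders.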
Related papers
- Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models [116.31344506738816]
We present a novel framework, Diffusion4D, for efficient and scalable 4D content generation.
We develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets.
Our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency.
arXiv Detail & Related papers (2024-05-26T17:47:34Z)
- PGAHum: Prior-Guided Geometry and Appearance Learning for High-Fidelity Animatable Human Reconstruction [9.231326291897817]
We introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction.
We thoroughly exploit 3D human priors in three key modules of PGAHum to achieve high-quality geometry reconstruction with intricate details and photorealistic view synthesis on unseen poses.
arXiv Detail & Related papers (2024-04-22T04:22:30Z)
- SurMo: Surface-based 4D Motion Modeling for Dynamic Human Rendering [45.51684124904457]
We propose a new 4D motion paradigm, SurMo, that models the temporal dynamics and human appearances in a unified framework.
SurMo consists of three components: surface-based motion encoding, which models 4D human motions with an efficient, compact surface-based triplane; physical motion decoding, which is designed to encourage physical motion learning; and 4D appearance modeling, which renders the motion triplanes into images via efficient surface-conditioned decoding.
arXiv Detail & Related papers (2024-04-01T16:34:27Z)
- Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance [25.346255905155424]
We introduce a methodology for human image animation by leveraging a 3D human parametric model within a latent diffusion framework.
By using the 3D human parametric model as motion guidance, we can perform parametric shape alignment of the human body between the reference image and the source video motion.
Our approach also exhibits superior generalization capabilities on the proposed in-the-wild dataset.
arXiv Detail & Related papers (2024-03-21T18:52:58Z)
- Unsupervised 3D Pose Estimation with Non-Rigid Structure-from-Motion Modeling [83.76377808476039]
We propose a new modeling method for human pose deformations and design an accompanying diffusion-based motion prior.
Inspired by the field of non-rigid structure-from-motion, we divide the task of reconstructing 3D human skeletons in motion into the estimation of a 3D reference skeleton and its per-frame deformation.
A mixed spatial-temporal NRSfMformer is used to simultaneously estimate the 3D reference skeleton and the per-frame skeleton deformation from a sequence of 2D observations.
arXiv Detail & Related papers (2023-08-18T16:41:57Z)
- LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling [69.56581851211841]
We propose LoRD, a novel Local 4D implicit Representation for Dynamic clothed humans.
Our key insight is to encourage the network to learn latent codes for a local, part-level representation.
LoRD has a strong capability for representing 4D humans and outperforms state-of-the-art methods in practical applications.
arXiv Detail & Related papers (2022-08-18T03:49:44Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Learning Compositional Representation for 4D Captures with Neural ODE [72.56606274691033]
We introduce a compositional representation for 4D captures that disentangles shape, initial state, and motion (see the sketch after this entry).
To model the motion, a neural Ordinary Differential Equation (ODE) is trained to update the initial state conditioned on the learned motion code.
A decoder takes the shape code and the updated pose code to reconstruct 4D captures at each time stamp.
arXiv Detail & Related papers (2021-03-15T10:55:55Z)
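As a companion illustration for the Neural ODE entry above, here is a minimal sketch of integrating an initial pose code forward in time with dynamics conditioned on a learned motion code, assuming the torchdiffeq package. The dimensions, module names, and dynamics network are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch of a Neural-ODE motion model for a compositional
# 4D representation. Code sizes and the dynamics net are hypothetical.
import torch
import torch.nn as nn
from torchdiffeq import odeint

class MotionODE(nn.Module):
    def __init__(self, state_dim=72, motion_dim=128):
        super().__init__()
        self.motion_code = None  # set before integration
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim + motion_dim, 256), nn.Tanh(),
            nn.Linear(256, state_dim))

    def forward(self, t, state):
        # d(state)/dt is conditioned on the learned motion code.
        return self.dynamics(torch.cat([state, self.motion_code], dim=-1))

ode = MotionODE()
init_state = torch.randn(4, 72)        # initial pose code (batch of 4)
ode.motion_code = torch.randn(4, 128)  # learned motion code
t = torch.linspace(0.0, 1.0, 30)       # 30 time stamps
states = odeint(ode, init_state, t)    # (30, 4, 72) pose codes over time
```

The decoder in the entry above would then take the shape code together with each integrated pose code to reconstruct the capture at every time stamp.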
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.