Pose Modulated Avatars from Video
- URL: http://arxiv.org/abs/2308.11951v3
- Date: Fri, 29 Sep 2023 15:03:09 GMT
- Title: Pose Modulated Avatars from Video
- Authors: Chunjin Song, Bastian Wandt, Helge Rhodin
- Abstract summary: We develop a two-branch neural network that is adaptive and explicit in the frequency domain.
The first branch is a graph neural network that models correlations among body parts locally.
The second branch combines these correlation features into a set of global frequencies and then modulates the feature encoding.
- Score: 22.395774558845336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is now possible to reconstruct dynamic human motion and shape from a
sparse set of cameras using Neural Radiance Fields (NeRF) driven by an
underlying skeleton. However, a challenge remains to model the deformation of
cloth and skin in relation to skeleton pose. Unlike existing avatar models that
are learned implicitly or rely on a proxy surface, our approach is motivated by
the observation that different poses necessitate unique frequency assignments.
Neglecting this distinction yields noisy artifacts in smooth areas or blurs
fine-grained texture and shape details in sharp regions. We develop a
two-branch neural network that is adaptive and explicit in the frequency
domain. The first branch is a graph neural network that models correlations
among body parts locally, taking skeleton pose as input. The second branch
combines these correlation features into a set of global frequencies and then
modulates the feature encoding. Our experiments demonstrate that our network
outperforms state-of-the-art methods in terms of preserving details and
generalization capabilities.
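To make the two-branch design concrete, here is a minimal PyTorch sketch. Only the high-level flow (a GNN over body parts producing correlation features, which are combined into global frequencies that modulate the feature encoding) comes from the abstract; every module name, shape, and layer choice (PoseGraphBranch, FrequencyModulatedEncoding, num_freqs, the adjacency normalization) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the two-branch idea described in the abstract. Only the
# high-level flow (GNN over body parts -> global frequencies -> modulated
# encoding) comes from the paper's summary; all names and shapes below are
# illustrative assumptions.
import torch
import torch.nn as nn


class PoseGraphBranch(nn.Module):
    """Branch 1: model correlations among body parts with one step of graph message passing."""

    def __init__(self, num_joints, in_dim=9, hid_dim=64, adjacency=None):
        super().__init__()
        # Skeleton connectivity; an identity fallback keeps the sketch runnable.
        adj = adjacency if adjacency is not None else torch.eye(num_joints)
        self.register_buffer("adj", adj / adj.sum(-1, keepdim=True))
        self.lin_self = nn.Linear(in_dim, hid_dim)
        self.lin_neigh = nn.Linear(in_dim, hid_dim)

    def forward(self, joint_pose):                 # (B, J, in_dim) per-joint pose features
        neigh = self.adj @ joint_pose              # aggregate neighboring joints
        return torch.relu(self.lin_self(joint_pose) + self.lin_neigh(neigh))


class FrequencyModulatedEncoding(nn.Module):
    """Branch 2: combine correlation features into global frequencies that scale a Fourier encoding."""

    def __init__(self, hid_dim=64, num_freqs=8):
        super().__init__()
        self.to_scale = nn.Linear(hid_dim, num_freqs)
        base = 2.0 ** torch.arange(num_freqs, dtype=torch.float32)  # NeRF-style bands
        self.register_buffer("base_freqs", base)

    def forward(self, x, corr_feats):              # x: (B, N, 3) query points
        pooled = corr_feats.mean(dim=1)            # fuse per-part features globally
        scale = 2.0 * torch.sigmoid(self.to_scale(pooled))          # pose-dependent
        freqs = self.base_freqs * scale            # (B, F) modulated frequencies
        phase = x.unsqueeze(-1) * freqs[:, None, None, :]           # (B, N, 3, F)
        return torch.cat([phase.sin(), phase.cos()], dim=-1).flatten(-2)


# Usage: encode query points conditioned on the skeleton pose.
pose = torch.randn(2, 24, 9)                       # e.g. 24 joints with rotation features
points = torch.rand(2, 1024, 3)
encoding = FrequencyModulatedEncoding()(points, PoseGraphBranch(24)(pose))
print(encoding.shape)                              # torch.Size([2, 1024, 48]) -> NeRF MLP input
```

The point mirrored here is that the frequency bands become pose-dependent rather than fixed, so smooth regions can be encoded with lower frequencies while sharp regions receive higher ones, matching the abstract's observation that different poses necessitate unique frequency assignments.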
Related papers
- GUNet: A Graph Convolutional Network United Diffusion Model for Stable and Diversity Pose Generation [7.0646249774097525]
We propose PoseDiffusion, a framework with GUNet as its main model.
It is the first generative framework based on a diffusion model and also contains a series of variants fine-tuned from a stable diffusion model.
Results show that PoseDiffusion outperforms existing SoTA algorithms in terms of stability and diversity of text-driven pose skeleton generation.
arXiv Detail & Related papers (2024-09-18T04:05:59Z)
- Representing Animatable Avatar via Factorized Neural Fields [39.95141875395775]
Per-frame rendering results can be factorized into a pose-independent component and a corresponding pose-dependent counterpart to facilitate frame consistency.
We achieve coherent preservation of coarse body contours across the entire input video.
Our network surpasses neural radiance field (NeRF) based state-of-the-art methods in preserving high-frequency details and ensuring consistent body contours.
arXiv Detail & Related papers (2024-06-02T06:45:38Z)
- DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks [12.132886846993108]
High-resolution models enable photo-realistic avatars but at the cost of requiring studio settings not available to end users.
Our goal is to create avatars directly from raw images without relying on expensive studio setups and surface tracking.
We introduce a three-stage method that induces two inductive biases to better disentangle pose-dependent deformation.
arXiv Detail & Related papers (2022-05-03T17:56:46Z)
- Neural Rendering of Humans in Novel View and Pose from Monocular Video [68.37767099240236]
We introduce a new method that generates photo-realistic humans under novel views and poses given a monocular video as input.
Our method significantly outperforms existing approaches under unseen poses and novel views given monocular videos as input.
arXiv Detail & Related papers (2022-04-04T03:09:20Z)
- Hierarchical Neural Implicit Pose Network for Animation and Motion Retargeting [66.69067601079706]
HIPNet is a neural implicit pose network trained on multiple subjects across many poses.
We employ a hierarchical skeleton-based representation to learn a signed distance function on a canonical unposed space; a minimal sketch of this pattern follows this entry.
We achieve state-of-the-art results on various single-subject and multi-subject benchmarks.
arXiv Detail & Related papers (2021-12-02T03:25:46Z)
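The exact HIPNet architecture is not given in this summary; the sketch below only illustrates the general pattern it names, a skeleton-based signed distance function in a canonical unposed space, assuming per-bone world-to-canonical transforms and a min-union of part SDFs. All names (PerBoneSDF, query_sdf, bone_transforms) are hypothetical.

```python
# Hypothetical illustration of a skeleton-based canonical SDF; not the
# paper's actual architecture.
import torch
import torch.nn as nn


class PerBoneSDF(nn.Module):
    """One small MLP predicting a signed distance in a bone's canonical frame."""

    def __init__(self, hid=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hid), nn.ReLU(), nn.Linear(hid, hid), nn.ReLU(), nn.Linear(hid, 1)
        )

    def forward(self, x_canonical):            # (N, 3) points in canonical space
        return self.mlp(x_canonical)           # (N, 1) signed distances


def query_sdf(points, bone_transforms, bone_sdfs):
    """Map posed-space points into each bone's unposed frame and union the part SDFs.

    points:          (N, 3) query points in posed (world) space
    bone_transforms: (B, 4, 4) world-to-canonical transform per bone
    bone_sdfs:       list of B PerBoneSDF modules
    """
    homo = torch.cat([points, torch.ones(len(points), 1)], dim=-1)  # (N, 4)
    dists = []
    for T, sdf in zip(bone_transforms, bone_sdfs):
        canonical = (homo @ T.T)[:, :3]        # transform into this bone's frame
        dists.append(sdf(canonical))
    # Union of parts: the closest surface wins (a soft-min would be smoother).
    return torch.min(torch.stack(dists, dim=0), dim=0).values      # (N, 1)


# Usage with identity transforms, just to show the shapes.
bones = torch.stack([torch.eye(4) for _ in range(4)])
sdfs = [PerBoneSDF() for _ in range(4)]
d = query_sdf(torch.rand(100, 3), bones, sdfs)  # (100, 1)
```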
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes, which improve the deformation quality in the joint regions; a minimal sketch of this pattern follows this entry.
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
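As a rough illustration of the blend-shape idea named above (not the paper's actual rig-and-skin pipeline), the sketch below adds MLP-predicted, pose-dependent corrective offsets to a plain linear-blend-skinning result; NeuralBlendShapes, pose_mesh, and all shapes are assumptions.

```python
# Hypothetical sketch: an MLP predicts pose-dependent corrective offsets that
# are added to a standard linear-blend-skinning (LBS) result near the joints.
import torch
import torch.nn as nn


class NeuralBlendShapes(nn.Module):
    """Predict per-vertex corrective offsets from joint rotations."""

    def __init__(self, num_joints=24, num_verts=6890, hid=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_joints * 9, hid), nn.ReLU(), nn.Linear(hid, num_verts * 3)
        )
        self.num_verts = num_verts

    def forward(self, joint_rotations):        # (B, J, 3, 3) rotation matrices
        flat = joint_rotations.flatten(1)      # (B, J * 9)
        return self.mlp(flat).view(-1, self.num_verts, 3)  # corrective offsets


def pose_mesh(rest_verts, skin_weights, joint_transforms, blend_shapes, joint_rotations):
    """LBS plus learned corrections: the offsets fix artifacts in joint regions.

    rest_verts:       (V, 3)        skin_weights: (V, J)
    joint_transforms: (B, J, 4, 4) posed transform per joint
    """
    corrected = rest_verts + blend_shapes(joint_rotations)              # (B, V, 3)
    homo = torch.cat([corrected, torch.ones_like(corrected[..., :1])], dim=-1)
    per_joint = torch.einsum("bjmn,bvn->bjvm", joint_transforms, homo)  # (B, J, V, 4)
    skinned = torch.einsum("vj,bjvm->bvm", skin_weights, per_joint)     # (B, V, 4)
    return skinned[..., :3]
```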
- SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present three key innovations: First, we deform surface elements based on a human body model.
Second, we address the limitations of existing neural surface elements by regressing local geometry from local features; a minimal sketch of this pattern follows this entry.
arXiv Detail & Related papers (2021-04-15T17:59:39Z)
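The following sketch illustrates only the local-surface-element pattern the summary names, regressing each element's local geometry from a local feature anchored to a posed body model; it is not the SCALE decoder, and LocalElementDecoder and its shapes are assumptions.

```python
# Hypothetical sketch of articulated local surface elements: each element
# carries a local feature, and an MLP regresses local point offsets from it.
import torch
import torch.nn as nn


class LocalElementDecoder(nn.Module):
    """Regress K local points per surface element from its local feature."""

    def __init__(self, feat_dim=32, points_per_elem=16):
        super().__init__()
        self.k = points_per_elem
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 3 * points_per_elem)
        )

    def forward(self, elem_feats, elem_centers):
        # elem_feats:   (E, feat_dim) local feature per element
        # elem_centers: (E, 3) element positions on a posed body model
        offsets = self.mlp(elem_feats).view(-1, self.k, 3)   # local geometry
        # Deform with the body model: offsets ride on the posed element centers.
        return elem_centers[:, None, :] + offsets            # (E, K, 3) clothed points


# Usage: 500 elements, each decoded into 16 local points.
decoder = LocalElementDecoder()
pts = decoder(torch.randn(500, 32), torch.rand(500, 3))      # (500, 16, 3)
```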
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.