AutoAvatar: Autoregressive Neural Fields for Dynamic Avatar Modeling
- URL: http://arxiv.org/abs/2203.13817v1
- Date: Fri, 25 Mar 2022 17:59:59 GMT
- Title: AutoAvatar: Autoregressive Neural Fields for Dynamic Avatar Modeling
- Authors: Ziqian Bai, Timur Bagautdinov, Javier Romero, Michael Zollhöfer,
Ping Tan, Shunsuke Saito
- Abstract summary: We exploit autoregressive modeling to capture dynamic effects, such as soft-tissue deformations.
We introduce the notion of articulated observer points, which relate implicit states to the explicit surface of a parametric human body model.
Our approach outperforms the state of the art, achieving plausible dynamic deformations even for unseen motions.
- Score: 38.9663410820652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural fields such as implicit surfaces have recently enabled avatar modeling
from raw scans without explicit temporal correspondences. In this work, we
exploit autoregressive modeling to further extend this notion to capture
dynamic effects, such as soft-tissue deformations. Although autoregressive
models are naturally capable of handling dynamics, it is non-trivial to apply
them to implicit representations, as explicit state decoding is infeasible due
to prohibitive memory requirements. In this work, for the first time, we enable
autoregressive modeling of implicit avatars. To reduce the memory bottleneck
and efficiently model dynamic implicit surfaces, we introduce the notion of
articulated observer points, which relate implicit states to the explicit
surface of a parametric human body model. We demonstrate that encoding implicit
surfaces as a set of height fields defined on articulated observer points leads
to significantly better generalization compared to a latent representation. The
experiments show that our approach outperforms the state of the art, achieving
plausible dynamic deformations even for unseen motions.
https://zqbai-jeremy.github.io/autoavatar
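The abstract's core idea, feeding previously predicted implicit states back into the model as height fields defined on articulated observer points rather than decoding an explicit volume, can be sketched as follows. This is a minimal NumPy toy with a randomly initialized linear map standing in for the trained neural field; the names (`predict_offsets`, `rollout`) and all dimensions are illustrative assumptions, not the authors' API.

```python
# Toy sketch of autoregressive rollout over height fields on observer points.
# A random linear layer stands in for the trained neural field; in the paper
# the model is learned from raw scans without temporal correspondences.
import numpy as np

rng = np.random.default_rng(0)

N_POINTS = 64   # observer points sampled on the posed body surface
HISTORY = 2     # number of past implicit states the model conditions on
POSE_DIM = 8    # toy pose parameterization (e.g. joint angles)

# Stand-in "network": one linear map from (past height fields, pose)
# to the next height field defined on the observer points.
W = rng.normal(0, 0.01, size=(N_POINTS, HISTORY * N_POINTS + POSE_DIM))

def predict_offsets(history, pose):
    """One autoregressive step: past height fields + pose -> next height field.

    `history` is a (HISTORY, N_POINTS) array of signed offsets along the
    observer points' normals; this compact encoding of the implicit surface
    is what avoids the memory cost of an explicit volumetric decode.
    """
    x = np.concatenate([history.ravel(), pose])
    return W @ x

def rollout(poses, init_history):
    """Feed each prediction back in as state for the next frame."""
    history = init_history.copy()
    states = []
    for pose in poses:
        h_next = predict_offsets(history, pose)
        states.append(h_next)
        # Shift the history window: drop the oldest state, append the newest.
        history = np.vstack([history[1:], h_next])
    return np.stack(states)

poses = rng.normal(size=(5, POSE_DIM))   # a short motion clip
init = np.zeros((HISTORY, N_POINTS))     # rest-state height fields
track = rollout(poses, init)
print(track.shape)  # (5, 64): one height field per frame
```

Because each step consumes its own previous outputs, dynamic effects such as soft-tissue lag emerge from the rollout rather than from per-frame pose alone, which is the property the paper exploits.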
Related papers
- DENSER: 3D Gaussians Splatting for Scene Reconstruction of Dynamic Urban Environments [0.0]
We propose DENSER, a framework that significantly enhances the representation of dynamic objects.
The proposed approach outperforms state-of-the-art methods by a wide margin.
arXiv Detail & Related papers (2024-09-16T07:11:58Z)
- Degrees of Freedom Matter: Inferring Dynamics from Point Trajectories [28.701879490459675]
We aim to learn an implicit motion field parameterized by a neural network to predict the movement of novel points within the same domain.
We exploit the intrinsic regularization provided by SIREN, and modify the input layer to produce a temporally smooth motion field.
Our experiments assess the model's performance in predicting unseen point trajectories and its application in temporal mesh alignment with deformation.
arXiv Detail & Related papers (2024-06-05T21:02:10Z) - EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via
Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z)
- Dynamic Point Fields [30.029872787758705]
We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
arXiv Detail & Related papers (2023-04-05T17:52:37Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photo telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation [135.10594078615952]
We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects.
Our benchmark contains over 17,000 action trajectories with six types of plush toys and 78 variants.
Our model achieves the best performance in geometry, correspondence, and dynamics predictions.
arXiv Detail & Related papers (2022-03-14T04:56:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.