A Deep Emulator for Secondary Motion of 3D Characters
- URL: http://arxiv.org/abs/2103.01261v2
- Date: Wed, 3 Mar 2021 06:35:43 GMT
- Title: A Deep Emulator for Secondary Motion of 3D Characters
- Authors: Mianlun Zheng, Yi Zhou, Duygu Ceylan, Jernej Barbic
- Abstract summary: We present a learning-based approach to enhance skinning-based animations of 3D characters with vivid secondary motion effects.
We design a neural network that encodes each local patch of a character simulation mesh.
Being a local method, our network generalizes to arbitrarily shaped 3D character meshes at test time.
- Score: 24.308088194689415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast and lightweight methods for animating 3D characters are desirable in
various applications such as computer games. We present a learning-based
approach to enhance skinning-based animations of 3D characters with vivid
secondary motion effects. We design a neural network that encodes each local
patch of a character simulation mesh where the edges implicitly encode the
internal forces between the neighboring vertices. The network emulates the
ordinary differential equations of the character dynamics, predicting new
vertex positions from the current accelerations, velocities and positions.
Being a local method, our network is independent of the mesh topology and
generalizes to arbitrarily shaped 3D character meshes at test time. We further
represent per-vertex constraints and material properties such as stiffness,
enabling us to easily adjust the dynamics in different parts of the mesh. We
evaluate our method on various character meshes and complex motion sequences.
Our method can be over 30 times more efficient than ground-truth physically
based simulation, and outperforms alternative solutions that provide fast
approximations.
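The abstract describes the mechanism only at a high level: a per-vertex network whose edge connections stand in for internal forces, emulating the usual second-order dynamics ODE (M x'' = f_int(x) + f_ext) by predicting new positions from current positions, velocities, and accelerations. Below is a minimal PyTorch-style sketch of that idea, not the authors' implementation; the class name, feature sizes, single round of edge message passing, and residual update are all our assumptions.

```python
import torch
import torch.nn as nn

class LocalPatchEmulator(nn.Module):
    """Hypothetical per-vertex emulator of secondary dynamics on a mesh.

    Each vertex carries position, velocity, and acceleration plus a
    per-vertex stiffness and a constraint flag (1 = driven by the primary
    skinned motion, 0 = free to oscillate). One round of edge messages
    aggregates the one-ring neighborhood, standing in for the internal
    forces between neighboring vertices; an MLP then predicts a residual
    offset to the current position.
    """

    def __init__(self, hidden: int = 128):
        super().__init__()
        # 3 (pos) + 3 (vel) + 3 (acc) + 1 (stiffness) + 1 (constraint flag)
        self.vertex_enc = nn.Sequential(nn.Linear(11, hidden), nn.ReLU())
        self.edge_enc = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 3)
        )

    def forward(self, pos, vel, acc, stiffness, constrained, edges, skinned_pos):
        # pos, vel, acc: (V, 3); stiffness, constrained: (V, 1); edges: (E, 2)
        # skinned_pos: (V, 3) next-frame positions from the primary skinning.
        h = self.vertex_enc(
            torch.cat([pos, vel, acc, stiffness, constrained], dim=-1)
        )
        src, dst = edges[:, 0], edges[:, 1]
        msg = self.edge_enc(torch.cat([h[src], h[dst]], dim=-1))  # (E, hidden)
        agg = torch.zeros_like(h).index_add_(0, dst, msg)         # sum over one-ring
        next_pos = pos + self.decoder(torch.cat([h, agg], dim=-1))
        # Constrained vertices simply follow the primary skinned motion.
        return torch.where(constrained.bool(), skinned_pos, next_pos)
```

Because every operation here is per-vertex or per-edge with shared weights, nothing depends on vertex count or mesh topology, which is what lets a local method of this kind generalize to arbitrarily shaped character meshes; the paper's actual architecture, features, and integration scheme may differ.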
Related papers
- DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation [10.250715657201363]
We introduce DreamMesh4D, a novel framework combining mesh representations with a geometric skinning technique to generate high-quality 4D objects from a monocular video.
Our method is compatible with modern graphics pipelines, showcasing its potential in the 3D gaming and film industries.
arXiv Detail & Related papers (2024-10-09T10:41:08Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn, from videos, personalized animatable 3D head avatars that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Fast-SNARF: A Fast Deformer for Articulated Neural Fields [92.68788512596254]
We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space.
Fast-SNARF is a drop-in replacement for our previous work, SNARF, and significantly improves its computational efficiency.
Because learning deformation maps is a crucial component of many 3D human avatar methods, we believe this work represents a significant step towards the practical creation of 3D virtual humans.
arXiv Detail & Related papers (2022-11-28T17:55:34Z)
- NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos [82.74918564737591]
We present a method for learning 3D geometry and physics parameters of a dynamic scene from only a monocular RGB video input.
Experiments show that our method achieves superior mesh and video reconstruction of dynamic scenes compared to competing Neural Field approaches.
arXiv Detail & Related papers (2022-10-22T04:57:55Z)
- N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks [69.94313958962165]
We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction.
We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space (see the generic graph-convolution sketch after this list).
Our approach can handle complex cloth meshes with up to 100K triangles and scenes with various objects corresponding to SMPL humans, non-SMPL humans, or rigid bodies.
arXiv Detail & Related papers (2021-12-13T03:13:11Z)
- Neural Monocular 3D Human Motion Capture with Physical Awareness [76.55971509794598]
We present a new trainable system for physically plausible markerless 3D human motion capture.
Unlike most neural methods for human motion capture, our approach is aware of physical and environmental constraints.
It produces smooth and physically principled 3D motions at an interactive frame rate in a wide variety of challenging scenes.
arXiv Detail & Related papers (2021-05-03T17:57:07Z)
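The N-Cloth entry above names graph convolution over cloth and object meshes as its key building block. As a point of reference only, here is a minimal, hypothetical PyTorch sketch of one such mesh graph-convolution layer; it is a generic formulation, not N-Cloth's actual architecture, and the class name and normalization choice are our assumptions.

```python
import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    """Generic graph-convolution layer over a triangle mesh (a sketch).

    Each vertex feature is averaged with its one-ring neighbors and
    passed through a shared linear map, embedding the mesh in a latent
    space where deformations are smoother and less non-linear.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, edges):
        # x: (V, in_dim) vertex features; edges: (E, 2) directed mesh edges
        src, dst = edges[:, 0], edges[:, 1]
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])  # sum of neighbors
        deg = x.new_zeros(x.shape[0], 1).index_add_(
            0, dst, x.new_ones(edges.shape[0], 1)
        )
        x = (x + agg) / (deg + 1.0)  # mean over the vertex and its one-ring
        return torch.relu(self.linear(x))
```

Stacking a few such layers yields per-vertex latent codes for both the cloth and the obstacle meshes; the paper's exact layer type, normalization, and latent dimensionality are not specified in the summary above.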
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.