Dynamic Point Fields
- URL: http://arxiv.org/abs/2304.02626v2
- Date: Thu, 6 Apr 2023 06:13:52 GMT
- Title: Dynamic Point Fields
- Authors: Sergey Prokudin, Qianli Ma, Maxime Raafat, Julien Valentin, Siyu Tang
- Abstract summary: We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
- Score: 30.029872787758705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed significant progress in the field of neural
surface reconstruction. While extensive focus has been placed on volumetric and
implicit approaches, a number of works have shown that explicit graphics
primitives such as point clouds can significantly reduce computational
complexity, without sacrificing the reconstructed surface quality. However,
less emphasis has been put on modeling dynamic surfaces with point primitives.
In this work, we present a dynamic point field model that combines the
representational benefits of explicit point-based graphics with implicit
deformation networks to allow efficient modeling of non-rigid 3D surfaces.
Using explicit surface primitives also allows us to easily incorporate
well-established constraints such as as-isometric-as-possible regularisation.
While learning this deformation model is prone to local optima when trained in
a fully unsupervised manner, we propose to additionally leverage semantic
information such as keypoint dynamics to guide the deformation learning. We
demonstrate our model with an example application of creating an expressive
animatable human avatar from a collection of 3D scans. Here, previous methods
mostly rely on variants of the linear blend skinning paradigm, which
fundamentally limits the expressivity of such models when dealing with complex
cloth appearances such as long skirts. We show the advantages of our dynamic
point field framework in terms of its representational power, learning
efficiency, and robustness to out-of-distribution novel poses.
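The as-isometric-as-possible constraint mentioned in the abstract can be understood as a penalty on changes in pairwise distances between surface points under the deformation. The sketch below is a hypothetical illustration of such a regulariser on sampled point pairs, not the paper's actual implementation:

```python
import numpy as np

def isometric_loss(points, deformed, edges):
    """Hypothetical as-isometric-as-possible regulariser: penalise changes
    in pairwise distances along sampled edges under the deformation."""
    d0 = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)
    d1 = np.linalg.norm(deformed[edges[:, 0]] - deformed[edges[:, 1]], axis=1)
    return float(np.mean((d1 - d0) ** 2))

# A rigid rotation preserves all pairwise distances, so the loss is near zero;
# a non-isometric deformation (e.g. uniform scaling) is penalised.
pts = np.random.default_rng(0).normal(size=(100, 3))
edges = np.stack([np.arange(99), np.arange(1, 100)], axis=1)
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
rigid = pts @ R.T
print(isometric_loss(pts, rigid, edges))      # ~0 for a rigid motion
print(isometric_loss(pts, pts * 1.5, edges))  # clearly positive for scaling
```

In practice a term like this would be added to the reconstruction objective, trading off surface fidelity against near-isometric deformation.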
Related papers
- Object Dynamics Modeling with Hierarchical Point Cloud-based Representations [1.3934784414106087]
We propose a novel U-net architecture based on continuous point convolution which embeds information from 3D coordinates.
Bottleneck layers in the downsampled point clouds lead to better long-range interaction modeling.
Our approach significantly improves the state-of-the-art, especially in scenarios that require accurate gravity or collision reasoning.
arXiv Detail & Related papers (2024-04-09T06:10:15Z) - DynoSurf: Neural Deformation-based Temporally Consistent Dynamic Surface Reconstruction [93.18586302123633]
This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without correspondence.
We propose DynoSurf, an unsupervised learning framework integrating a template surface representation with a learnable deformation field.
Experimental results demonstrate the significant superiority of DynoSurf over current state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T08:58:48Z) - Neural Point-based Volumetric Avatar: Surface-guided Neural Points for
Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z) - Leveraging Equivariant Features for Absolute Pose Regression [9.30597356471664]
We show that a translation and rotation equivariant Convolutional Neural Network directly induces representations of camera motions into the feature space.
We then show that this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations.
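The translation-equivariance property this summary relies on can be demonstrated concretely: convolving a shifted input gives the shifted output of convolving the original. The minimal sketch below uses a circular 2D cross-correlation (chosen so equivariance holds exactly at the borders); it is a generic illustration, not the paper's network:

```python
import numpy as np

def circ_conv(img, kernel):
    """Circular 2D cross-correlation: exactly equivariant to circular shifts."""
    out = np.zeros_like(img)
    kh, kw = kernel.shape
    for di in range(kh):
        for dj in range(kw):
            out += kernel[di, dj] * np.roll(np.roll(img, -di, axis=0), -dj, axis=1)
    return out

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))

# Shifting the input then convolving equals convolving then shifting the output.
shift_then_conv = circ_conv(np.roll(img, 2, axis=1), kernel)
conv_then_shift = np.roll(circ_conv(img, kernel), 2, axis=1)
print(np.allclose(shift_then_conv, conv_then_shift))  # True
```

It is exactly this commutation between input transformations and feature-map transformations that lets such networks treat camera motions as structured operations in feature space.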
arXiv Detail & Related papers (2022-04-05T12:44:20Z) - Animatable Implicit Neural Representations for Creating Realistic
Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
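The linear blend skinning algorithm referenced here (and in the main paper's discussion of its limitations) blends per-bone rigid transforms with per-vertex weights. A generic sketch of the standard formulation, not this paper's specific deformation field:

```python
import numpy as np

def lbs(vertices, weights, rotations, translations):
    """Standard linear blend skinning: each vertex is a weighted combination
    of per-bone rigid transforms applied to it."""
    # vertices: (V, 3); weights: (V, B); rotations: (B, 3, 3); translations: (B, 3)
    per_bone = np.einsum('bij,vj->vbi', rotations, vertices) + translations[None, :, :]
    return np.einsum('vb,vbi->vi', weights, per_bone)

# Two bones: identity and a 90-degree rotation about the z-axis.
R = np.stack([np.eye(3),
              np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])])
t = np.zeros((2, 3))
v = np.array([[1., 0., 0.]])

print(lbs(v, np.array([[1.0, 0.0]]), R, t))  # follows bone 0: [[1. 0. 0.]]
print(lbs(v, np.array([[0.5, 0.5]]), R, t))  # blended halfway between bones
```

Because the output is a linear blend of rigid transforms, loose garments such as long skirts, whose motion is not well explained by any nearby bone, fall outside what this model can express, which motivates the learned deformation fields discussed above.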
arXiv Detail & Related papers (2022-03-15T17:56:59Z) - Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
arXiv Detail & Related papers (2021-09-30T17:43:06Z) - Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z) - Neural Actor: Neural Free-view Synthesis of Human Actors with Pose
Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z) - Learning Visible Connectivity Dynamics for Cloth Smoothing [17.24004979796887]
We propose to learn a particle-based dynamics model from a partial point cloud observation.
To overcome the challenges of partial observability, we infer which visible points are connected on the underlying cloth mesh.
We show that our method greatly outperforms previous state-of-the-art model-based and model-free reinforcement learning methods in simulation.
arXiv Detail & Related papers (2021-05-21T15:03:29Z) - SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local
Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
Key innovations include: first, we deform surface elements based on a human body model; second, we address the limitations of existing neural surface elements by regressing local geometry from local features.
arXiv Detail & Related papers (2021-04-15T17:59:39Z) - SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for
Parametric Humans [15.83525220631304]
We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion.
At the core of our method there are three key contributions that enable us to model highly realistic dynamics.
arXiv Detail & Related papers (2020-04-01T10:35:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.