Dynamic Point Fields
- URL: http://arxiv.org/abs/2304.02626v2
- Date: Thu, 6 Apr 2023 06:13:52 GMT
- Title: Dynamic Point Fields
- Authors: Sergey Prokudin, Qianli Ma, Maxime Raafat, Julien Valentin, Siyu Tang
- Abstract summary: We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed significant progress in the field of neural surface reconstruction. While extensive attention has been devoted to volumetric and implicit approaches, a number of works have shown that explicit graphics primitives such as point clouds can significantly reduce computational complexity without sacrificing reconstructed surface quality. However,
less emphasis has been put on modeling dynamic surfaces with point primitives.
In this work, we present a dynamic point field model that combines the
representational benefits of explicit point-based graphics with implicit
deformation networks to allow efficient modeling of non-rigid 3D surfaces.
Using explicit surface primitives also allows us to easily incorporate
well-established constraints such as as-isometric-as-possible regularisation (sketched below).
While learning this deformation model is prone to local optima when trained in
a fully unsupervised manner, we propose to additionally leverage semantic
information such as keypoint dynamics to guide the deformation learning. We
demonstrate our model with an example application of creating an expressive
animatable human avatar from a collection of 3D scans. Here, previous methods
mostly rely on variants of the linear blend skinning paradigm, which
fundamentally limits the expressivity of such models when dealing with complex
cloth appearances such as long skirts. We show the advantages of our dynamic
point field framework in terms of its representational power, learning
efficiency, and robustness to out-of-distribution novel poses.
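
To make the core idea concrete: the model pairs an explicit point set with an implicit deformation network, i.e., a coordinate MLP that maps each canonical point (conditioned on a per-frame latent code) to its deformed position. The sketch below is a minimal illustration under assumed design choices (layer widths, Softplus activations, a simple concatenated latent code); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PointDeformationField(nn.Module):
    """Minimal implicit deformation network: maps a canonical 3D point and a
    per-frame latent code to its deformed position. Layer sizes, activations,
    and the conditioning scheme are illustrative assumptions."""

    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),  # per-point displacement
        )

    def forward(self, points: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) canonical point cloud; z: (latent_dim,) frame code
        cond = z.unsqueeze(0).expand(points.shape[0], -1)
        return points + self.mlp(torch.cat([points, cond], dim=-1))
```

Because the primitives stay explicit, the deformed points can be splatted or meshed directly, while the MLP supplies a continuous field that can be queried at arbitrary surface samples.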
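The as-isometric-as-possible constraint mentioned in the abstract is commonly written as a penalty on changes in pairwise distances between neighbouring points under the deformation, and the keypoint guidance reduces to an L2 term on tracked keypoints. The following is one standard formulation, with the neighbourhood construction left as an assumption:

```python
import torch

def isometric_loss(src: torch.Tensor, deformed: torch.Tensor,
                   nbr_idx: torch.Tensor) -> torch.Tensor:
    """Penalise change of edge lengths between each point and its
    precomputed neighbours (one as-isometric-as-possible formulation).
    src, deformed: (N, 3); nbr_idx: (N, K) neighbour indices."""
    d_src = (src.unsqueeze(1) - src[nbr_idx]).norm(dim=-1)            # (N, K)
    d_def = (deformed.unsqueeze(1) - deformed[nbr_idx]).norm(dim=-1)  # (N, K)
    return ((d_src - d_def) ** 2).mean()

def keypoint_loss(deformed_kps: torch.Tensor,
                  target_kps: torch.Tensor) -> torch.Tensor:
    """Semantic guidance: deformed keypoints should track observed ones."""
    return ((deformed_kps - target_kps) ** 2).sum(dim=-1).mean()
```

A full objective would combine these with a data term (e.g., Chamfer distance to the target scan), with per-term weights chosen for the application.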
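For contrast, the linear blend skinning paradigm that the abstract argues against transforms each vertex by a fixed, weighted blend of bone transforms, v' = Σ_k w_k (R_k v + t_k); because the weights are tied to the body skeleton, garments such as long skirts that move far from any bone are poorly modelled. A generic sketch, not any specific body model's implementation:

```python
import torch

def linear_blend_skinning(verts: torch.Tensor, weights: torch.Tensor,
                          rots: torch.Tensor, trans: torch.Tensor) -> torch.Tensor:
    """Generic LBS: v' = sum_k w_k (R_k v + t_k).
    verts: (N, 3), weights: (N, K), rots: (K, 3, 3), trans: (K, 3)."""
    # Transform every vertex by every bone: (K, N, 3)
    per_bone = torch.einsum('kij,nj->kni', rots, verts) + trans.unsqueeze(1)
    # Blend per-bone results with per-vertex skinning weights: (N, 3)
    return torch.einsum('nk,kni->ni', weights, per_bone)
```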
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward the desired domain.
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
- MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion [118.74385965694694]
We present Motion DUSt3R (MonST3R), a novel geometry-first approach that directly estimates per-timestep geometry from dynamic scenes.
By simply estimating a pointmap for each timestep, we can effectively adapt DUSt3R's representation, previously used only for static scenes, to dynamic scenes.
We show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics.
arXiv Detail & Related papers (2024-10-04T18:00:07Z)
- VortSDF: 3D Modeling with Centroidal Voronoi Tesselation on Signed Distance Field [5.573454319150408]
We introduce a volumetric optimization framework that combines explicit SDF fields with a shallow color network, in order to estimate 3D shape properties over tetrahedral grids.
Experimental results with Chamfer statistics (a metric sketched after this list) validate the approach, with unprecedented reconstruction quality on scenarios such as objects, open scenes, and humans.
arXiv Detail & Related papers (2024-07-29T09:46:39Z)
- Object Dynamics Modeling with Hierarchical Point Cloud-based Representations [1.3934784414106087]
We propose a novel U-net architecture based on continuous point convolution which embeds information from 3D coordinates.
Bottleneck layers in the downsampled point clouds lead to better long-range interaction modeling.
Our approach significantly improves the state-of-the-art, especially in scenarios that require accurate gravity or collision reasoning.
arXiv Detail & Related papers (2024-04-09T06:10:15Z)
- DynoSurf: Neural Deformation-based Temporally Consistent Dynamic Surface Reconstruction [93.18586302123633]
This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without correspondence.
We propose DynoSurf, an unsupervised learning framework integrating a template surface representation with a learnable deformation field.
Experimental results demonstrate the significant superiority of DynoSurf over current state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T08:58:48Z)
- Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
arXiv Detail & Related papers (2021-09-30T17:43:06Z)
- Learning Visible Connectivity Dynamics for Cloth Smoothing [17.24004979796887]
We propose to learn a particle-based dynamics model from a partial point cloud observation.
To overcome the challenges of partial observability, we infer which visible points are connected on the underlying cloth mesh.
We show that our method greatly outperforms previous state-of-the-art model-based and model-free reinforcement learning methods in simulation.
arXiv Detail & Related papers (2021-05-21T15:03:29Z)
- SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present three key innovations: First, we deform surface elements based on a human body model.
Second, we address the limitations of existing neural surface elements by regressing local geometry from local features.
arXiv Detail & Related papers (2021-04-15T17:59:39Z)
- SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans [15.83525220631304]
We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion.
At the core of our method there are three key contributions that enable us to model highly realistic dynamics.
arXiv Detail & Related papers (2020-04-01T10:35:06Z)
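
Several entries above evaluate with Chamfer statistics (e.g., VortSDF). For reference, the symmetric Chamfer distance between two point sets averages nearest-neighbour distances in both directions; the squared-distance convention below is one common variant:

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3),
    using squared Euclidean nearest-neighbour distances (one common convention)."""
    d = torch.cdist(a, b) ** 2          # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```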
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.