DynoSurf: Neural Deformation-based Temporally Consistent Dynamic Surface Reconstruction
- URL: http://arxiv.org/abs/2403.11586v2
- Date: Mon, 22 Jul 2024 12:16:22 GMT
- Title: DynoSurf: Neural Deformation-based Temporally Consistent Dynamic Surface Reconstruction
- Authors: Yuxin Yao, Siyu Ren, Junhui Hou, Zhi Deng, Juyong Zhang, Wenping Wang
- Abstract summary: This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without correspondence.
We propose DynoSurf, an unsupervised learning framework integrating a template surface representation with a learnable deformation field.
Experimental results demonstrate the significant superiority of DynoSurf over current state-of-the-art approaches.
- Score: 93.18586302123633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without correspondence. To address this challenging task, we propose DynoSurf, an unsupervised learning framework integrating a template surface representation with a learnable deformation field. Specifically, we design a coarse-to-fine strategy for learning the template surface based on the deformable tetrahedron representation. Furthermore, we propose a learnable deformation representation based on the learnable control points and blending weights, which can deform the template surface non-rigidly while maintaining the consistency of the local shape. Experimental results demonstrate the significant superiority of DynoSurf over current state-of-the-art approaches, showcasing its potential as a powerful tool for dynamic mesh reconstruction. The code is publicly available at https://github.com/yaoyx689/DynoSurf.
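The deformation representation described in the abstract (learnable control points with blending weights that deform the template non-rigidly while keeping local shapes coherent) can be illustrated with a minimal sketch. The Gaussian-style weighting and per-control-point rigid motions below are illustrative assumptions, not the paper's exact formulation; see the released code at the GitHub link above for the real one.

```python
# Hedged sketch of a control-point + blending-weight deformation, in the
# spirit of DynoSurf's deformation representation. All names, the Gaussian
# weighting, and the per-control-point rigid transforms are assumptions.
import torch

def blend_weights(verts, ctrl_pts, sigma=0.1):
    """Soft blending weights from template vertices to control points.

    verts:    (V, 3) template vertices
    ctrl_pts: (K, 3) learnable control-point positions
    Returns:  (V, K) weights summing to 1 per vertex (assumed Gaussian
              falloff; the paper may use a different, learned weighting).
    """
    d2 = torch.cdist(verts, ctrl_pts) ** 2          # (V, K) squared distances
    return torch.softmax(-d2 / (2 * sigma**2), dim=-1)

def deform(verts, ctrl_pts, rotations, translations, sigma=0.1):
    """Non-rigid deformation: each control point carries a rigid motion
    (R_k, t_k); vertices blend these motions, so nearby points move
    coherently (local shape consistency).

    rotations:    (K, 3, 3) per-control-point rotation matrices
    translations: (K, 3)    per-control-point translations
    """
    w = blend_weights(verts, ctrl_pts, sigma)                  # (V, K)
    local = verts[:, None, :] - ctrl_pts[None, :, :]           # (V, K, 3)
    moved = torch.einsum('kij,vkj->vki', rotations, local)     # rotate about c_k
    moved = moved + ctrl_pts[None] + translations[None]        # translate back
    return (w[..., None] * moved).sum(dim=1)                   # (V, 3)

# Toy usage: identity transforms leave the template unchanged.
V, K = 128, 8
verts, ctrl = torch.rand(V, 3), torch.rand(K, 3)
R = torch.eye(3).expand(K, 3, 3).clone()
t = torch.zeros(K, 3)
assert torch.allclose(deform(verts, ctrl, R, t), verts, atol=1e-5)
```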
Related papers
- DynamicSurf: Dynamic Neural RGB-D Surface Reconstruction with an Optimizable Feature Grid [7.702806654565181]
DynamicSurf is a model-free neural implicit surface reconstruction method for high-fidelity 3D modelling of non-rigid surfaces from monocular RGB-D video.
We learn a neural deformation field that maps a canonical representation of the surface geometry to the current frame.
We demonstrate it can optimize sequences of varying frames with a $6\times$ speedup over pure MLP-based approaches.
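A canonical-to-frame neural deformation field like the one this summary describes can be sketched as an MLP mapping a canonical 3D point plus a frame time to its deformed position. Layer sizes and the scalar time input below are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of a canonical-to-frame deformation field.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_canonical, t):
        # Predict an offset so the field starts near the identity mapping.
        inp = torch.cat([x_canonical, t], dim=-1)
        return x_canonical + self.net(inp)

field = DeformationField()
x = torch.rand(100, 3)           # canonical surface samples
t = torch.full((100, 1), 0.25)   # normalized frame time
x_deformed = field(x, t)         # positions in the current frame
```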
arXiv Detail & Related papers (2023-11-14T13:39:01Z)
- EndoSurf: Neural Surface Reconstruction of Deformable Tissues with Stereo Endoscope Videos [72.59573904930419]
Reconstructing soft tissues from stereo endoscope videos is an essential prerequisite for many medical applications.
Previous methods struggle to produce high-quality geometry and appearance due to their inadequate representations of 3D scenes.
We propose a novel neural-field-based method, called EndoSurf, which effectively learns to represent a deforming surface from an RGBD sequence.
arXiv Detail & Related papers (2023-07-21T02:28:20Z)
- Dynamic Point Fields [30.029872787758705]
We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
arXiv Detail & Related papers (2023-04-05T17:52:37Z)
- Neural Volumetric Mesh Generator [40.224769507878904]
We propose Neural Volumetric Mesh Generator (NVMG), which can generate novel and high-quality volumetric meshes.
Our pipeline can generate high-quality artifact-free volumetric and surface meshes from random noise or a reference image without any post-processing.
arXiv Detail & Related papers (2022-10-06T18:46:51Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
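For context, the conventional baseline such differentiable meshing methods improve on is marching cubes over a sampled implicit field, which yields irregular triangles. A minimal sketch with scikit-image follows; the sphere SDF is just a stand-in for a neural SDF.

```python
import numpy as np
from skimage import measure

# Sample a signed distance field of a unit sphere on a 64^3 grid.
grid = np.linspace(-1.5, 1.5, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing='ij')
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0

# Extract the zero level set; the irregular tessellation produced here is
# the quality issue the entry above targets.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)
```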
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
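For reference, the pose-driven deformation field in the entry above builds on linear blend skinning (LBS), whose standard form is:

```latex
% Standard linear blend skinning: canonical vertex v_i is deformed by a
% convex combination of per-bone rigid transforms G_k(\theta), with v_i in
% homogeneous coordinates so G_k carries both rotation and translation.
v_i' = \sum_{k=1}^{K} w_{i,k}\, G_k(\theta)\, v_i ,
\qquad w_{i,k} \ge 0 ,\quad \sum_{k=1}^{K} w_{i,k} = 1 .
```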
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
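The disentanglement idea in this entry can be pictured as an implicit function conditioned on separate identity and pose codes. The latent sizes and MLP layout below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of an identity/pose-disentangled implicit function.
import torch
import torch.nn as nn

class DisentangledSDF(nn.Module):
    def __init__(self, id_dim=64, pose_dim=64, hidden=256):
        super().__init__()
        # Query point (3) + identity code + pose code -> signed distance.
        self.net = nn.Sequential(
            nn.Linear(3 + id_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z_id, z_pose):
        # Holding z_id fixed while varying z_pose yields pose-dependent
        # deformations of the same identity, and vice versa.
        return self.net(torch.cat([x, z_id, z_pose], dim=-1))

sdf = DisentangledSDF()
x = torch.rand(10, 3)
z_id, z_pose = torch.zeros(10, 64), torch.zeros(10, 64)
print(sdf(x, z_id, z_pose).shape)  # torch.Size([10, 1])
```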
arXiv Detail & Related papers (2021-09-30T17:43:06Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)