Thin-Shell-SfT: Fine-Grained Monocular Non-rigid 3D Surface Tracking with Neural Deformation Fields
- URL: http://arxiv.org/abs/2503.19976v1
- Date: Tue, 25 Mar 2025 18:00:46 GMT
- Title: Thin-Shell-SfT: Fine-Grained Monocular Non-rigid 3D Surface Tracking with Neural Deformation Fields
- Authors: Navami Kairanda, Marc Habermann, Shanthika Naik, Christian Theobalt, Vladislav Golyanik
- Abstract summary: 3D reconstruction of deformable surfaces from RGB videos is a challenging problem. Existing methods use deformation models with statistical, neural, or physical priors. We propose Thin-Shell-SfT, a new method for non-rigid 3D surface tracking.
- Score: 66.1612475655465
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D reconstruction of highly deformable surfaces (e.g. cloths) from monocular RGB videos is a challenging problem, and no solution provides a consistent and accurate recovery of fine-grained surface details. To account for the ill-posed nature of the setting, existing methods use deformation models with statistical, neural, or physical priors. They also predominantly rely on non-adaptive discrete surface representations (e.g. polygonal meshes), perform frame-by-frame optimisation leading to error propagation, and suffer from poor gradients of the mesh-based differentiable renderers. Consequently, fine surface details such as cloth wrinkles are often not recovered with the desired accuracy. In response to these limitations, we propose Thin-Shell-SfT, a new method for non-rigid 3D tracking that represents a surface as an implicit and continuous spatiotemporal neural field. We incorporate a continuous thin-shell physics prior based on the Kirchhoff-Love model for spatial regularisation, which starkly contrasts with the discretised alternatives of earlier works. Lastly, we leverage 3D Gaussian splatting to differentiably render the surface into image space and optimise the deformations based on analysis-by-synthesis principles. Our Thin-Shell-SfT outperforms prior works qualitatively and quantitatively thanks to our continuous surface formulation in conjunction with a specially tailored simulation prior and surface-induced 3D Gaussians. See our project page at https://4dqv.mpiinf.mpg.de/ThinShellSfT.
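As a loose illustration (not the authors' architecture), a continuous spatiotemporal neural field of this kind can be sketched as a small coordinate MLP that maps a surface parameter and a time stamp (u, v, t) to a 3D point; the layer sizes, positional encoding, and random (untrained) weights below are all assumptions for the sketch.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Map low-dimensional coordinates to sin/cos features so an
    MLP can represent high-frequency detail such as wrinkles."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = x[..., None] * freqs                     # (..., D, F)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*x.shape[:-1], -1)           # (..., D * 2F)

class SpatiotemporalField:
    """Toy continuous field f(u, v, t) -> (x, y, z); weights are random,
    i.e. this shows only the shape of the mapping, not a trained model."""
    def __init__(self, num_freqs=4, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 * 2 * num_freqs                    # (u, v, t), sin + cos per freq
        self.num_freqs = num_freqs
        self.w1 = rng.normal(scale=0.1, size=(in_dim, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, 3))

    def __call__(self, uvt):
        h = np.tanh(positional_encoding(uvt, self.num_freqs) @ self.w1)
        return h @ self.w2                            # 3D surface point

field = SpatiotemporalField()
# The same surface point (u, v) queried at two times t: the field is
# continuous in time, so no per-frame mesh is ever stored.
pts = field(np.array([[0.5, 0.5, 0.0], [0.5, 0.5, 1.0]]))
print(pts.shape)  # (2, 3)
```

Because the field is queried at arbitrary (u, v, t), spatial and temporal resolution are not fixed in advance, which is the contrast the abstract draws with discrete mesh representations.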
Related papers
- GeoSplatting: Towards Geometry Guided Gaussian Splatting for Physically-based Inverse Rendering [69.67264955234494]
GeoSplatting is a novel hybrid representation that augments 3DGS with explicit geometric guidance and differentiable PBR equations.
Comprehensive evaluations across diverse datasets demonstrate the superiority of GeoSplatting.
arXiv Detail & Related papers (2024-10-31T17:57:07Z) - DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation [10.250715657201363]
We introduce DreamMesh4D, a novel framework combining mesh representation with a geometric skinning technique to generate high-quality 4D objects from a monocular video.
Our method is compatible with modern graphic pipelines, showcasing its potential in the 3D gaming and film industry.
arXiv Detail & Related papers (2024-10-09T10:41:08Z) - ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics.
We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal.
Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
arXiv Detail & Related papers (2024-08-22T17:59:01Z) - Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully automated without the need for pre-cutting and can deal with highly complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z) - Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
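For intuition only, the front-to-back alpha compositing at the core of Gaussian-splatting-style volume rendering (which GOF derives its opacity field from) can be sketched with per-sample opacities along a ray; the colours and opacities below are made-up values, not data from the paper.

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing along one ray:
    C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j)."""
    transmittance = 1.0
    out = np.zeros(3)
    for c, a in zip(colors, alphas):
        out += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= (1.0 - a)   # light remaining after this sample
    return out

# Two samples along one ray: a semi-opaque red splat in front of a green one.
color = composite(colors=[(1, 0, 0), (0, 1, 0)], alphas=[0.6, 0.5])
# red contributes 0.6; green is attenuated to (1 - 0.6) * 0.5 = 0.2
print(color)
```

Each Gaussian's projected footprint supplies the per-sample alpha; accumulating transmittance this way is what lets opacity be interpreted as a level set for surface extraction.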
arXiv Detail & Related papers (2024-04-16T17:57:19Z) - NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance [48.72360034876566]
We propose a neural implicit surface reconstruction pipeline with guidance from 3D Gaussian Splatting to recover highly detailed surfaces. The advantage of 3D Gaussian Splatting is that it can generate dense point clouds with detailed structure. We introduce a scale regularizer to pull the centers close to the surface by enforcing the 3D Gaussians to be extremely thin.
arXiv Detail & Related papers (2023-12-01T07:04:47Z) - DynamicSurf: Dynamic Neural RGB-D Surface Reconstruction with an Optimizable Feature Grid [7.702806654565181]
DynamicSurf is a model-free neural implicit surface reconstruction method for high-fidelity 3D modelling of non-rigid surfaces from monocular RGB-D video.
We learn a neural deformation field that maps a canonical representation of the surface geometry to the current frame.
We demonstrate that it can optimise sequences of varying frames with a $6\times$ speedup over pure MLP-based approaches.
arXiv Detail & Related papers (2023-11-14T13:39:01Z) - NSF: Neural Surface Fields for Human Modeling from Monocular Depth [46.928496022657185]
It is challenging to model dynamic and fine-grained clothing deformations from sparse data.
Existing methods for modeling 3D humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology.
We propose a novel method Neural Surface Fields for modeling 3D clothed humans from monocular depth.
arXiv Detail & Related papers (2023-08-28T19:08:17Z) - Neural Volumetric Mesh Generator [40.224769507878904]
We propose Neural Volumetric Mesh Generator (NVMG), which can generate novel and high-quality volumetric meshes.
Our pipeline can generate high-quality artifact-free volumetric and surface meshes from random noise or a reference image without any post-processing.
arXiv Detail & Related papers (2022-10-06T18:46:51Z) - φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
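Schematically (an illustrative toy, not φ-SfT's cloth simulator), explaining observations through physics means fitting deformation parameters by minimising a data term plus a physical energy. The 1D "cloth" below, its spring rest length, and the weight `lam` are all invented for the sketch.

```python
import numpy as np

# Toy 1D "cloth": 5 control points observed at target positions (standing in
# for image evidence), regularised by a spring stretching energy that plays
# the role of the physics prior.
target = np.array([0.0, 0.3, 0.35, 0.6, 1.0])
rest_len = 0.25          # rest length of each segment
lam = 0.1                # weight of the physics prior

def loss(x):
    data = np.sum((x - target) ** 2)            # analysis-by-synthesis data term
    seg = np.diff(x)
    physics = np.sum((seg - rest_len) ** 2)     # spring / stretching energy
    return data + lam * physics

def grad(x):
    g = 2 * (x - target)
    e = 2 * lam * (np.diff(x) - rest_len)
    g[:-1] -= e          # each segment pulls on its left endpoint...
    g[1:] += e           # ...and pushes on its right endpoint
    return g

x = np.zeros(5)
for _ in range(500):     # plain gradient descent
    x -= 0.1 * grad(x)
```

The fitted points land near the observations while the spring term discourages physically implausible stretching; full SfT methods replace the data term with a differentiable renderer and the springs with a proper cloth model.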
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.