Explicit Neural Surfaces: Learning Continuous Geometry With Deformation Fields
- URL: http://arxiv.org/abs/2306.02956v3
- Date: Mon, 11 Dec 2023 06:05:34 GMT
- Title: Explicit Neural Surfaces: Learning Continuous Geometry With Deformation Fields
- Authors: Thomas Walker, Octave Mariotti, Amir Vaxman, Hakan Bilen
- Abstract summary: We introduce Explicit Neural Surfaces (ENS), an efficient smooth surface representation that encodes topology with a deformation field from a known base domain.
Compared to implicit surfaces, ENS trains faster and has several orders of magnitude faster inference times.
- Score: 33.38609930708073
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Explicit Neural Surfaces (ENS), an efficient smooth surface
representation that directly encodes topology with a deformation field from a
known base domain. We apply this representation to reconstruct explicit
surfaces from multiple views, where we use a series of neural deformation
fields to progressively transform the base domain into a target shape. By using
meshes as discrete surface proxies, we train the deformation fields through
efficient differentiable rasterization. Using a fixed base domain allows us to
have Laplace-Beltrami eigenfunctions as an intrinsic positional encoding
alongside standard extrinsic Fourier features, with which our approach can
capture fine surface details. Compared to implicit surfaces, ENS trains faster
and has several orders of magnitude faster inference times. The explicit nature
of our approach also allows higher-quality mesh extraction whilst maintaining
competitive surface reconstruction performance and real-time capabilities.
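
The pipeline described above (a fixed base domain, intrinsic Laplace-Beltrami eigenfunction encodings alongside extrinsic Fourier features, and a neural deformation field that offsets the base vertices) can be pictured with a short sketch. The snippet below is a minimal illustration under assumed layer widths and encoding sizes, with random stand-ins for the base mesh and its precomputed eigenfunctions so it runs on its own; it is not the authors' implementation, and the actual method composes a series of such fields and trains them through differentiable rasterization.

```python
# Minimal sketch of an ENS-style deformation field; illustrative, not the authors' code.
# The base vertices and Laplace-Beltrami eigenfunctions would come from a fixed base
# mesh; random stand-ins are used here so the snippet is self-contained.
import torch
import torch.nn as nn


def fourier_features(x, num_freqs=6):
    # Extrinsic encoding: sin/cos of the 3D positions at octave-spaced frequencies.
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype) * torch.pi
    ang = x[..., None] * freqs                                    # (V, 3, F)
    return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)  # (V, 6*F)


class DeformationField(nn.Module):
    def __init__(self, num_eigenfunctions=32, num_freqs=6, hidden=256):
        super().__init__()
        in_dim = num_eigenfunctions + 6 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),                 # per-vertex 3D offset
        )

    def forward(self, base_verts, lb_eigenfunctions):
        # Concatenate intrinsic (eigenfunction) and extrinsic (Fourier) encodings.
        feats = torch.cat([lb_eigenfunctions, fourier_features(base_verts)], dim=-1)
        return base_verts + self.mlp(feats)       # deformed vertex positions


base_verts = torch.nn.functional.normalize(torch.randn(2562, 3), dim=-1)  # sphere-like stand-in
lb_phi = torch.randn(2562, 32)   # stand-in for precomputed eigenfunctions of the base domain
deformed = DeformationField()(base_verts, lb_phi)                          # (2562, 3)
```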
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
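
One loose way to picture a continuous connectivity space is to attach a latent embedding to every vertex and decode discrete edges from pairwise compatibility scores. The toy sketch below is a heavy simplification for intuition only; the embedding dimension, score, and threshold are made up and do not reflect the paper's actual construction.

```python
# Toy illustration only: decode discrete connectivity from continuous per-vertex
# embeddings by thresholding pairwise scores. This is a simplification, not the
# construction used by SpaceMesh.
import torch


def edges_from_embeddings(z, threshold=0.0):
    # z: (V, D) continuous per-vertex connectivity embeddings.
    scores = z @ z.T                              # pairwise compatibility
    scores.fill_diagonal_(float("-inf"))          # no self-loops
    i, j = torch.where(scores > threshold)
    keep = i < j                                  # keep each undirected edge once
    return torch.stack([i[keep], j[keep]], dim=1)  # (E, 2) vertex index pairs


z = torch.randn(8, 4)                             # stand-in embeddings for 8 vertices
print(edges_from_embeddings(z))
```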
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics.
We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal.
Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
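
The quantity such a deflection field represents, the angular deviation between the scene normal and a prior normal, is easy to write down directly. The snippet below computes only that angle and is not the ND-SDF pipeline; all names are chosen for illustration.

```python
# Angular deviation between a scene normal and a prior normal, the quantity an
# ND-SDF-style deflection field is trained to represent. Illustrative only.
import torch
import torch.nn.functional as F


def deflection_angle(scene_normal, prior_normal):
    # Both inputs: (..., 3). Returns the per-sample angle in radians.
    n1 = F.normalize(scene_normal, dim=-1)
    n2 = F.normalize(prior_normal, dim=-1)
    cos = (n1 * n2).sum(dim=-1).clamp(-1.0, 1.0)
    return torch.acos(cos)


scene = torch.randn(4, 3)
prior = torch.randn(4, 3)
print(deflection_angle(scene, prior))   # small angles where the scene agrees with the prior
```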
arXiv Detail & Related papers (2024-08-22T17:59:01Z)
- Implicit-ARAP: Efficient Handle-Guided Deformation of High-Resolution Meshes and Neural Fields via Local Patch Meshing [18.353444950896527]
We present the local patch mesh representation for neural signed distance fields.
This technique allows us to discretize local regions of the level sets of an input SDF by projecting and deforming flat patch meshes onto the level set surface.
We introduce two distinct pipelines, which make use of 3D neural fields to compute As-Rigid-As-Possible deformations of both high-resolution meshes and neural fields.
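
The local patch-mesh idea, pulling the vertices of a small flat patch onto the zero level set of an SDF, can be sketched with a few Newton-style steps along the SDF gradient. The analytic sphere SDF, patch construction, and fixed step count below are assumptions for illustration, not the paper's setup.

```python
# Project the vertices of a flat patch onto the zero level set of an SDF by stepping
# along the gradient. A rough sketch of building a local patch mesh; the sphere SDF
# and fixed number of steps are illustrative choices.
import torch


def sphere_sdf(x, radius=1.0):
    return x.norm(dim=-1) - radius


def project_to_level_set(points, sdf, steps=10):
    p = points.clone()
    for _ in range(steps):
        p = p.detach().requires_grad_(True)
        d = sdf(p)
        (grad,) = torch.autograd.grad(d.sum(), p)
        # Newton-style step toward the zero level set: p <- p - d * grad / |grad|^2
        p = p - (d / grad.pow(2).sum(-1).clamp_min(1e-8)).unsqueeze(-1) * grad
    return p.detach()


# A flat 5x5 patch on the plane z = 0.5, pulled onto the unit sphere's surface.
u = torch.linspace(-0.3, 0.3, 5)
grid = torch.stack(torch.meshgrid(u, u, indexing="ij"), dim=-1).reshape(-1, 2)
patch = torch.cat([grid, torch.full((25, 1), 0.5)], dim=-1)
on_surface = project_to_level_set(patch, sphere_sdf)
print(sphere_sdf(on_surface).abs().max())   # close to zero after projection
```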
arXiv Detail & Related papers (2024-05-21T16:04:32Z)
- DynoSurf: Neural Deformation-based Temporally Consistent Dynamic Surface Reconstruction [93.18586302123633]
This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without correspondence.
We propose DynoSurf, an unsupervised learning framework integrating a template surface representation with a learnable deformation field.
Experimental results demonstrate that DynoSurf significantly outperforms current state-of-the-art approaches.
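
The template-plus-deformation idea can be pictured as one shared template whose vertices are displaced by a time-conditioned network, so correspondences across frames come for free. The tiny MLP below is an illustrative stand-in, not the paper's architecture.

```python
# Shared template + time-conditioned deformation: every frame is produced by deforming
# the same template vertices, which keeps vertex correspondence across the sequence.
# Illustrative stand-in only.
import torch
import torch.nn as nn


class TimeConditionedDeformation(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, template_verts, t):
        # template_verts: (V, 3); t: scalar frame time in [0, 1].
        t_col = torch.full_like(template_verts[:, :1], float(t))
        return template_verts + self.mlp(torch.cat([template_verts, t_col], dim=-1))


template = torch.randn(1000, 3)             # stand-in for a learned template surface
deform = TimeConditionedDeformation()
frames = [deform(template, t) for t in torch.linspace(0, 1, 5)]  # temporally indexed surfaces
```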
arXiv Detail & Related papers (2024-03-18T08:58:48Z)
- Surface Normal Estimation with Transformers [11.198936434401382]
We propose a Transformer to accurately predict normals from point clouds with noise and density variations.
Our method achieves state-of-the-art performance on both the synthetic shape dataset PCPNet and the real-world indoor scene dataset SceneNN.
arXiv Detail & Related papers (2024-01-11T08:52:13Z)
- Unsupervised Multimodal Surface Registration with Geometric Deep Learning [3.3403308469369577]
GeoMorph is a novel geometric deep-learning framework designed for image registration of cortical surfaces.
We show that GeoMorph surpasses existing deep-learning methods by achieving improved alignment with smoother deformations.
Such versatility and robustness suggest strong potential for various neuroscience applications.
arXiv Detail & Related papers (2023-11-21T22:05:00Z)
- HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper Surfaces [54.77683371400133]
We propose a novel normal estimation method called HSurf-Net, which can accurately predict normals from point clouds with noise and density variations.
Experimental results show that our HSurf-Net achieves state-of-the-art performance on the synthetic shape dataset.
arXiv Detail & Related papers (2022-10-13T16:39:53Z)
- Minimal Neural Atlas: Parameterizing Complex Surfaces with Minimal Charts and Distortion [71.52576837870166]
We present Minimal Neural Atlas, a novel atlas-based explicit neural surface representation.
At its core is a fully learnable parametric domain, given by an implicit probabilistic occupancy field defined on an open square of the parametric space.
Our reconstructions are more accurate in terms of the overall geometry, due to the separation of concerns on topology and geometry.
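
The learnable parametric domain can be pictured as an occupancy field on the open unit square that decides which UV samples belong to the chart, with a separate map sending the kept samples to 3D. Both tiny networks below are untrained, illustrative placeholders rather than the paper's architecture.

```python
# A chart with a learnable parametric domain: an occupancy field on the open unit
# square masks the UV samples, and a separate map sends the occupied ones to 3D.
# Both networks are untrained placeholders for illustration.
import torch
import torch.nn as nn

occupancy = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # UV -> logit
chart_map = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))   # UV -> xyz

uv = torch.rand(4096, 2)                              # samples in the open square (0, 1)^2
inside = torch.sigmoid(occupancy(uv)).squeeze(-1) > 0.5
surface_points = chart_map(uv[inside])                # only occupied UVs map onto the surface
print(inside.float().mean(), surface_points.shape)
```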
arXiv Detail & Related papers (2022-07-29T16:55:06Z)
- Sign-Agnostic CONet: Learning Implicit Surface Reconstructions by Sign-Agnostic Optimization of Convolutional Occupancy Networks [39.65056638604885]
We learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks.
We show this goal can be achieved by a simple yet effective design.
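
Sign-agnostic optimization is usually illustrated by the sign-agnostic regression loss, which compares the magnitude of the predicted field against an unsigned distance so that no inside/outside labels are needed. The snippet shows that generic form; it is not necessarily this paper's exact objective.

```python
# Generic sign-agnostic loss: compare |f(x)| with the unsigned distance to the input
# points, so training needs no inside/outside supervision. This is the standard
# sign-agnostic formulation, not necessarily this paper's exact objective.
import torch


def sign_agnostic_loss(pred_field, unsigned_distance):
    # pred_field: raw signed predictions f(x); unsigned_distance: distance to the point cloud.
    return (pred_field.abs() - unsigned_distance).abs().mean()


pred = torch.randn(1024)   # stand-in network outputs at query points
dist = torch.rand(1024)    # stand-in unsigned distances to the input points
print(sign_agnostic_loss(pred, dist))
```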
arXiv Detail & Related papers (2021-05-08T03:35:32Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
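
The weight-sharing idea can be pictured as a subdivision step in which one small shared network displaces every edge midpoint from local patch information, so the same weights apply regardless of the input mesh or its genus. The choice of patch features (just the two edge endpoints) and the network size below are simplifications for illustration.

```python
# One subdivision step where a single shared MLP displaces every edge midpoint; the
# same weights are reused for every local patch of any input mesh. The patch features
# here (the two edge endpoints) are a simplification.
import torch
import torch.nn as nn


class SharedPatchNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, verts, edges):
        # verts: (V, 3); edges: (E, 2) vertex indices. One new vertex per edge.
        a, b = verts[edges[:, 0]], verts[edges[:, 1]]
        midpoint = 0.5 * (a + b)
        return midpoint + self.mlp(torch.cat([a, b], dim=-1))   # learned displacement


verts = torch.randn(4, 3)                       # a toy tetrahedron-like patch
edges = torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
new_verts = SharedPatchNet()(verts, edges)      # (6, 3) subdivided vertex positions
```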
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.