Shrinking: Reconstruction of Parameterized Surfaces from Signed Distance Fields
- URL: http://arxiv.org/abs/2410.03123v1
- Date: Fri, 4 Oct 2024 03:39:15 GMT
- Title: Shrinking: Reconstruction of Parameterized Surfaces from Signed Distance Fields
- Authors: Haotian Yin, Przemyslaw Musialski
- Abstract summary: We propose a novel method for reconstructing explicit parameterized surfaces from Signed Distance Fields (SDFs).
Our approach iteratively contracts a parameterized initial sphere to conform to the target SDF shape, preserving differentiability and surface parameterization throughout.
This enables downstream applications such as texture mapping, geometry processing, animation, and finite element analysis.
- Score: 2.1638817206926855
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose a novel method for reconstructing explicit parameterized surfaces from Signed Distance Fields (SDFs), a widely used implicit neural representation (INR) for 3D surfaces. While traditional reconstruction methods like Marching Cubes extract discrete meshes that lose the continuous and differentiable properties of INRs, our approach iteratively contracts a parameterized initial sphere to conform to the target SDF shape, preserving differentiability and surface parameterization throughout. This enables downstream applications such as texture mapping, geometry processing, animation, and finite element analysis. Evaluated on the typical geometric shapes and parts of the ABC dataset, our method achieves competitive reconstruction quality, maintaining smoothness and differentiability crucial for advanced computer graphics and geometric deep learning applications.
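The abstract only outlines the contraction idea at a high level, so below is a minimal, hypothetical Python sketch of what one "shrinking" iteration could look like: vertices of a UV-parameterized sphere are pulled toward the SDF zero level set along a finite-difference gradient. The `sdf` callable, the step size, and the stopping rule are illustrative assumptions, not the authors' implementation, and the sketch omits the differentiability and regularization machinery the actual method relies on.

```python
# Hypothetical sketch of the "shrinking" idea described in the abstract:
# vertices sampled from a UV-parameterized sphere are iteratively moved
# toward the SDF zero level set along the (numerical) SDF gradient.
import numpy as np

def sample_sphere(n_u=64, n_v=32, radius=1.5):
    """Vertices of a UV-parameterized sphere; the (u, v) grid stays fixed."""
    u = np.linspace(0.0, 2.0 * np.pi, n_u)
    v = np.linspace(1e-3, np.pi - 1e-3, n_v)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    xyz = np.stack([radius * np.sin(vv) * np.cos(uu),
                    radius * np.sin(vv) * np.sin(uu),
                    radius * np.cos(vv)], axis=-1)
    return xyz.reshape(-1, 3)

def sdf_gradient(sdf, p, eps=1e-4):
    """Central-difference gradient of the SDF at points p of shape (N, 3)."""
    grad = np.zeros_like(p)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        grad[:, i] = (sdf(p + d) - sdf(p - d)) / (2.0 * eps)
    return grad

def shrink(sdf, steps=200, step_size=0.1):
    """Iteratively contract the sphere toward the SDF zero level set."""
    p = sample_sphere()
    for _ in range(steps):
        d = sdf(p)                           # signed distance at each vertex
        g = sdf_gradient(sdf, p)
        g /= np.linalg.norm(g, axis=1, keepdims=True) + 1e-12
        p -= step_size * d[:, None] * g      # step along the gradient by ~distance
    return p  # vertices remain indexed by the original (u, v) parameterization

# Example target: the SDF of a unit sphere (illustrative only)
unit_sphere_sdf = lambda p: np.linalg.norm(p, axis=-1) - 1.0
surface_points = shrink(unit_sphere_sdf)
```

Because the vertices stay tied to the initial (u, v) grid, the contracted surface keeps an explicit parameterization, which is what enables the downstream uses the abstract mentions (texture mapping, animation, finite element analysis), in contrast to a discrete Marching Cubes mesh.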
Related papers
- Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z)
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics.
We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal.
Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
arXiv Detail & Related papers (2024-08-22T17:59:01Z)
- Neural Vector Fields: Generalizing Distance Vector Fields by Codebooks and Zero-Curl Regularization [73.3605319281966]
We propose a novel 3D representation, Neural Vector Fields (NVF), which adopts the explicit learning process to manipulate meshes and implicit unsigned distance function (UDF) representation to break the barriers in resolution and topology.
We evaluate NVF on four surface reconstruction scenarios: watertight vs. non-watertight shapes, category-agnostic vs. category-unseen reconstruction, category-specific reconstruction, and cross-domain reconstruction.
arXiv Detail & Related papers (2023-09-04T10:42:56Z)
- Hybrid-CSR: Coupling Explicit and Implicit Shape Representation for Cortical Surface Reconstruction [28.31844964164312]
Hybrid-CSR is a geometric deep-learning model that combines explicit and implicit shape representations for cortical surface reconstruction.
Our method unifies explicit (oriented point clouds) and implicit (indicator function) cortical surface reconstruction.
arXiv Detail & Related papers (2023-07-23T11:32:14Z)
- HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces [6.382138631957651]
We present High-Resolution NeuS, a novel neural implicit surface reconstruction method.
HR-NeuS recovers high-frequency surface geometry while maintaining large-scale reconstruction accuracy.
We demonstrate through experiments on DTU and BlendedMVS datasets that our approach produces 3D geometries that are qualitatively more detailed and quantitatively of similar accuracy compared to previous approaches.
arXiv Detail & Related papers (2023-02-14T02:25:16Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
We represent surfaces as an Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- Differentiable Rendering of Neural SDFs through Reparameterization [32.47993049026182]
We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDFs.
Our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for discontinuities.
Our differentiable renderer can be used to optimize neural shapes from multi-view images and produces comparable 3D reconstructions.
arXiv Detail & Related papers (2022-06-10T20:30:26Z)
- Learning Signed Distance Field for Multi-view Surface Reconstruction [24.090786783370195]
We introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency.
We apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance respectively.
Our method is able to improve the robustness of geometry estimation and support reconstruction of complex scene topologies.
arXiv Detail & Related papers (2021-08-23T06:23:50Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
- Deep Manifold Prior [37.725563645899584]
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent.
We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks.
arXiv Detail & Related papers (2020-04-08T20:47:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.