Geometric implicit neural representations for signed distance functions
- URL: http://arxiv.org/abs/2511.07206v1
- Date: Mon, 10 Nov 2025 15:33:02 GMT
- Title: Geometric implicit neural representations for signed distance functions
- Authors: Luiz Schirmer, Tiago Novello, Vinícius da Silva, Guilherme Schardong, Daniel Perazzo, Hélio Lopes, Nuno Gonçalves, Luiz Velho
- Abstract summary: Implicit neural representations (INRs) have emerged as a promising framework for representing signals in low-dimensional spaces. This survey reviews the existing literature on the specialized INR problem of approximating signed distance functions (SDFs). We refer to neural SDFs that incorporate differential geometry tools, such as normals and curvatures, in their loss functions as geometric INRs.
- Score: 9.015149670090144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural representations (INRs) have emerged as a promising framework for representing signals in low-dimensional spaces. This survey reviews the existing literature on the specialized INR problem of approximating signed distance functions (SDFs) for surface scenes, using either oriented point clouds or a set of posed images. We refer to neural SDFs that incorporate differential geometry tools, such as normals and curvatures, in their loss functions as geometric INRs. The key idea behind this 3D reconstruction approach is to include additional regularization terms in the loss function, ensuring that the INR satisfies certain global properties that the function should hold, such as having unit gradient in the case of SDFs. We explore key methodological components, including the definition of INRs, the construction of geometric loss functions, and sampling schemes from a differential geometry perspective. Our review highlights the significant advancements enabled by geometric INRs in surface reconstruction from oriented point clouds and posed images.
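The unit-gradient property mentioned in the abstract can be illustrated with a small numerical sketch. A true SDF satisfies the eikonal equation |∇f(x)| = 1 almost everywhere, which geometric INRs enforce through a regularization term of the form E[(|∇f| − 1)²]. The sphere SDF and finite-difference gradient below are illustrative assumptions, not code from the surveyed papers:

```python
import numpy as np

def sdf_sphere(x, radius=1.0):
    """Analytic SDF of a sphere centered at the origin (illustrative example)."""
    return np.linalg.norm(x, axis=-1) - radius

def numerical_gradient(f, x, eps=1e-5):
    """Central finite differences, one coordinate at a time."""
    grad = np.zeros_like(x)
    for i in range(x.shape[-1]):
        dx = np.zeros_like(x)
        dx[..., i] = eps
        grad[..., i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return grad

def eikonal_loss(f, points):
    """Mean squared deviation of the gradient norm from 1 (the eikonal term)."""
    grads = numerical_gradient(f, points)
    norms = np.linalg.norm(grads, axis=-1)
    return np.mean((norms - 1.0) ** 2)

# Sample points in a box around the surface; an exact SDF yields a near-zero loss.
rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(1024, 3))
loss = eikonal_loss(sdf_sphere, pts)
print(f"{loss:.2e}")  # near zero: an exact SDF satisfies the eikonal equation
```

In training, the same term would be evaluated on the network's output (typically with automatic differentiation rather than finite differences) and added to the data-fitting loss, penalizing deviations from a valid distance function.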
Related papers
- Geometric Neural Process Fields [58.77241763774756]
Geometric Neural Process Fields (G-NPF) is a probabilistic framework for neural radiance fields that explicitly captures uncertainty. Building on these bases, we design a hierarchical latent variable model, allowing G-NPF to integrate structural information across multiple spatial levels. Experiments on novel-view synthesis for 3D scenes, as well as 2D image and 1D signal regression, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2025-02-04T14:17:18Z)
- Optimizing 3D Geometry Reconstruction from Implicit Neural Representations [2.3940819037450987]
Implicit neural representations have emerged as a powerful tool in learning 3D geometry.
We present a novel approach that both reduces computational expenses and enhances the capture of fine details.
arXiv Detail & Related papers (2024-10-16T16:36:23Z)
- Shrinking: Reconstruction of Parameterized Surfaces from Signed Distance Fields [2.1638817206926855]
We propose a novel method for reconstructing explicit parameterized surfaces from Signed Distance Fields (SDFs).
Our approach iteratively contracts a parameterized initial sphere to conform to the target SDF shape, preserving differentiability and surface parameterization throughout.
This enables downstream applications such as texture mapping, geometry processing, animation, and finite element analysis.
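The contraction idea behind this entry can be sketched numerically. The step rule below (move each sample along the negative SDF gradient, scaled by the signed distance) and the sphere target are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def sdf_target(x, radius=0.5):
    """Assumed target shape: a sphere of radius 0.5; any SDF would do."""
    return np.linalg.norm(x, axis=-1) - radius

def sdf_grad(x):
    """Analytic gradient of the sphere SDF: the unit radial direction."""
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    return x / n

def shrink_step(points):
    """x <- x - f(x) * grad f(x): moves points toward the zero level set."""
    f = sdf_target(points)[..., None]
    return points - f * sdf_grad(points)

# Sample points on an initial sphere of radius 1 (a ring, for brevity).
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
init = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=-1)
out = shrink_step(init)
print(np.linalg.norm(out, axis=-1))  # all ~0.5: points landed on the target surface
```

Because each point keeps its parameter (here, theta) while being displaced, the surface parameterization survives the contraction, which is what enables the downstream applications listed above.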
arXiv Detail & Related papers (2024-10-04T03:39:15Z)
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction. Our method substantially improves on SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- InfoNorm: Mutual Information Shaping of Normals for Sparse-View Reconstruction [15.900375207144759]
3D surface reconstruction from multi-view images is essential for scene understanding and interaction.
Recent implicit surface representations, such as Neural Radiance Fields (NeRFs) and signed distance functions (SDFs) employ various geometric priors to resolve the lack of observed information.
We propose regularizing the geometric modeling by explicitly encouraging the mutual information among surface normals of highly correlated scene points.
arXiv Detail & Related papers (2024-07-17T15:46:25Z)
- GeoGaussian: Geometry-aware Gaussian Splatting for Scene Rendering [83.19049705653072]
During the Gaussian Splatting optimization process, the scene's geometry can gradually deteriorate if its structure is not deliberately preserved.
We propose a novel approach called GeoGaussian to mitigate this issue.
Our proposed pipeline achieves state-of-the-art performance in novel view synthesis and geometric reconstruction.
arXiv Detail & Related papers (2024-03-17T20:06:41Z)
- ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- Phase Transitions, Distance Functions, and Implicit Neural Representations [26.633795221150475]
Implicit Neural Representations (INRs) serve numerous downstream applications in geometric deep learning and 3D vision.
We suggest a loss for training INRs that learns a density function that converges to a proper occupancy function, while its log transform converges to a distance function.
arXiv Detail & Related papers (2021-06-14T18:13:45Z)
- High-Order Residual Network for Light Field Super-Resolution [39.93400777363467]
Plenoptic cameras usually sacrifice the spatial resolution of their sub-aperture images (SAIs) to acquire information from different viewpoints.
We propose a novel high-order residual network to learn the geometric features hierarchically from the light field for reconstruction.
Our approach enables high-quality reconstruction even in challenging regions and outperforms state-of-the-art single image or LF reconstruction methods with both quantitative measurements and visual evaluation.
arXiv Detail & Related papers (2020-03-29T18:06:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.