Zero-Level-Set Encoder for Neural Distance Fields
- URL: http://arxiv.org/abs/2310.06644v2
- Date: Mon, 5 Feb 2024 15:02:32 GMT
- Title: Zero-Level-Set Encoder for Neural Distance Fields
- Authors: Stefan Rhys Jeske and Jonathan Klein and Dominik L. Michels and Jan Bender
- Abstract summary: We present a novel encoder-decoder neural network for embedding 3D shapes in a single forward pass.
The network is trained to solve the Eikonal equation and only requires knowledge of the zero-level set for training and inference.
- Score: 10.269224726391807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural shape representation generally refers to representing 3D geometry
using neural networks, e.g., to compute a signed distance or occupancy value at
a specific spatial position. In this paper, we present a novel encoder-decoder
neural network for embedding 3D shapes in a single forward pass. Our
architecture is based on a multi-scale hybrid system incorporating graph-based
and voxel-based components, as well as a continuously differentiable decoder.
Furthermore, the network is trained to solve the Eikonal equation and only
requires knowledge of the zero-level set for training and inference. This means
that in contrast to most previous work, our network is able to output valid
signed distance fields without explicit prior knowledge of non-zero distance
values or shape occupancy. We further propose a modification of the loss
function for cases in which surface normals are not well defined, e.g., in the
context of non-watertight surfaces and non-manifold geometry. Overall, this can
help reduce the computational overhead of training and evaluating neural
distance fields and enable the application to difficult shapes. We
finally demonstrate the efficacy, generalizability and scalability of our
method on datasets consisting of deforming shapes, both based on simulated data
and raw 3D scans. We further show single-class and multi-class encoding, on
both fixed and variable vertex-count inputs, showcasing a wide range of
possible applications.
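The Eikonal-based training described above can be illustrated with a minimal, hypothetical sketch: a valid signed distance field satisfies |∇f| = 1 everywhere while vanishing on the zero-level set, so a loss needs only surface samples and a gradient-norm penalty. The discretized 2D NumPy example below is an illustration of these two loss terms, not the authors' network or training code.

```python
import numpy as np

def eikonal_losses(field, spacing, surface_idx):
    """Illustrative loss terms for a discretized distance field.

    field       : (N, N) array of predicted distance values on a grid
    spacing     : grid spacing h
    surface_idx : indices (rows, cols) of grid points on the zero-level set
    """
    # Eikonal residual: a valid distance field satisfies |grad f| = 1.
    gy, gx = np.gradient(field, spacing)
    grad_norm = np.sqrt(gx**2 + gy**2)
    eikonal = np.mean((grad_norm - 1.0)**2)
    # Zero-level-set constraint: f must vanish on known surface samples.
    surface = np.mean(field[surface_idx]**2)
    return eikonal, surface

# Sanity check with the exact SDF of a circle of radius 0.5:
n, h = 65, 2.0 / 64
ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
sdf = np.sqrt(xs**2 + ys**2) - 0.5
on_surface = np.where(np.abs(sdf) < h / 2)
eik, surf = eikonal_losses(sdf, h, on_surface)
print(eik, surf)  # both terms are near zero for a true SDF
```

In an actual network the gradient would come from automatic differentiation rather than finite differences, and both terms would be minimized jointly during training.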
Related papers
- OReX: Object Reconstruction from Planar Cross-sections Using Neural Fields [10.862993171454685]
OReX is a method for 3D shape reconstruction from slices alone, featuring Neural Field gradients as the prior.
A modest neural network is trained on the input planes to return an inside/outside estimate for a given 3D coordinate, yielding a powerful prior that induces smoothness and self-similarities.
We offer an iterative estimation architecture and a hierarchical input sampling scheme that encourage coarse-to-fine training, allowing the training process to focus on high frequencies at later stages.
arXiv Detail & Related papers (2022-11-23T11:44:35Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations [21.59311861556396]
Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories.
arXiv Detail & Related papers (2022-05-05T17:56:34Z)
- Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz regularized networks, ours is computationally fast and can be implemented in four lines of code.
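The idea of regularizing toward a small Lipschitz constant can be sketched in a simplified form (not the paper's exact formulation): penalize an upper bound on the network's Lipschitz constant, computed as the product of per-layer weight-matrix norms. The network and weights below are hypothetical stand-ins.

```python
import numpy as np

def lipschitz_bound(weights):
    """Upper bound on an MLP's Lipschitz constant with 1-Lipschitz
    activations: the product of per-layer operator norms (here the
    infinity norm, i.e. the maximum absolute row sum)."""
    bound = 1.0
    for W in weights:
        bound *= np.max(np.sum(np.abs(W), axis=1))
    return bound

# Hypothetical 2-layer network; in training, the bound would be added
# to the task loss as a weighted regularizer, e.g. lam * lipschitz_bound(weights).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 3)), rng.normal(size=(1, 8))]
reg = lipschitz_bound(weights)
print(reg)
```

Minimizing this bound alongside the reconstruction loss discourages sharp variations of the field with respect to its inputs, which is what yields the smooth latent spaces the summary refers to.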
arXiv Detail & Related papers (2022-02-16T21:24:54Z)
- HyperCube: Implicit Field Representations of Voxelized 3D Models [18.868266675878996]
We introduce a new HyperCube architecture that enables direct processing of 3D voxels.
Instead of processing individual 3D samples from within a voxel, our approach allows inputting the entire voxel, represented by its convex hull coordinates.
arXiv Detail & Related papers (2021-10-12T06:56:48Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Exploring Deep 3D Spatial Encodings for Large-Scale 3D Scene Understanding [19.134536179555102]
We propose an alternative approach to overcome the limitations of CNN based approaches by encoding the spatial features of raw 3D point clouds into undirected graph models.
The proposed method achieves accuracy on par with the state of the art, with improved training time and model stability, indicating strong potential for further research.
arXiv Detail & Related papers (2020-11-29T12:56:19Z)
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high-quality SDFs.
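The "pulling" operation at the heart of this approach can be sketched with a hypothetical stand-in for the learned SDF: a query point is moved onto the zero-level set by stepping along the normalized gradient by the predicted signed distance. The analytic sphere SDF and finite-difference gradient below are illustrative assumptions; a trained network would supply both via automatic differentiation.

```python
import numpy as np

def sdf_sphere(x, radius=1.0):
    # Stand-in for a learned SDF: signed distance to a sphere.
    return np.linalg.norm(x) - radius

def numeric_grad(f, x, eps=1e-5):
    # Central-difference gradient (a trained network would use autodiff).
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def pull_to_surface(f, x):
    """Neural-Pull-style projection: move a query point x onto the
    zero-level set along the normalized SDF gradient."""
    d = f(x)
    g = numeric_grad(f, x)
    return x - d * g / np.linalg.norm(g)

q = np.array([0.3, -0.2, 0.5])      # query point inside the unit sphere
s = pull_to_surface(sdf_sphere, q)
print(np.linalg.norm(s))            # ~1.0, i.e. s lies on the sphere
```

During training, this pulling step is what couples the predicted distances and gradients to the observed point cloud: pulled query points should land on the input surface samples.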
arXiv Detail & Related papers (2020-11-26T23:18:10Z)
- On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes [38.13954772608884]
A neural implicit outputs a number indicating whether the given query point in space is inside, outside, or on a surface.
Prior works have focused on _latent-encoded_ neural implicits, where a latent vector encoding of a specific shape is also fed as input.
A _weight-encoded_ neural implicit may forgo the latent vector and focus reconstruction accuracy on the details of a single shape.
arXiv Detail & Related papers (2020-09-17T23:10:19Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.