3D Density-Gradient based Edge Detection on Neural Radiance Fields (NeRFs) for Geometric Reconstruction
- URL: http://arxiv.org/abs/2309.14800v1
- Date: Tue, 26 Sep 2023 09:56:27 GMT
- Title: 3D Density-Gradient based Edge Detection on Neural Radiance Fields (NeRFs) for Geometric Reconstruction
- Authors: Miriam Jäger, Boris Jutzi
- Abstract summary: We show how to generate geometric 3D reconstructions from Neural Radiance Fields (NeRFs) using density gradients and edge detection filters.
Our approach demonstrates the capability to achieve geometric 3D reconstructions with high geometric accuracy on object surfaces and remarkable object completeness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating geometric 3D reconstructions from Neural Radiance Fields (NeRFs) is of great interest. However, accurate and complete reconstructions based on density values are challenging, since the network output depends on the input data, the NeRF network configuration, and the hyperparameters. As a result, directly using density values, e.g., by filtering with a global density threshold, usually requires empirical investigation. Under the assumption that the density increases from non-object to object areas, the use of density gradients derived from relative values suggests itself. Since density is a position-dependent parameter, it can be handled anisotropically, which justifies processing a voxelized 3D density field. In this regard, we address geometric 3D reconstruction based on density gradients, where the gradients result from 3D edge detection filters of the first and second derivatives, namely Sobel, Canny, and Laplacian of Gaussian. The gradients rely on relative neighboring density values in all directions and are thus independent of absolute magnitudes. Consequently, gradient filters can extract edges across a wide density range, largely independent of assumptions and empirical investigation. Our approach demonstrates the capability to achieve geometric 3D reconstructions with high geometric accuracy on object surfaces and remarkable object completeness. Notably, the Canny filter effectively eliminates gaps, delivers a uniform point density, and strikes a favorable balance between correctness and completeness across the scenes.
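To make the pipeline concrete, below is a minimal sketch of density-gradient edge detection on a voxelized density field, assuming the NeRF density has already been sampled onto a regular 3D grid. This is an illustration, not the authors' implementation; the function name, the relative threshold, and the Gaussian scale are assumed for the example.

```python
# Minimal sketch: 3D density-gradient edge extraction on a voxelized
# NeRF density field (illustrative assumptions, not the paper's code).
import numpy as np
from scipy import ndimage

def edge_points_from_density(density: np.ndarray,
                             voxel_size: float = 1.0,
                             rel_threshold: float = 0.1):
    """Extract edge voxels from an (Nx, Ny, Nz) density grid."""
    # First derivative: Sobel responses along each axis, combined into
    # a gradient magnitude that depends only on relative neighbor values.
    gx = ndimage.sobel(density, axis=0)
    gy = ndimage.sobel(density, axis=1)
    gz = ndimage.sobel(density, axis=2)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

    # Second derivative: Laplacian of Gaussian response (sigma assumed).
    log_resp = ndimage.gaussian_laplace(density, sigma=1.0)

    # Threshold relative to the strongest edge, not to absolute density.
    mask = grad_mag > rel_threshold * grad_mag.max()

    # Edge voxel indices become 3D points (world alignment up to caller).
    points = np.argwhere(mask).astype(np.float64) * voxel_size
    return points, grad_mag, log_resp
```

A 3D Canny variant would additionally apply non-maximum suppression along the gradient direction and hysteresis thresholding, which matches the observation above that Canny best balances correctness and completeness.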
Related papers
- Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently performs global gradient approximation while achieving better accuracy and generalization ability in local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z)
- D2NT: A High-Performing Depth-to-Normal Translator [14.936434857460622]
This paper presents a superfast depth-to-normal translator (D2NT) that can directly translate depth images into surface normal maps without calculating 3D coordinates.
We then propose a discontinuity-aware gradient (DAG) filter and a surface normal refinement module that can easily be integrated into any depth-to-normal surface normal estimator (SNE).
Our proposed algorithm demonstrates the best accuracy among existing real-time SNEs and achieves a state-of-the-art trade-off between efficiency and accuracy.
arXiv Detail & Related papers (2023-04-24T12:08:03Z)
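As background for the entry above, the following is a minimal sketch of the standard inverse-depth relation that depth-to-normal translators build on: normals follow from image-plane depth gradients without reconstructing 3D coordinates. It assumes a pinhole camera with intrinsics fx, fy, cx, cy and is a generic formulation, not D2NT's discontinuity-aware pipeline.

```python
# Generic depth-to-normal sketch via inverse-depth image gradients
# (assumed pinhole intrinsics; not D2NT's actual DAG filter).
import numpy as np

def depth_to_normals(depth, fx, fy, cx, cy):
    """Per-pixel surface normals from a depth map (shape (H, W), meters)."""
    inv_z = 1.0 / np.clip(depth, 1e-6, None)
    # For a locally planar surface, d(1/Z)/du ~ n_x/fx and d(1/Z)/dv ~ n_y/fy.
    dv, du = np.gradient(inv_z)            # gradients along rows (v), cols (u)
    h, w = depth.shape
    u = np.arange(w)[None, :] - cx         # pixel offsets from principal point
    v = np.arange(h)[:, None] - cy
    nx = fx * du
    ny = fy * dv
    nz = inv_z - u * du - v * dv
    normals = np.stack([nx, ny, nz], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-12
    return normals
```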
- Evaluate Geometry of Radiance Fields with Low-frequency Color Prior [27.741607821885673]
A radiance field is an effective representation of 3D scenes, which has been widely adopted in novel-view synthesis and 3D reconstruction.
It is still an open and challenging problem to evaluate the geometry, i.e., the density field, as the ground-truth is almost impossible to obtain.
We propose a novel metric, named Inverse Mean Residual Color (IMRC), which can evaluate the geometry only with the observation images.
arXiv Detail & Related papers (2023-04-10T02:02:57Z)
- SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details, so the results are not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z)
- Neural Density-Distance Fields [9.742650275132029]
This paper proposes Neural Density-Distance Field (NeDDF), a novel 3D representation that reciprocally constrains the distance and density fields.
We extend the distance field formulation to shapes with no explicit boundary surface, such as fur or smoke, which enables explicit conversion from the distance field to the density field.
Experiments show that NeDDF can achieve high localization performance while providing comparable results to NeRF on novel view synthesis.
arXiv Detail & Related papers (2022-07-29T03:13:25Z)
- Point Density-Aware Voxels for LiDAR 3D Object Detection [8.136649838488042]
Point Density-Aware Voxel network (PDV) is an end-to-end two-stage LiDAR 3D object detection architecture.
PDV efficiently localizes voxel features from the 3D sparse backbone through voxel point centroids.
PDV outperforms all state-of-the-art methods on the Waymo Open Dataset.
arXiv Detail & Related papers (2022-03-10T22:11:06Z)
- Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging scene multiview datasets produced high quality geometry reconstructions.
arXiv Detail & Related papers (2021-06-22T20:23:16Z)
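For reference, the paper above (VolSDF) models density as a transformed signed distance: sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta. The sketch below assumes the convention of positive distance outside the surface; alpha and beta are scalar parameters (learnable in the paper).

```python
# Sketch of the SDF-to-density mapping used by VolSDF-style models:
# sigma(x) = alpha * Psi_beta(-d(x)), Psi_beta = zero-mean Laplace CDF.
import numpy as np

def sdf_to_density(sdf: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Map signed distances (positive outside the object) to volume density."""
    s = -sdf  # flip sign so density saturates inside the object
    # Laplace CDF with zero mean and scale beta (overflow-safe form):
    half = 0.5 * np.exp(-np.abs(s) / beta)
    psi = np.where(s > 0.0, 1.0 - half, half)
    return alpha * psi
```

A small beta concentrates the density around the zero level set, yielding a sharp, well-defined surface, while alpha scales the overall opacity.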
- DeepMesh: Differentiable Iso-Surface Extraction [53.77622255726208]
We introduce a differentiable way to produce explicit surface mesh representations from Deep Implicit Fields.
Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples.
We exploit this to define DeepMesh, an end-to-end differentiable mesh representation that can vary its topology.
arXiv Detail & Related papers (2021-06-20T20:12:41Z)
- GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement for Joint Depth and Surface Normal Estimation [204.13451624763735]
We propose a geometric neural network with edge-aware refinement (GeoNet++) to jointly predict both depth and surface normal maps from a single image.
GeoNet++ effectively predicts depth and surface normals with strong 3D consistency and sharp boundaries.
In contrast to current metrics that focus on evaluating pixel-wise error/accuracy, the proposed 3D geometric metric (3DGM) measures whether the predicted depth can reconstruct high-quality 3D surface normals.
arXiv Detail & Related papers (2020-12-13T06:48:01Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network-based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)