Neural RGB-D Surface Reconstruction
- URL: http://arxiv.org/abs/2104.04532v1
- Date: Fri, 9 Apr 2021 18:00:01 GMT
- Title: Neural RGB-D Surface Reconstruction
- Authors: Dejan Azinović, Ricardo Martin-Brualla, Dan B Goldman, Matthias Nießner, Justus Thies
- Abstract summary: Methods which learn a neural radiance field have shown amazing image synthesis results, but the underlying geometry representation is only a coarse approximation of the real geometry.
We demonstrate how depth measurements can be incorporated into the radiance field formulation to produce more detailed and complete reconstruction results.
- Score: 15.438678277705424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we explore how to leverage the success of implicit novel view
synthesis methods for surface reconstruction. Methods which learn a neural
radiance field have shown amazing image synthesis results, but the underlying
geometry representation is only a coarse approximation of the real geometry. We
demonstrate how depth measurements can be incorporated into the radiance field
formulation to produce more detailed and complete reconstruction results than
using methods based on either color or depth data alone. In contrast to a
density field as the underlying geometry representation, we propose to learn a
deep neural network which stores a truncated signed distance field. Using this
representation, we show that one can still leverage differentiable volume
rendering to estimate color values of the observed images during training to
compute a reconstruction loss. This is beneficial for learning the signed
distance field in regions with missing depth measurements. Furthermore, we
correct misalignment errors of the camera, improving the overall reconstruction
quality. In several experiments, we showcase our method and compare to existing
works on classical RGB-D fusion and learned representations.
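The abstract describes replacing NeRF's density field with a learned truncated signed distance field (TSDF) that is still rendered volumetrically. Below is a minimal numpy sketch of that idea, assuming the bell-shaped ray weights come from a product of two sigmoids of the TSDF scaled by the truncation distance, and treating the 0.1 depth-loss weight as a hypothetical choice:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def render_ray(tsdf_vals, colors, t_vals, trunc=0.05):
        # Bell-shaped weight that peaks where the TSDF crosses zero,
        # i.e. at the surface; `trunc` is the truncation distance.
        w = sigmoid(tsdf_vals / trunc) * sigmoid(-tsdf_vals / trunc)
        w = w / (np.sum(w) + 1e-8)                     # normalize along the ray
        color = np.sum(w[:, None] * colors, axis=0)    # rendered pixel color
        depth = np.sum(w * t_vals)                     # rendered depth
        return color, depth

    def reconstruction_loss(color, depth, gt_color, gt_depth):
        # Photometric loss everywhere; depth loss only where the sensor
        # returned a valid measurement (gt_depth > 0).
        l_rgb = np.mean((color - gt_color) ** 2)
        l_depth = (depth - gt_depth) ** 2 if gt_depth > 0 else 0.0
        return l_rgb + 0.1 * l_depth                   # hypothetical weighting

Because the photometric term is defined for every pixel, it still constrains the field where the depth sensor returned no measurement, which is the benefit the abstract points to.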
Related papers
- ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image [40.03212588672639]
ANIM is a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy.
Our model learns geometric details from both pixel-aligned and voxel-aligned features to leverage depth information.
Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point cloud or RGB-D data as input.
arXiv Detail & Related papers (2024-03-15T14:45:38Z)
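ANIM's pixel-aligned features follow the PIFu-style pattern of projecting each 3D query point into the image and interpolating a 2D feature map at that location. A minimal numpy sketch of that sampling step (the voxel-aligned branch and the depth encoding are omitted; all names here are hypothetical):

    import numpy as np

    def project(points, K):
        # Pinhole projection of camera-space 3D points (N, 3) to pixels.
        uvw = points @ K.T
        return uvw[:, :2] / uvw[:, 2:3]   # assumes all points have z > 0

    def sample_bilinear(feat, uv):
        # Bilinearly interpolate a feature map (H, W, C) at continuous
        # pixel locations (N, 2); each 3D point gets a feature vector.
        H, W, _ = feat.shape
        u = np.clip(uv[:, 0], 0.0, W - 1.001)
        v = np.clip(uv[:, 1], 0.0, H - 1.001)
        u0, v0 = u.astype(int), v.astype(int)
        du, dv = (u - u0)[:, None], (v - v0)[:, None]
        return (feat[v0, u0] * (1 - du) * (1 - dv)
                + feat[v0, u0 + 1] * du * (1 - dv)
                + feat[v0 + 1, u0] * (1 - du) * dv
                + feat[v0 + 1, u0 + 1] * du * dv)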
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experiments achieve state-of-the-art performance in tracking on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Depth-NeuS: Neural Implicit Surfaces Learning for Multi-view Reconstruction Based on Depth Information Optimization [6.493546601668505]
Methods for neural surface representation and rendering, such as NeuS, have made learning neural implicit surfaces through volume rendering increasingly popular.
Existing methods lack a direct representation of depth information, which leaves object reconstruction unconstrained by geometric features: they use only surface normals to represent implicit surfaces, without exploiting depth.
We propose a neural implicit surface learning method called Depth-NeuS based on depth information optimization for multi-view reconstruction.
arXiv Detail & Related papers (2023-03-30T01:19:27Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
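To see why a UDF needs a dedicated rendering scheme, consider the naive baseline below, which turns unsigned distances into a symmetric density bump. This is emphatically not NeuralUDF's method (the paper develops a new, occlusion-aware scheme), but it illustrates the difficulty: with no sign flip at the surface, a naive bump places density on both sides of it. A numpy sketch, with `beta` a hypothetical sharpness parameter:

    import numpy as np

    def naive_udf_weights(udf_vals, deltas, beta=0.01):
        # Gaussian bump peaking where the unsigned distance approaches 0.
        sigma = np.exp(-0.5 * (udf_vals / beta) ** 2) / beta
        alpha = 1.0 - np.exp(-sigma * deltas)          # per-sample opacity
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
        return trans * alpha                           # rendering weights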
- Neural Implicit Surface Reconstruction using Imaging Sonar [38.73010653104763]
We present a technique for dense 3D reconstruction of objects using an imaging sonar, also known as forward-looking sonar (FLS).
Compared to previous methods that model the scene geometry as point clouds or volumetric grids, we represent geometry as a neural implicit function.
We perform experiments on real and synthetic datasets and show that our algorithm reconstructs high-fidelity surface geometry from multi-view FLS images at much higher quality than was possible with previous techniques and without suffering from their associated memory overhead.
arXiv Detail & Related papers (2022-09-17T02:23:09Z)
- Multi-View Reconstruction using Signed Ray Distance Functions (SRDF) [22.75986869918975]
We investigate a new computational approach built on a novel volumetric shape representation.
The shape energy associated with this representation evaluates 3D geometry given color images and does not need appearance prediction.
In practice we propose an implicit shape representation, the SRDF, based on signed distances which we parameterize by depths along camera rays.
arXiv Detail & Related papers (2022-08-31T19:32:17Z)
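The SRDF parameterization in this entry admits a compact reading: along a camera ray with a depth hypothesis z, a point at ray depth t has signed ray distance z - t. A toy numpy sketch of that definition, plus a multi-view agreement score in its spirit (not the paper's exact shape energy; names are hypothetical):

    import numpy as np

    def srdf(point, cam_center, ray_dir, depth_hypothesis):
        # Signed ray distance w.r.t. one camera ray (ray_dir unit length):
        # positive in front of the hypothesized surface, negative behind.
        t = np.dot(point - cam_center, ray_dir)
        return depth_hypothesis - t

    def toy_agreement(point, cameras):
        # A true surface point should have SRDF near zero in every camera
        # that observes it; `cameras` holds (center, ray_dir, depth) tuples.
        vals = [srdf(point, c, d, z) for (c, d, z) in cameras]
        return float(np.mean(np.abs(vals)))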
- Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering [57.775678643512435]
We present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.
By combining the strengths of neural interpolation (NI) and depth image-based rendering (DIBR), the proposed Geo-NI is able to render views with large disparity.
arXiv Detail & Related papers (2022-06-20T12:25:34Z)
- Learning Signed Distance Field for Multi-view Surface Reconstruction [24.090786783370195]
We introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency.
We apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance respectively.
Our method is able to improve the robustness of geometry estimation and support reconstruction of complex scene topologies.
arXiv Detail & Related papers (2021-08-23T06:23:50Z)
- Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging multi-view scene datasets produces high-quality geometry reconstructions.
arXiv Detail & Related papers (2021-06-22T20:23:16Z)
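The "density as a function of the geometry" idea can be made concrete. In this paper's formulation (VolSDF), as far as we can reconstruct it, the density is a scaled Laplace CDF of the negated signed distance d_Omega(x), with learnable scale parameters alpha and beta:

    \sigma(\mathbf{x}) = \alpha \, \Psi_\beta\bigl(-d_\Omega(\mathbf{x})\bigr),
    \qquad
    \Psi_\beta(s) =
    \begin{cases}
        \frac{1}{2} \exp\left(\frac{s}{\beta}\right) & \text{if } s \le 0, \\
        1 - \frac{1}{2} \exp\left(-\frac{s}{\beta}\right) & \text{if } s > 0.
    \end{cases}

As beta tends to 0 the density approaches alpha inside the object and 0 outside, so the zero level set of the signed distance acts as a sharp, well-defined surface.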
- Learning Topology from Synthetic Data for Unsupervised Depth Completion [66.26787962258346]
We present a method for inferring dense depth maps from images and sparse depth measurements.
We learn the association of sparse point clouds with dense natural shapes, using the image as evidence to validate the predicted depth map.
arXiv Detail & Related papers (2021-06-06T00:21:12Z)
- DeepSurfels: Learning Online Appearance Fusion [77.59420353185355]
DeepSurfels is a novel hybrid scene representation for geometry and appearance information.
In contrast to established representations, DeepSurfels better represents high-frequency textures.
We present an end-to-end trainable online appearance fusion pipeline.
arXiv Detail & Related papers (2020-12-28T14:13:33Z)