Neuralangelo: High-Fidelity Neural Surface Reconstruction
- URL: http://arxiv.org/abs/2306.03092v2
- Date: Mon, 12 Jun 2023 20:50:07 GMT
- Title: Neuralangelo: High-Fidelity Neural Surface Reconstruction
- Authors: Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H. Taylor, Mathias Unberath, Ming-Yu Liu, Chen-Hsuan Lin
- Abstract summary: We present Neuralangelo, which combines the representation power of multi-resolution 3D hash grids with neural surface rendering.
Even without auxiliary inputs such as depth, Neuralangelo can effectively recover dense 3D surface structures from multi-view images with fidelity significantly surpassing previous methods.
- Score: 22.971952498343942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural surface reconstruction has been shown to be powerful for recovering
dense 3D surfaces via image-based neural rendering. However, current methods
struggle to recover detailed structures of real-world scenes. To address the
issue, we present Neuralangelo, which combines the representation power of
multi-resolution 3D hash grids with neural surface rendering. Two key
ingredients enable our approach: (1) numerical gradients for computing
higher-order derivatives as a smoothing operation and (2) coarse-to-fine
optimization on the hash grids controlling different levels of details. Even
without auxiliary inputs such as depth, Neuralangelo can effectively recover
dense 3D surface structures from multi-view images with fidelity significantly
surpassing previous methods, enabling detailed large-scale scene reconstruction
from RGB video captures.
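The two key ingredients are concrete enough to illustrate. Below is a minimal sketch, not the authors' implementation, of ingredient (1): computing SDF normals with central finite differences rather than analytic gradients, here with a toy sphere SDF standing in for the hash-grid MLP. The step size `eps` and its coarse-to-fine annealing are assumptions drawn from the abstract's description.

```python
import numpy as np

def sdf(x):
    """Toy signed distance function: a unit sphere at the origin.
    In Neuralangelo this would be the multi-resolution hash-grid MLP."""
    return np.linalg.norm(x, axis=-1) - 1.0

def numerical_gradient(x, eps=1e-2):
    """Central finite differences in place of analytic (autograd) gradients.

    Each difference probes points 2*eps apart, so the resulting normal is
    effectively smoothed over that scale; annealing eps from large to small
    mirrors the paper's coarse-to-fine schedule on the hash-grid levels."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.shape[-1]):
        offset = np.zeros(x.shape[-1])
        offset[i] = eps
        grad[..., i] = (sdf(x + offset) - sdf(x - offset)) / (2.0 * eps)
    return grad

# Surface normal at a point on the sphere: approaches x / |x| as eps -> 0.
p = np.array([0.6, 0.8, 0.0])
print(numerical_gradient(p))  # approximately [0.6, 0.8, 0.0]
```

Because the finite difference spans multiple hash-grid cells when `eps` is large, the eikonal regularization can propagate across cell boundaries instead of acting only locally, which is what makes the numerical gradient behave as a smoothing operation.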
Related papers
- GSDF: 3DGS Meets SDF for Improved Rendering and Reconstruction [20.232177350064735]
We introduce a novel dual-branch architecture that combines the benefits of a flexible and efficient 3D Gaussian Splatting representation with neural Signed Distance Fields (SDF).
We show on diverse scenes that our design unlocks the potential for more accurate and detailed surface reconstructions.
arXiv Detail & Related papers (2024-03-25T17:22:11Z)
- DiViNeT: 3D Reconstruction from Disparate Views via Neural Template Regularization [7.488962492863031]
We present a volume rendering-based neural surface reconstruction method that takes as few as three disparate RGB images as input.
Our key idea is to regularize the reconstruction, which is severely ill-posed and leaves significant gaps between the sparse views.
Our approach achieves the best reconstruction quality among existing methods in the presence of such sparse views.
arXiv Detail & Related papers (2023-06-07T18:05:14Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime.
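As a hedged illustration of the neural shading half of that pipeline, the sketch below shades rasterized surface points with a small MLP over position, normal, and view direction. The architecture and layer sizes are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    """Illustrative neural shader: maps per-pixel geometry buffers to RGB,
    so appearance is learned while geometry stays an explicit triangle mesh."""
    def __init__(self, hidden=256):
        super().__init__()
        # Input: 3D position + unit normal + unit view direction = 9 values.
        self.mlp = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, position, normal, view_dir):
        return self.mlp(torch.cat([position, normal, view_dir], dim=-1))

# Shade a batch of surface points produced by a (differentiable) rasterizer.
shader = NeuralShader()
pos = torch.randn(1024, 3)
nrm = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
view = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rgb = shader(pos, nrm, view)  # shape (1024, 3)
```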
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume-rendering-based neural implicit surface reconstruction method capable of recovering fine geometric details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs surfaces with high accuracy and fine detail, and outperforms the state of the art.
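The SDF zero-crossing interpolation mentioned above reduces to one linear-interpolation step per ray. A minimal sketch, assuming (as an illustrative simplification) that the SDF varies linearly between adjacent samples; the function names are hypothetical, not the authors' code:

```python
import numpy as np

def first_zero_crossing(t, f):
    """t: sample depths along a ray (sorted); f: SDF values at those depths.

    Returns the linearly interpolated depth of the first outside-to-inside
    (+ -> -) sign change, or None if the ray never crosses the surface."""
    for i in range(len(t) - 1):
        if f[i] > 0.0 and f[i + 1] <= 0.0:
            # Solve f(t*) = 0 on the segment between (t_i, f_i) and (t_{i+1}, f_{i+1}).
            alpha = f[i] / (f[i] - f[i + 1])
            return t[i] + alpha * (t[i + 1] - t[i])
    return None

t = np.linspace(0.0, 2.0, 9)
f = 1.0 - t  # toy SDF along the ray: surface at t = 1
print(first_zero_crossing(t, f))  # 1.0
```

The surface points located this way are then where the multi-view feature consistency loss is imposed.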
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
- MonoNeuralFusion: Online Monocular Neural 3D Reconstruction with Geometric Priors [41.228064348608264]
This paper introduces a novel neural implicit scene representation with volume rendering for high-fidelity online 3D scene reconstruction from monocular videos.
For fine-grained reconstruction, our key insight is to incorporate geometric priors into both the neural implicit scene representation and neural volume rendering.
MonoNeuralFusion consistently generates more complete and fine-grained reconstructions, both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-09-30T00:44:26Z)
- Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model [76.64071133839862]
Capturing general deforming scenes from monocular RGB video is crucial for many computer graphics and vision applications.
Our method, Ub4D, handles large deformations, performs shape completion in occluded regions, and can operate on monocular RGB videos directly by using differentiable volume rendering.
Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations.
arXiv Detail & Related papers (2022-06-16T17:59:54Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude faster to render than previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)