Learning Neural Radiance Fields from Multi-View Geometry
- URL: http://arxiv.org/abs/2210.13041v1
- Date: Mon, 24 Oct 2022 08:53:35 GMT
- Title: Learning Neural Radiance Fields from Multi-View Geometry
- Authors: Marco Orsingher, Paolo Zani, Paolo Medici, Massimo Bertozzi
- Abstract summary: We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
- Score: 1.1011268090482573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a framework, called MVG-NeRF, that combines classical Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction. NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable volumetric rendering formulation that enables high-quality and geometry-aware novel view synthesis. However, the underlying geometry of the scene is not explicitly constrained during training, leading to noisy and incorrect results when extracting a mesh with marching cubes. To address this, we propose to leverage pixelwise depths and normals from a classical 3D reconstruction pipeline as geometric priors to guide NeRF optimization. Such priors are used as pseudo-ground truth during training in order to improve the quality of the estimated underlying surface. Moreover, each pixel is weighted by a confidence value based on the forward-backward reprojection error for additional robustness. Experimental results on real-world data demonstrate the effectiveness of this approach in obtaining clean 3D meshes from images while maintaining competitive performance in novel view synthesis.
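No implementation details are given beyond the abstract, so the following minimal PyTorch sketch shows one plausible form of the described supervision: the expected depth from NeRF's volumetric rendering weights is compared against the MVS pseudo-ground truth, with each pixel down-weighted by a confidence derived from its forward-backward reprojection error. All function names, the exponential confidence mapping, and the specific penalties (L1 on depth, cosine on normals) are illustrative assumptions, not the authors' implementation.

```python
import torch

def render_depth(sigmas: torch.Tensor, z_vals: torch.Tensor) -> torch.Tensor:
    """Expected depth per ray via standard NeRF volumetric rendering.

    sigmas: [R, S] predicted densities; z_vals: [R, S] sample depths.
    """
    deltas = z_vals[..., 1:] - z_vals[..., :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[..., :1])], dim=-1)
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    # Exclusive cumulative product = transmittance up to each sample.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[..., :1]), 1.0 - alphas + 1e-10], dim=-1),
        dim=-1,
    )[..., :-1]
    weights = alphas * trans                       # [R, S], sums to <= 1 per ray
    return (weights * z_vals).sum(dim=-1)          # [R] expected termination depth

def reprojection_confidence(fb_error: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Map forward-backward reprojection error to a confidence in (0, 1].
    (Assumed exponential decay; any monotonically decreasing map would do.)"""
    return torch.exp(-fb_error / tau)

def geometric_prior_loss(depth_pred, normal_pred, depth_mvs, normal_mvs, conf):
    """Confidence-weighted pseudo-ground-truth supervision on depth and normals."""
    depth_term = conf * (depth_pred - depth_mvs).abs()               # L1 on depth
    normal_term = conf * (1.0 - (normal_pred * normal_mvs).sum(-1))  # cosine on normals
    return depth_term.mean() + normal_term.mean()
```

The confidence weighting means pixels where the classical pipeline disagrees with itself (high forward-backward error) contribute little to the loss, so noisy priors do not drag the optimized surface toward wrong geometry.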
Related papers
- PlaNeRF: SVD Unsupervised 3D Plane Regularization for NeRF Large-Scale Scene Reconstruction [2.2369578015657954]
Neural Radiance Fields (NeRF) enable 3D scene reconstruction from 2D images and camera poses for Novel View Synthesis (NVS).
NeRF often suffers from overfitting to training views, leading to poor geometry reconstruction.
We propose a new method to improve NeRF's 3D structure using only RGB images and semantic maps (a generic sketch of the SVD planarity penalty suggested by the title follows this list).
arXiv Detail & Related papers (2023-05-26T13:26:46Z)
- Improving Neural Radiance Fields with Depth-aware Optimization for Novel View Synthesis [12.3338393483795]
We propose SfMNeRF, a method to better synthesize novel views as well as reconstruct the 3D scene geometry.
SfMNeRF employs epipolar, photometric-consistency, depth-smoothness, and position-of-matches constraints to explicitly reconstruct the 3D scene structure.
Experiments on two public datasets demonstrate that SfMNeRF surpasses state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-11T13:37:17Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We then jointly refine the appearance and geometry, and bake the appearance into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images [52.021529273866896]
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations during optimization, modeling geometry as signed distance fields (SDFs) together with materials, to enjoy their flexibility and compactness.
We show that IRON achieves significantly better inverse rendering quality compared to prior works.
arXiv Detail & Related papers (2022-04-05T14:14:18Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state of the art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from one or more images using a hybrid approach based on deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
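The PlaNeRF entry above names an SVD-based plane regularizer in its title. As a hedged illustration (a textbook construction, not necessarily PlaNeRF's exact loss), a planarity penalty for a patch of 3D points can be built from the smallest singular value of the centered point matrix:

```python
import torch

def planarity_loss(points: torch.Tensor) -> torch.Tensor:
    """SVD-based planarity penalty for a patch of 3D points, shape [N, 3].

    After centering, the smallest singular value measures the patch's
    residual thickness along the fitted plane normal; driving it toward
    zero flattens the patch onto a plane.
    """
    centered = points - points.mean(dim=0, keepdim=True)  # remove centroid
    svals = torch.linalg.svdvals(centered)                # descending order
    return svals[-1] / (svals.sum() + 1e-8)               # scale-normalized

# Usage sketch: penalize points sampled from a region labeled planar
# (e.g., by the semantic maps the summary mentions).
patch = torch.randn(128, 3) * torch.tensor([1.0, 1.0, 0.01])  # near-planar toy data
loss = planarity_loss(patch)
```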
This list is automatically generated from the titles and abstracts of the papers on this site.