NeRFMeshing: Distilling Neural Radiance Fields into
Geometrically-Accurate 3D Meshes
- URL: http://arxiv.org/abs/2303.09431v1
- Date: Thu, 16 Mar 2023 16:06:03 GMT
- Title: NeRFMeshing: Distilling Neural Radiance Fields into
Geometrically-Accurate 3D Meshes
- Authors: Marie-Julie Rakotosaona, Fabian Manhardt, Diego Martin Arroyo, Michael
Niemeyer, Abhijit Kundu, Federico Tombari
- Abstract summary: We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
- Score: 56.31855837632735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the introduction of Neural Radiance Fields (NeRFs), novel view synthesis
has recently made a big leap forward. At its core, NeRF proposes that each 3D
point can emit radiance, allowing view synthesis to be conducted via
differentiable volumetric rendering. While neural radiance fields can
accurately represent 3D scenes for computing the image rendering, 3D meshes are
still the main scene representation supported by most computer graphics and
simulation pipelines, enabling tasks such as real time rendering and
physics-based simulations. Obtaining 3D meshes from neural radiance fields
remains an open challenge, since NeRFs are optimized for view synthesis and do
not enforce an accurate underlying geometry on the radiance field. We thus
propose a novel compact and flexible architecture that enables easy 3D surface
reconstruction from any NeRF-driven approach. Upon having trained the radiance
field, we distill the volumetric 3D representation into a Signed Surface
Approximation Network, allowing easy extraction of the 3D mesh and appearance.
Our final 3D mesh is physically accurate and can be rendered in real time on an
array of devices.
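The abstract states that, after training the radiance field, the volumetric representation is distilled into a Signed Surface Approximation Network from which the mesh and appearance can be extracted. Below is a minimal, hypothetical sketch of that final extraction step, assuming the distilled field behaves like a signed distance function; the `sphere_sdf` placeholder stands in for the trained network (whose actual architecture is not described here), the appearance branch is omitted, and `extract_mesh` is an illustrative helper rather than the paper's implementation.

```python
# Hypothetical sketch: extract a triangle mesh from a distilled,
# signed-distance-style field by sampling it on a grid and running
# marching cubes on the zero level set.
import numpy as np
from skimage import measure  # pip install scikit-image


def sphere_sdf(points: np.ndarray, radius: float = 0.5) -> np.ndarray:
    """Placeholder signed distance field: negative inside, positive outside.

    Stands in for the trained Signed Surface Approximation Network.
    """
    return np.linalg.norm(points, axis=-1) - radius


def extract_mesh(sdf_fn, resolution: int = 128, bound: float = 1.0):
    """Sample the field on a dense grid over [-bound, bound]^3 and mesh it."""
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    values = sdf_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

    # Marching cubes returns the zero level set as a triangle mesh.
    spacing = (2.0 * bound / (resolution - 1),) * 3
    verts, faces, normals, _ = measure.marching_cubes(values, level=0.0, spacing=spacing)
    verts -= bound  # map vertex coordinates back to [-bound, bound]^3
    return verts, faces, normals


if __name__ == "__main__":
    verts, faces, _ = extract_mesh(sphere_sdf)
    print(f"Extracted mesh: {len(verts)} vertices, {len(faces)} faces")
```

The resulting vertices and faces can then be written to a standard mesh format and rendered in real time by conventional graphics pipelines, which is the motivation given in the abstract.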
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields [1.1531932979578041]
NeRF, short for Neural Radiance Fields, is a recent innovation that uses AI algorithms to create 3D objects from 2D images.
This survey reviews recent advances in NeRF and categorizes them according to their architectural designs.
arXiv Detail & Related papers (2023-06-05T16:10:21Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation.
Our approach achieves sufficiently higher image quality and better 3D control than the previous methods.
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric rendering.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.