ERF: Explicit Radiance Field Reconstruction From Scratch
- URL: http://arxiv.org/abs/2203.00051v1
- Date: Mon, 28 Feb 2022 19:37:12 GMT
- Title: ERF: Explicit Radiance Field Reconstruction From Scratch
- Authors: Samir Aroudj and Steven Lovegrove and Eddy Ilg and Tanner Schmidt and
Michael Goesele and Richard Newcombe
- Abstract summary: We propose a novel explicit dense 3D reconstruction approach that processes a set of images of a scene, together with sensor poses and calibrations, and estimates a photo-real digital model.
One of the key innovations is that the underlying volumetric representation is completely explicit.
We show that our method is general and practical. It does not require a highly controlled lab setup for capture, but allows for reconstructing scenes with a wide variety of objects.
- Score: 12.254150867994163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel explicit dense 3D reconstruction approach that processes a
set of images of a scene, together with sensor poses and calibrations, and estimates a
photo-real digital model. One of the key innovations is that the underlying
volumetric representation is completely explicit, in contrast to neural
network-based (implicit) alternatives. We encode scenes explicitly using clear
and understandable mappings of optimization variables to scene geometry and
its outgoing surface radiance. We represent geometry and radiance using
hierarchical volumetric fields stored in a sparse voxel octree. Robustly
reconstructing such a volumetric scene model with millions of unknown variables
from registered scene images alone is a highly non-convex and complex
optimization problem. To this end, we employ stochastic gradient descent
(Adam), steered by an inverse differentiable renderer.
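To make that optimization concrete, here is a minimal sketch, not the authors' implementation: a dense voxel grid along a single ray stands in for the paper's sparse voxel octree, and Adam fits the explicit density and radiance variables through a differentiable emission-absorption renderer. All names, shapes, and constants are illustrative assumptions.
```python
# Minimal sketch (not the authors' code): an explicit radiance field as a
# dense voxel grid along one ray, fitted with Adam through a differentiable
# emission-absorption renderer. ERF itself uses a sparse voxel octree and
# full camera models; everything here is illustrative.
import jax
import jax.numpy as jnp
from jax.example_libraries.optimizers import adam

N = 64        # voxel samples along the ray (assumed resolution)
dt = 1.0 / N  # step size between samples

def render(params):
    """Composite a pixel color via emission-absorption integration."""
    sigma = jax.nn.softplus(params["sigma"])   # non-negative densities
    rgb = jax.nn.sigmoid(params["rgb"])        # radiance in [0, 1]
    alpha = 1.0 - jnp.exp(-sigma * dt)         # per-sample opacity
    # Exclusive cumulative product: transmittance up to each sample.
    trans = jnp.cumprod(jnp.concatenate([jnp.ones(1), 1.0 - alpha[:-1]]))
    weights = alpha * trans                    # contribution of each sample
    return jnp.sum(weights[:, None] * rgb, axis=0)

def loss(params, target):
    return jnp.sum((render(params) - target) ** 2)   # photometric error

# Uninformed initialization, far from the ground truth, as in the paper.
params = {"sigma": jnp.zeros(N), "rgb": jnp.zeros((N, 3))}
opt_init, opt_update, get_params = adam(1e-2)
state = opt_init(params)
target = jnp.array([0.8, 0.4, 0.1])   # observed pixel color (made up)

for step in range(500):
    grads = jax.grad(loss)(get_params(state), target)
    state = opt_update(step, grads, state)
```
Scaling this up amounts to batching rays over all registered input images; the sparse octree in the paper is what keeps the millions of unknown variables tractable.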
We demonstrate that our method can reconstruct models of high quality that
are comparable to those of state-of-the-art implicit methods. Importantly, we
do not use a sequential reconstruction pipeline, where individual steps suffer
from incomplete or unreliable information from previous stages; instead, we
start our optimizations from uninformed initial solutions whose scene geometry
and radiance are far off from the ground truth. We show that our method is
general and practical. It does not require a highly controlled lab setup for
capture, but allows for reconstructing scenes with a wide variety of objects,
including challenging ones such as outdoor plants or furry toys. Finally, our
reconstructed scene models are versatile thanks to their explicit design. They
can be edited interactively, which is computationally too costly for implicit
alternatives.
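To illustrate the editing claim, a sketch with hypothetical helper names: because the model is a set of plain voxel arrays, an edit is a direct write into the density and radiance fields, with no re-optimization of network weights as an implicit model would require.
```python
# Hypothetical helpers (not from the paper): edits to an explicit model
# are direct writes into the voxel arrays, with no retraining pass.
import jax.numpy as jnp

def carve_sphere(sigma, centers, origin, radius):
    """Delete geometry by zeroing density inside a sphere."""
    inside = jnp.linalg.norm(centers - origin, axis=-1) < radius
    return jnp.where(inside, 0.0, sigma)

def tint_sphere(rgb, centers, origin, radius, color):
    """Recolor the outgoing surface radiance inside a sphere."""
    inside = jnp.linalg.norm(centers - origin, axis=-1) < radius
    return jnp.where(inside[:, None], jnp.asarray(color), rgb)

# Example: carve a hole, then tint a region, in a flat list of voxels.
centers = jnp.stack(jnp.meshgrid(*[jnp.linspace(0, 1, 16)] * 3), -1).reshape(-1, 3)
sigma = jnp.ones(centers.shape[0])
rgb = jnp.full((centers.shape[0], 3), 0.5)
sigma = carve_sphere(sigma, centers, jnp.array([0.5, 0.5, 0.5]), 0.2)
rgb = tint_sphere(rgb, centers, jnp.array([0.5, 0.5, 0.5]), 0.3, [1.0, 0.0, 0.0])
```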
Related papers
- No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images [100.80376573969045]
NoPoSplat is a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from multi-view images.
Our model achieves real-time 3D Gaussian reconstruction during inference.
This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios.
arXiv Detail & Related papers (2024-10-31T17:58:22Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- Generalizable Patch-Based Neural Rendering [46.41746536545268]
We propose a new paradigm for learning models that can synthesize novel views of unseen scenes.
Our method is capable of predicting the color of a target ray in a novel scene directly, just from a collection of patches sampled from the scene.
We show that our approach outperforms the state-of-the-art on novel view synthesis of unseen scenes even when being trained with considerably less data than prior work.
arXiv Detail & Related papers (2022-07-21T17:57:04Z)
- ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z)
- PERF: Performant, Explicit Radiance Fields [1.933681537640272]
We present a novel way of approaching image-based 3D reconstruction based on radiance fields.
The problem of volumetric reconstruction is formulated as a non-linear least-squares problem and solved explicitly without the use of neural networks.
arXiv Detail & Related papers (2021-12-10T15:29:00Z)
- GeoNeRF: Generalizing NeRF with Geometry Priors [2.578242050187029]
We present GeoNeRF, a generalizable photorealistic novel view method based on neural radiance fields.
Our approach consists of two main stages: a geometry reasoner and a synthesis stage.
Experiments show that GeoNeRF outperforms state-of-the-art generalizable neural rendering models on various synthetic and real datasets.
arXiv Detail & Related papers (2021-11-26T15:15:37Z)
- Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF Material from Images via Differentiable Path Tracing [16.975014467319443]
Differentiable path tracing is an appealing framework as it can reproduce complex appearance effects.
We show how to use differentiable ray tracing to refine an initial coarse mesh and per-mesh-facet material representation.
We also show how to refine initial reconstructions of real-world objects in unconstrained environments.
arXiv Detail & Related papers (2020-12-06T18:55:35Z)
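For the last entry above, a heavily simplified toy of the refinement idea: the paper refines meshes and SVBRDF materials through a full differentiable path tracer, whereas this stand-in refines only per-facet albedo through Lambertian shading; all data, sizes, and names are made up.
```python
# Toy stand-in (not the paper's method): refine per-mesh-facet albedo by
# gradient descent through a differentiable Lambertian shading model.
import jax
import jax.numpy as jnp

num_facets, num_pixels = 8, 256
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
facet_ids = jax.random.randint(k1, (num_pixels,), 0, num_facets)   # pixel -> facet
normals = jnp.tile(jnp.array([0.0, 0.0, 1.0]), (num_pixels, 1))    # toy normals
light_dir = jnp.array([0.0, 0.0, 1.0])
observed = jax.random.uniform(k2, (num_pixels, 3))                 # fake photograph

def shade(albedo):
    """Render each pixel from its facet's albedo under one directional light."""
    ndotl = jnp.clip(normals @ light_dir, 0.0, 1.0)
    return albedo[facet_ids] * ndotl[:, None]

def photometric_loss(albedo):
    return jnp.mean((shade(albedo) - observed) ** 2)

albedo = jnp.full((num_facets, 3), 0.5)       # coarse initial material estimate
grad_fn = jax.grad(photometric_loss)
for _ in range(200):
    albedo = albedo - 0.5 * grad_fn(albedo)   # plain gradient refinement
```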