PERF: Performant, Explicit Radiance Fields
- URL: http://arxiv.org/abs/2112.05598v1
- Date: Fri, 10 Dec 2021 15:29:00 GMT
- Title: PERF: Performant, Explicit Radiance Fields
- Authors: Sverker Rasmuson, Erik Sintorn, Ulf Assarsson
- Abstract summary: We present a novel way of approaching image-based 3D reconstruction based on radiance fields.
The problem of volumetric reconstruction is formulated as a non-linear least-squares problem and solved explicitly without the use of neural networks.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel way of approaching image-based 3D reconstruction based on
radiance fields. The problem of volumetric reconstruction is formulated as a
non-linear least-squares problem and solved explicitly without the use of
neural networks. This enables the use of solvers with a higher rate of
convergence than what is typically used for neural networks, and fewer
iterations are required until convergence. The volume is represented using a
grid of voxels, with the scene surrounded by a hierarchy of environment maps.
This makes it possible to get clean reconstructions of 360° scenes where
the foreground and background are separated. A number of synthetic and real
scenes from well-known benchmark suites are successfully reconstructed with
quality on par with state-of-the-art methods, but at significantly reduced
reconstruction times.
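To make the abstract's formulation concrete, below is a heavily simplified sketch of fitting an explicit voxel radiance field by non-linear least squares rather than SGD on a neural network. The grid size, ray setup, grayscale colours, and the use of SciPy's trust-region reflective solver (standing in for whichever higher-convergence solver the paper uses) are all illustrative assumptions, and the environment-map hierarchy is omitted entirely.

```python
import numpy as np
from scipy.optimize import least_squares

N = 8          # voxels per axis (tiny, for illustration only)
S = 16         # samples per ray
DT = 0.9 / S   # step length along each ray

def sample_grid(grid, pts):
    # Nearest-neighbour lookup for brevity; a real implementation interpolates.
    idx = np.clip((pts * N).astype(int), 0, N - 1)
    return grid[idx[:, 0], idx[:, 1], idx[:, 2]]

def render(params, origins, dirs):
    # Standard emission-absorption compositing over an explicit voxel grid.
    sigma = params[:N**3].reshape(N, N, N)   # per-voxel density
    color = params[N**3:].reshape(N, N, N)   # per-voxel grey value
    t = np.linspace(0.05, 0.95, S)
    out = np.empty(len(origins))
    for k, (o, d) in enumerate(zip(origins, dirs)):
        pts = o + t[:, None] * d
        s = np.maximum(sample_grid(sigma, pts), 0.0)
        c = sample_grid(color, pts)
        alpha = 1.0 - np.exp(-s * DT)
        trans = np.concatenate(([1.0], np.cumprod(1.0 - alpha)[:-1]))
        out[k] = np.sum(trans * alpha * c)
    return out

def residuals(params, origins, dirs, pixels):
    # Residuals of the non-linear least-squares problem: rendered - observed.
    return render(params, origins, dirs) - pixels

# Toy data: a few parallel rays through the unit cube observing a grey pixel.
rng = np.random.default_rng(0)
origins = np.tile([0.5, 0.5, 0.0], (32, 1)) + 0.05 * rng.standard_normal((32, 3))
dirs = np.tile([0.0, 0.0, 1.0], (32, 1))
pixels = np.full(32, 0.5)

x0 = np.full(2 * N**3, 0.5)   # densities and colours, optimised jointly
sol = least_squares(residuals, x0, args=(origins, dirs, pixels), method="trf")
print("final cost:", sol.cost)
```

Because every unknown is an explicit grid value, the Jacobian of the residuals is available to a second-order-style solver, which is what permits far fewer iterations than typical network training.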
Related papers
- DGTR: Distributed Gaussian Turbo-Reconstruction for Sparse-View Vast Scenes [81.56206845824572]
Novel-view synthesis (NVS) approaches play a critical role in vast scene reconstruction.
Few-shot methods often struggle with poor reconstruction quality in vast environments.
This paper presents DGTR, a novel distributed framework for efficient Gaussian reconstruction for sparse-view vast scenes.
arXiv Detail & Related papers (2024-11-19T07:51:44Z)
- DistGrid: Scalable Scene Reconstruction with Distributed Multi-resolution Hash Grid [10.458776364195796]
We propose a scalable scene reconstruction method based on joint Multi-resolution Hash Grids, named DistGrid.
Our method outperforms existing methods on all evaluated large-scale scenes, and provides visually plausible scene reconstruction.
arXiv Detail & Related papers (2024-05-07T15:41:20Z)
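As context for the DistGrid entry above, here is a minimal sketch of the multi-resolution hash-grid feature lookup that such methods distribute across nodes. The level count, table size, and hashing constants follow the well-known Instant-NGP recipe and are assumptions, not DistGrid's actual configuration.

```python
import numpy as np

LEVELS, TABLE_SIZE, FEAT_DIM = 4, 2**14, 2
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)
tables = [np.random.default_rng(l).standard_normal((TABLE_SIZE, FEAT_DIM)) * 1e-4
          for l in range(LEVELS)]

def hash_corner(corner):
    # Spatial hash of an integer grid corner (x, y, z); uint64 wrap is intended.
    return int(np.bitwise_xor.reduce(corner.astype(np.uint64) * PRIMES)) % TABLE_SIZE

def encode(p):
    # Concatenate trilinearly interpolated features from each resolution level.
    feats = []
    for l in range(LEVELS):
        res = 16 * 2**l                      # per-level grid resolution
        x = p * res
        lo = np.floor(x).astype(int)
        w = x - lo                           # trilinear weights
        f = np.zeros(FEAT_DIM)
        for dz in (0, 1):
            for dy in (0, 1):
                for dx in (0, 1):
                    corner = lo + (dx, dy, dz)
                    wc = ((w[0] if dx else 1 - w[0]) *
                          (w[1] if dy else 1 - w[1]) *
                          (w[2] if dz else 1 - w[2]))
                    f += wc * tables[l][hash_corner(corner)]
        feats.append(f)
    return np.concatenate(feats)             # shape (LEVELS * FEAT_DIM,)

print(encode(np.array([0.3, 0.7, 0.1])).shape)   # (8,)
```

Distributing such a structure amounts to sharding the per-level tables (and the spatial regions that query them) across workers.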
- 3D Reconstruction with Generalizable Neural Fields using Scene Priors [71.37871576124789]
We introduce training of generalizable Neural Fields incorporating scene Priors (NFPs).
The NFP network maps any single-view RGB-D image into signed distance and radiance values.
A complete scene can be reconstructed by merging individual frames in the volumetric space without a fusion module.
arXiv Detail & Related papers (2023-09-26T18:01:02Z)
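A hedged sketch of the frame-merging idea in the NFP entry above: per-frame predictions are accumulated directly in a shared volume rather than passed through a learned fusion module. The confidence-weighted averaging scheme here is an assumption for illustration, not the paper's actual merging rule.

```python
import numpy as np

N = 64
sdf_sum = np.zeros((N, N, N))   # running weighted sum of predicted SDF values
weight = np.zeros((N, N, N))    # running sum of confidences

def integrate_frame(pred_sdf, vox_idx, conf):
    # Accumulate one frame's predicted SDF values at its visible voxels.
    # pred_sdf: (M,) predictions, vox_idx: (M, 3) integer voxel coords.
    np.add.at(sdf_sum, tuple(vox_idx.T), conf * pred_sdf)
    np.add.at(weight, tuple(vox_idx.T), conf)

# After integrating all frames, the merged field is a simple weighted mean:
# merged = sdf_sum / np.maximum(weight, 1e-8)
```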
- BundleRecon: Ray Bundle-Based 3D Neural Reconstruction [9.478278728273336]
We propose an enhanced model called BundleRecon for neural implicit multi-view reconstruction.
In existing approaches, sampling is performed with a single ray that corresponds to a single pixel.
In contrast, our model samples a patch of pixels using a bundle of rays, which incorporates information from neighboring pixels.
arXiv Detail & Related papers (2023-05-12T09:39:08Z)
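An illustrative sketch of the patch-based ray-bundle sampling described in the BundleRecon entry above: instead of one ray per sampled pixel, rays are emitted for a whole k x k patch so neighbouring pixels are processed together. The pinhole camera model and patch size are assumptions.

```python
import numpy as np

def ray_bundle(K_inv, cam2world, center_uv, k=3):
    # Rays for a k x k pixel patch centred at center_uv (pinhole camera).
    offs = np.arange(k) - k // 2
    du, dv = np.meshgrid(offs, offs)
    uv1 = np.stack([center_uv[0] + du.ravel(),
                    center_uv[1] + dv.ravel(),
                    np.ones(k * k)], axis=-1)          # homogeneous pixel coords
    dirs_cam = uv1 @ K_inv.T                            # back-project to camera space
    dirs = dirs_cam @ cam2world[:3, :3].T               # rotate into world space
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    origins = np.broadcast_to(cam2world[:3, 3], dirs.shape)
    return origins, dirs                                # each (k*k, 3)

K_inv = np.linalg.inv(np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]]))
origins, dirs = ray_bundle(K_inv, np.eye(4), (320, 240))
print(dirs.shape)   # (9, 3)
```

Rendering all k*k rays jointly is what lets patch-level losses exploit information from neighbouring pixels.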
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method with a Signed Ray Distance Function (SRDF).
On the DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse-view reconstruction and achieves accuracy comparable to MVSNet in full-view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
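The differentiable rendering formulation referred to in the MVG-NeRF entry above is the standard NeRF volume-rendering quadrature (Mildenhall et al., 2020), where $\sigma_i$ and $c_i$ are the density and colour sampled at the $i$-th point along a ray $\mathbf{r}$ and $\delta_i$ is the spacing between samples:

```latex
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) c_i,
\qquad
T_i = \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr)
```

Because $\hat{C}$ is differentiable in every $\sigma_i$ and $c_i$, a photometric loss against the input images can drive both appearance and geometry, which is what makes the formulation geometry-aware.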
- ERF: Explicit Radiance Field Reconstruction From Scratch [12.254150867994163]
We propose a novel explicit dense 3D reconstruction approach that processes a set of images of a scene, together with sensor poses and calibrations, and estimates a photo-real digital model.
One of the key innovations is that the underlying volumetric representation is completely explicit.
We show that our method is general and practical. It does not require a highly controlled lab setup for capture, but allows for reconstructing scenes with a wide variety of objects.
arXiv Detail & Related papers (2022-02-28T19:37:12Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
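A hedged sketch of InfoNeRF-style ray entropy regularisation from the entry above: treat the normalised per-sample opacities along a ray as a discrete distribution and penalise its Shannon entropy, which concentrates density near the surface when few views are available. The masking threshold below is an assumption.

```python
import numpy as np

def ray_entropy_loss(alphas, eps=1e-10, mask_thresh=0.1):
    # alphas: (num_rays, num_samples) per-sample opacities in [0, 1].
    mass = alphas.sum(axis=-1, keepdims=True)
    p = alphas / np.maximum(mass, eps)             # normalised ray distribution
    entropy = -(p * np.log(p + eps)).sum(axis=-1)  # Shannon entropy per ray
    # Ignore rays that hit almost nothing (likely background), as InfoNeRF does.
    keep = (mass.squeeze(-1) > mask_thresh).astype(float)
    return (entropy * keep).sum() / np.maximum(keep.sum(), 1.0)

loss = ray_entropy_loss(np.random.default_rng(0).uniform(size=(4, 64)))
print(loss)
```

In training, this term is simply added to the photometric loss with a small weight.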
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- Non-line-of-Sight Imaging via Neural Transient Fields [52.91826472034646]
We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.
Inspired by the recent Neural Radiance Field (NeRF) approach, we use a multi-layer perceptron (MLP) to represent the neural transient field or NeTF.
We formulate a spherical volume NeTF reconstruction pipeline, applicable to both confocal and non-confocal setups.
arXiv Detail & Related papers (2021-01-02T05:20:54Z)
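A minimal sketch of the MLP underlying a neural transient field as in the NeTF entry above: a small fully connected network mapping a 3D point, here in the spherical coordinates the paper's pipeline is built around, to volume density and albedo. The layer sizes, output activations, and the omission of positional encoding are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((64, 3)) * 0.1, np.zeros(64)
W2, b2 = rng.standard_normal((2, 64)) * 0.1, np.zeros(2)

def netf_mlp(r, theta, phi):
    # (r, theta, phi) -> (density sigma >= 0, albedo rho in [0, 1]).
    h = np.maximum(W1 @ np.array([r, theta, phi]) + b1, 0.0)  # ReLU layer
    sigma, rho = W2 @ h + b2
    return np.log1p(np.exp(sigma)), 1 / (1 + np.exp(-rho))    # softplus, sigmoid

print(netf_mlp(0.5, 1.2, 0.3))
```

Integrating such a field over expanding spherical shells yields the simulated transient histograms that are compared against the measured ones during optimisation.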