Reducing Shape-Radiance Ambiguity in Radiance Fields with a Closed-Form Color Estimation Method
- URL: http://arxiv.org/abs/2312.12726v1
- Date: Wed, 20 Dec 2023 02:50:03 GMT
- Title: Reducing Shape-Radiance Ambiguity in Radiance Fields with a Closed-Form Color Estimation Method
- Authors: Qihang Fang, Yafei Song, Keqiang Li, Liefeng Bo
- Abstract summary: We propose a more adaptive method to reduce the shape-radiance ambiguity.
We first estimate the color field based on the density field and posed images in a closed form.
Experimental results show that our method improves the density field of NeRF both qualitatively and quantitatively.
- Score: 24.44659061093503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance field (NeRF) enables the synthesis of cutting-edge realistic
novel view images of a 3D scene. It includes density and color fields to model
the shape and radiance of a scene, respectively. Supervised by the photometric
loss in an end-to-end training manner, NeRF inherently suffers from the
shape-radiance ambiguity problem, i.e., it can perfectly fit training views but
does not guarantee decoupling the two fields correctly. To deal with this
issue, existing works have incorporated prior knowledge to provide an
independent supervision signal for the density field, including total variation
loss, sparsity loss, distortion loss, etc. These losses are based on general
assumptions about the density field, e.g., it should be smooth, sparse, or
compact, which are not adaptive to a specific scene. In this paper, we propose
a more adaptive method to reduce the shape-radiance ambiguity. The key is a
rendering method that is only based on the density field. Specifically, we
first estimate the color field based on the density field and posed images in a
closed form. Then NeRF's rendering process can proceed. We address the problems
in estimating the color field, including occlusion and non-uniformly
distributed views. Afterward, it is applied to regularize NeRF's density field.
As our regularization is guided by photometric loss, it is more adaptive
compared to existing ones. Experimental results show that our method improves
the density field of NeRF both qualitatively and quantitatively. Our code is
available at https://github.com/qihangGH/Closed-form-color-field.
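The two steps described in the abstract, a density-only rendering weight and a closed-form color estimate, can be illustrated with a minimal NumPy sketch. The function names and the single-point setup here are illustrative assumptions, not the authors' implementation: given the density field, each view's contribution to a point's color is weighted by that point's visibility (transmittance) from the view, and the least-squares solution for a constant per-point color is then just the visibility-weighted mean of the observed pixel colors.

```python
import numpy as np

def render_weights(sigmas, deltas):
    """Standard volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the transmittance accumulated before sample i along the ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return trans * alphas

def closed_form_color(obs_colors, visibilities):
    """Least-squares color of a single 3D point seen from V views.

    obs_colors:   (V, 3) pixel colors observed along each view's ray.
    visibilities: (V,) transmittance of the point from each view, computed
                  from the density field; occluded views get near-zero
                  weight, which handles occlusion softly.
    """
    w = visibilities / (visibilities.sum() + 1e-8)
    return w @ obs_colors  # visibility-weighted mean: the closed-form optimum

# Example: two views see the point clearly, one view is occluded.
colors = np.array([[1.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])  # the occluded view sees the occluder instead
vis = np.array([1.0, 1.0, 1e-6])
c = closed_form_color(colors, vis)   # dominated by the two unoccluded views
```

Because the estimated colors depend only on the density field and the posed images, rendering with them produces a photometric loss that supervises the density field alone, which is the adaptivity the abstract contrasts with fixed smoothness or sparsity priors.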
Related papers
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption [65.96818069005145]
We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects.
In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process.
We present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.
arXiv Detail & Related papers (2023-12-14T16:24:09Z)
- Evaluate Geometry of Radiance Fields with Low-frequency Color Prior [27.741607821885673]
A radiance field is an effective representation of 3D scenes, which has been widely adopted in novel-view synthesis and 3D reconstruction.
It is still an open and challenging problem to evaluate the geometry, i.e., the density field, as the ground-truth is almost impossible to obtain.
We propose a novel metric, named Inverse Mean Residual Color (IMRC), which can evaluate the geometry only with the observation images.
arXiv Detail & Related papers (2023-04-10T02:02:57Z)
- DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models [5.255302402546892]
We learn a prior over scene geometry and color using a denoising diffusion model (DDM).
We show that the gradients of the logarithms of RGBD patch priors serve to regularize the geometry and color of a scene.
Evaluations on LLFF, the most relevant dataset, show that our learned prior achieves improved quality in the reconstructed geometry and improved generalization to novel views.
arXiv Detail & Related papers (2023-02-23T18:52:28Z)
- Behind the Scenes: Density Fields for Single View Reconstruction [63.40484647325238]
Inferring meaningful geometric scene representation from a single image is a fundamental problem in computer vision.
We propose to predict implicit density fields. A density field maps every location in the frustum of the input image to volumetric density.
We show that our method is able to predict meaningful geometry for regions that are occluded in the input image.
arXiv Detail & Related papers (2023-01-18T17:24:01Z)
- Neural Density-Distance Fields [9.742650275132029]
This paper proposes Neural Density-Distance Field (NeDDF), a novel 3D representation that reciprocally constrains the distance and density fields.
We extend the distance field formulation to shapes with no explicit boundary surface, such as fur or smoke, enabling explicit conversion from the distance field to the density field.
Experiments show that NeDDF can achieve high localization performance while providing comparable results to NeRF on novel view synthesis.
arXiv Detail & Related papers (2022-07-29T03:13:25Z)
- Non-line-of-Sight Imaging via Neural Transient Fields [52.91826472034646]
We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.
Inspired by the recent Neural Radiance Field (NeRF) approach, we use a multi-layer perceptron (MLP) to represent the neural transient field or NeTF.
We formulate a spherical volume NeTF reconstruction pipeline, applicable to both confocal and non-confocal setups.
arXiv Detail & Related papers (2021-01-02T05:20:54Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.