Evaluate Geometry of Radiance Fields with Low-frequency Color Prior
- URL: http://arxiv.org/abs/2304.04351v2
- Date: Wed, 17 Jan 2024 07:20:30 GMT
- Title: Evaluate Geometry of Radiance Fields with Low-frequency Color Prior
- Authors: Qihang Fang, Yafei Song, Keqiang Li, Li Shen, Huaiyu Wu, Gang Xiong,
Liefeng Bo
- Abstract summary: A radiance field is an effective representation of 3D scenes, which has been widely adopted in novel-view synthesis and 3D reconstruction.
It is still an open and challenging problem to evaluate the geometry, i.e., the density field, as the ground-truth is almost impossible to obtain.
We propose a novel metric, named Inverse Mean Residual Color (IMRC), which can evaluate the geometry only with the observation images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A radiance field is an effective representation of 3D scenes, which has been
widely adopted in novel-view synthesis and 3D reconstruction. It is still an
open and challenging problem to evaluate the geometry, i.e., the density field,
as the ground-truth is almost impossible to obtain. One alternative indirect
solution is to transform the density field into a point-cloud and compute its
Chamfer Distance with the scanned ground-truth. However, many widely-used
datasets have no point-cloud ground-truth since the scanning process along with
the equipment is expensive and complicated. To this end, we propose a novel
metric, named Inverse Mean Residual Color (IMRC), which can evaluate the
geometry only with the observation images. Our key insight is that the better
the geometry, the lower-frequency the computed color field. From this insight,
given a reconstructed density field and observation images, we design a
closed-form method to approximate the color field with low-frequency spherical
harmonics, and compute the inverse mean residual color. Thus, the higher the
IMRC, the better the geometry. Qualitative and quantitative experimental
results verify the effectiveness of our proposed IMRC metric. We also benchmark
several state-of-the-art methods using IMRC to promote future related research.
Our code is available at https://github.com/qihangGH/IMRC.
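The core procedure described above, fitting low-frequency spherical harmonics to the colors a point exhibits across viewing directions and scoring the unexplained residual, can be sketched as follows. This is a toy, per-point illustration, not the paper's implementation: the degree-1 SH basis, the least-squares fit, and the simple inverse-of-mean-residual score are all simplifying assumptions (the actual IMRC aggregates over the whole density field and its exact definition may differ).

```python
import numpy as np

def sh_basis_deg1(dirs):
    """Real spherical-harmonics basis up to degree 1 for unit directions (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.28209479 * np.ones_like(x),  # Y_0^0  (constant term)
        0.48860251 * y,                # Y_1^-1
        0.48860251 * z,                # Y_1^0
        0.48860251 * x,                # Y_1^1
    ], axis=-1)

def inverse_mean_residual_color(dirs, colors):
    """Fit low-frequency SH to one point's observed colors in closed form
    (least squares) and return the inverse of the mean residual color."""
    B = sh_basis_deg1(dirs)                              # (N, 4) design matrix
    coeffs, *_ = np.linalg.lstsq(B, colors, rcond=None)  # closed-form LS fit
    residual = colors - B @ coeffs                       # high-frequency leftover
    return 1.0 / (np.abs(residual).mean() + 1e-8)
```

The intuition: with a well-reconstructed density field, each surface point is observed under smoothly view-dependent colors, so the low-frequency fit leaves a small residual and the score is high; a poor density field mixes colors from different surface points along each ray, inflating the residual.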
Related papers
- Q-SLAM: Quadric Representations for Monocular SLAM [89.05457684629621]
Monocular SLAM has long grappled with the challenge of accurately modeling 3D geometries.
Recent advances in Neural Radiance Fields (NeRF)-based monocular SLAM have shown promise.
We propose a novel approach that reimagines volumetric representations through the lens of quadric forms.
arXiv Detail & Related papers (2024-03-12T23:27:30Z)
- Reducing Shape-Radiance Ambiguity in Radiance Fields with a Closed-Form Color Estimation Method [24.44659061093503]
We propose a more adaptive method to reduce the shape-radiance ambiguity.
We first estimate the color field based on the density field and posed images in a closed form.
Experimental results show that our method improves the density field of NeRF both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-12-20T02:50:03Z)
- 3D Density-Gradient based Edge Detection on Neural Radiance Fields (NeRFs) for Geometric Reconstruction [0.0]
We show how to generate geometric 3D reconstructions from Neural Radiance Fields (NeRFs) using density gradients and edge detection filters.
Our approach demonstrates the capability to achieve geometric 3D reconstructions with high geometric accuracy on object surfaces and remarkable object completeness.
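The density-gradient idea above can be illustrated on a voxelized density field: surface candidates are the voxels where the density changes sharply. This is a minimal sketch with an assumed regular-grid input and a hand-picked threshold, not the paper's filter pipeline.

```python
import numpy as np

def density_gradient_edges(density_grid, threshold):
    """Mark voxels of a 3D density grid whose gradient magnitude exceeds
    a threshold; such voxels are candidate surface (edge) locations."""
    # Central-difference gradients along each axis of the voxel grid.
    gx, gy, gz = np.gradient(density_grid)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return grad_mag > threshold
```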
arXiv Detail & Related papers (2023-09-26T09:56:27Z)
- $PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction [97.06927852165464]
Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.
We propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process.
arXiv Detail & Related papers (2023-02-21T13:37:07Z)
- Behind the Scenes: Density Fields for Single View Reconstruction [63.40484647325238]
Inferring meaningful geometric scene representation from a single image is a fundamental problem in computer vision.
We propose to predict implicit density fields. A density field maps every location in the frustum of the input image to volumetric density.
We show that our method is able to predict meaningful geometry for regions that are occluded in the input image.
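A density field is turned into images through the standard volume-rendering quadrature shared by NeRF-style methods; a minimal sketch of the per-ray compositing weights (the specific sample counts and spacing are assumptions for illustration):

```python
import numpy as np

def ray_weights(sigmas, deltas):
    """Standard volume-rendering quadrature: per-sample compositing weights
    from densities sigma_i and segment lengths delta_i along one ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)             # per-sample opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    return trans * alphas                                # weights sum to <= 1
```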
arXiv Detail & Related papers (2023-01-18T17:24:01Z)
- Multi-View Reconstruction using Signed Ray Distance Functions (SRDF) [22.75986869918975]
We investigate a new computational approach that builds on a novel shape representation that is volumetric.
The shape energy associated with this representation evaluates 3D geometry given color images and does not require appearance prediction.
In practice we propose an implicit shape representation, the SRDF, based on signed distances which we parameterize by depths along camera rays.
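Under this parameterization, the signed ray distance at a sample is determined by the predicted surface depth along that ray; a minimal, hypothetical sketch (the helper name and the convention "positive in front of the surface, negative behind" are assumptions for illustration):

```python
def srdf_along_ray(t, depth):
    """Signed ray distance at ray parameter t, given the surface depth
    along the same ray: positive before the surface, negative after."""
    return depth - t
```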
arXiv Detail & Related papers (2022-08-31T19:32:17Z)
- Neural Density-Distance Fields [9.742650275132029]
This paper proposes Neural Density-Distance Field (NeDDF), a novel 3D representation that reciprocally constrains the distance and density fields.
We extend the distance field formulation to shapes with no explicit boundary surface, such as fur or smoke, which enables explicit conversion from the distance field to the density field.
Experiments show that NeDDF can achieve high localization performance while providing comparable results to NeRF on novel view synthesis.
arXiv Detail & Related papers (2022-07-29T03:13:25Z)
- VoGE: A Differentiable Volume Renderer using Gaussian Ellipsoids for Analysis-by-Synthesis [62.47221232706105]
We propose VoGE, which utilizes the Gaussian reconstruction kernels as volumetric primitives.
To efficiently render via VoGE, we propose an approximate closed-form solution for the volume density aggregation and a coarse-to-fine rendering strategy.
VoGE outperforms SoTA when applied to various vision tasks, e.g., object pose estimation, shape/texture fitting, and reasoning.
arXiv Detail & Related papers (2022-05-30T19:52:11Z)
- Neural RGB-D Surface Reconstruction [15.438678277705424]
Methods which learn a neural radiance field have shown amazing image synthesis results, but the underlying geometry representation is only a coarse approximation of the real geometry.
We demonstrate how depth measurements can be incorporated into the radiance field formulation to produce more detailed and complete reconstruction results.
arXiv Detail & Related papers (2021-04-09T18:00:01Z)
- Refer-it-in-RGBD: A Bottom-up Approach for 3D Visual Grounding in RGBD Images [69.5662419067878]
Grounding referring expressions in RGBD images is an emerging field.
We present a novel task of 3D visual grounding in single-view RGBD image where the referred objects are often only partially scanned due to occlusion.
Our approach first fuses the language and the visual features at the bottom level to generate a heatmap that localizes the relevant regions in the RGBD image.
Then our approach conducts an adaptive feature learning based on the heatmap and performs the object-level matching with another visio-linguistic fusion to finally ground the referred object.
arXiv Detail & Related papers (2021-03-14T11:18:50Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.