NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction
- URL: http://arxiv.org/abs/2207.10985v1
- Date: Fri, 22 Jul 2022 10:05:36 GMT
- Title: NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction
- Authors: Yunlong Ran, Jing Zeng, Shibo He, Lincheng Li, Yingfeng Chen, Gim Hee Lee, Jiming Chen, Qi Ye
- Abstract summary: Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems.
This paper addresses two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one.
Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.
- Score: 64.36535692191343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representations have shown compelling results in offline 3D
reconstruction and also recently demonstrated the potential for online SLAM
systems. However, applying them to autonomous 3D reconstruction, where robots
are required to explore a scene and plan a view path for the reconstruction,
has not been studied. In this paper, we explore for the first time the
possibility of using implicit neural representations for autonomous 3D scene
reconstruction by addressing two key challenges: 1) seeking a criterion to
measure the quality of the candidate viewpoints for the view planning based on
the new representations, and 2) learning the criterion from data that can
generalize to different scenes instead of hand-crafting one. For the first
challenge, a proxy of Peak Signal-to-Noise Ratio (PSNR) is proposed to quantify
the quality of a viewpoint. The proxy is acquired by treating the color of a spatial
point in a scene as a random variable under a Gaussian distribution rather than
a deterministic one; the variance of the distribution quantifies the
uncertainty of the reconstruction and composes the proxy. For the second
challenge, the proxy is optimized jointly with the parameters of an implicit
neural network for the scene. With the proposed view quality criterion, we can
then apply the new representations to autonomous 3D reconstruction. Our method
demonstrates significant improvements on various metrics for the rendered image
quality and the geometry quality of the reconstructed 3D models when compared
with variants using TSDF or reconstruction without view planning.
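To make the two ideas concrete, here is a minimal, hedged sketch (not the authors' implementation; the network sizes, the log-variance clamp, and the compositing details are illustrative assumptions): the scene network predicts a Gaussian color per point, a negative log-likelihood trains mean and variance jointly, and the composited variance yields a PSNR-like proxy for viewpoint quality.

```python
import torch
import torch.nn as nn

class UncertainRadianceField(nn.Module):
    """Implicit field whose color output is a Gaussian (mean + variance)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)
        self.color_mean = nn.Linear(hidden, 3)    # mean of the color Gaussian
        self.color_logvar = nn.Linear(hidden, 3)  # log-variance = uncertainty

    def forward(self, xyz):
        h = self.net(xyz)
        sigma = torch.relu(self.density(h))
        mu = torch.sigmoid(self.color_mean(h))
        logvar = self.color_logvar(h).clamp(-10.0, 4.0)  # keep variance sane
        return sigma, mu, logvar

def gaussian_nll_loss(mu, logvar, target_rgb):
    """Negative log-likelihood of observed pixels under the predicted
    Gaussian; minimizing it optimizes mean and variance jointly with the
    scene network, as the abstract describes (constants dropped)."""
    inv_var = torch.exp(-logvar)
    return (0.5 * inv_var * (target_rgb - mu) ** 2 + 0.5 * logvar).mean()

def viewpoint_quality_proxy(logvar_samples, weights):
    """logvar_samples: (rays, samples, 3); weights: (rays, samples) NeRF
    compositing weights. Alpha-composites the per-point variance along each
    ray and converts it to a PSNR-like score; a planner can then favor
    views with a low proxy, i.e., high remaining uncertainty."""
    var = torch.exp(logvar_samples)
    ray_var = (weights.unsqueeze(-1) * var).sum(dim=1)       # (rays, 3)
    return -10.0 * torch.log10(ray_var.mean(dim=-1) + 1e-8)  # (rays,)
```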
Related papers
- Frequency-based View Selection in Gaussian Splatting Reconstruction [9.603843571051744]
We investigate the problem of active view selection to perform 3D Gaussian Splatting reconstructions with as few input images as possible.
By ranking the potential views in the frequency domain, we are able to effectively estimate the potential information gain of new viewpoints.
Our method achieves state-of-the-art results in view selection, demonstrating its potential for efficient image-based 3D reconstruction.
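As a loose illustration of frequency-based scoring (not the paper's exact formulation; the disk cutoff and the energy ratio are assumptions), one can render each candidate pose with the current model and treat the share of spectral energy in high frequencies as a crude stand-in for the detail the view could still contribute:

```python
import torch

def high_frequency_score(image: torch.Tensor, cutoff: float = 0.25) -> float:
    """image: (H, W) grayscale render of a candidate viewpoint. Returns the
    fraction of FFT energy outside a low-frequency disk; candidate views
    are ranked by this score."""
    energy = torch.fft.fftshift(torch.fft.fft2(image)).abs() ** 2
    h, w = image.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    radius = torch.hypot(yy - h / 2.0, xx - w / 2.0)
    low = radius <= cutoff * min(h, w) / 2.0
    return float(energy[~low].sum() / (energy.sum() + 1e-12))
```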
arXiv Detail & Related papers (2024-09-24T21:44:26Z)
- Improving Neural Indoor Surface Reconstruction with Mask-Guided Adaptive Consistency Constraints [0.6749750044497732]
We propose a two-stage training process, decouple view-dependent and view-independent colors, and leverage two novel consistency constraints to enhance detail reconstruction performance without requiring extra priors.
Experiments on synthetic and real-world datasets show that the method reduces interference from prior estimation errors.
arXiv Detail & Related papers (2023-09-18T13:05:23Z)
- Uncertainty Guided Policy for Active Robotic 3D Reconstruction using Neural Radiance Fields [82.21033337949757]
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
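A minimal sketch of such a ray-based entropy estimator, assuming standard NeRF compositing weights (variable names are illustrative):

```python
import torch

def ray_weight_entropy(sigmas, deltas, eps=1e-10):
    """sigmas, deltas: (rays, samples) densities and segment lengths along
    each ray. Normalizes the compositing weights into a distribution and
    returns its Shannon entropy per ray; diffuse weight mass (ambiguous
    geometry) yields high entropy, i.e., high uncertainty."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + eps], dim=1),
        dim=1,
    )[:, :-1]
    weights = alphas * trans                              # NeRF weights
    probs = weights / (weights.sum(dim=1, keepdim=True) + eps)
    return -(probs * torch.log(probs + eps)).sum(dim=1)   # (rays,)
```

A next-best-view policy can then average this entropy over the rays of each candidate view and pick the view with the highest mean uncertainty.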
arXiv Detail & Related papers (2022-09-17T21:28:57Z)
- 2D GANs Meet Unsupervised Single-view 3D Reconstruction [21.93671761497348]
Controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks.
We propose a novel image-conditioned neural implicit field, which can leverage 2D supervisions from GAN-generated multi-view images.
The effectiveness of our approach is demonstrated through superior single-view 3D reconstruction results of generic objects.
arXiv Detail & Related papers (2022-07-20T20:24:07Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops for larger and more complex scenes.
This drop is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
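A common way to wire such a cue into the reconstruction loss, sketched under the assumption of a scale-and-shift alignment (monocular depth is only defined up to scale and shift, so the prediction must be aligned before it can constrain the surface):

```python
import torch

def aligned_depth_loss(rendered_depth, mono_depth):
    """rendered_depth, mono_depth: (N,) per-ray depths. Solves
    w * mono + q ~= rendered in closed form, then penalizes the residual,
    so the scale-ambiguous monocular prediction still constrains the
    implicit surface."""
    A = torch.stack([mono_depth, torch.ones_like(mono_depth)], dim=1)  # (N, 2)
    sol = torch.linalg.lstsq(A, rendered_depth.unsqueeze(1)).solution  # (2, 1)
    w, q = sol[0, 0], sol[1, 0]
    return ((w * mono_depth + q - rendered_depth) ** 2).mean()
```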
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Multi-initialization Optimization Network for Accurate 3D Human Pose and Shape Estimation [75.44912541912252]
We propose a three-stage framework named the Multi-Initialization Optimization Network (MION).
In the first stage, we strategically select different coarse 3D reconstruction candidates that are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
arXiv Detail & Related papers (2021-12-24T02:43:58Z)
- Next-best-view Regression using a 3D Convolutional Neural Network [0.9449650062296823]
We propose a data-driven approach to address the next-best-view problem.
The proposed approach trains a 3D convolutional neural network on previous reconstructions in order to regress the position of the next-best-view.
We validate the proposed approach with two groups of experiments.
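A hedged sketch of the regression setup (the occupancy-grid input, grid size, and network shape are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class NBVRegressor(nn.Module):
    """Maps a partial reconstruction (occupancy grid) to the predicted
    position of the next-best-view."""
    def __init__(self, pose_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
            nn.Flatten(),
        )
        self.head = nn.Linear(64 * 4 * 4 * 4, pose_dim)

    def forward(self, occupancy_grid):
        # occupancy_grid: (batch, 1, 32, 32, 32) partial reconstruction
        return self.head(self.encoder(occupancy_grid))

model = NBVRegressor()
nbv = model(torch.zeros(1, 1, 32, 32, 32))  # -> (1, 3) predicted position
```

Training pairs (partial reconstruction, best next view) mined from previous reconstructions turn exploration into supervised regression.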
arXiv Detail & Related papers (2021-01-23T01:50:26Z)
- Iterative Optimisation with an Innovation CNN for Pose Refinement [17.752556490937092]
In this work we propose an approach to object pose estimation refinement, namely an Innovation CNN.
Our approach progressively improves the initial pose estimate by applying the Innovation CNN iteratively in a gradient descent framework.
We evaluate our method on the popular LINEMOD and Occlusion LINEMOD datasets and obtain state-of-the-art performance on both datasets.
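A loose sketch of the iterative scheme (the backbone and the render_fn hook are hypothetical placeholders, not the paper's network):

```python
import torch
import torch.nn as nn

class InnovationCNN(nn.Module):
    """Predicts a pose update from the observation and a render at the
    current pose estimate."""
    def __init__(self, pose_dim=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, pose_dim)

    def forward(self, observed, rendered):
        x = torch.cat([observed, rendered], dim=1)  # stack the two images
        return self.head(self.backbone(x))          # predicted innovation

def refine_pose(pose, observed, render_fn, net, steps=10, lr=0.1):
    """Applies the predicted innovation iteratively, like a gradient step."""
    for _ in range(steps):
        pose = pose - lr * net(observed, render_fn(pose))
    return pose
```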
arXiv Detail & Related papers (2021-01-22T00:12:12Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.