NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors
- URL: http://arxiv.org/abs/2206.13597v1
- Date: Mon, 27 Jun 2022 19:22:03 GMT
- Title: NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors
- Authors: Jiepeng Wang, Peng Wang, Xiaoxiao Long, Christian Theobalt, Taku
Komura, Lingjie Liu, Wenping Wang
- Abstract summary: We propose a new method, named NeuRIS, for high-quality reconstruction of indoor scenes.
NeuRIS integrates estimated normals of indoor scenes as a prior in a neural rendering framework.
Experiments show that NeuRIS significantly outperforms the state-of-the-art methods in terms of reconstruction quality.
- Score: 84.66706400428303
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reconstructing 3D indoor scenes from 2D images is an important task in many
computer vision and graphics applications. A main challenge in this task is
that large texture-less areas in typical indoor scenes make existing methods
struggle to produce satisfactory reconstruction results. We propose a new
method, named NeuRIS, for high-quality reconstruction of indoor scenes. The key
idea of NeuRIS is to integrate estimated normals of indoor scenes as a prior in
a neural rendering framework for reconstructing large texture-less shapes and,
importantly, to do this in an adaptive manner to also enable the reconstruction
of irregular shapes with fine details. Specifically, we evaluate the
faithfulness of the normal priors on-the-fly by checking the multi-view
consistency of reconstruction during the optimization process. Only the normal
priors accepted as faithful are used for 3D reconstruction; this typically
happens in regions of smooth shapes, possibly with weak texture. For regions
with small objects or thin structures, where the normal priors are usually
unreliable, we instead rely only on the visual features of the input images,
since such regions typically contain relatively rich visual features (e.g.,
shade changes and boundary contours). Extensive experiments
show that NeuRIS significantly outperforms the state-of-the-art methods in
terms of reconstruction quality.
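As a rough illustration of the adaptive scheme described above, the sketch below (PyTorch-style, not the authors' implementation) applies the normal prior only where a multi-view consistency check accepts it; the inputs rendered_normals, prior_normals, consistency_score, and the threshold tau are assumptions, not quantities defined in the text here.

```python
# Hedged sketch: gate a monocular normal-prior loss by a per-ray multi-view
# consistency score, in the spirit of the NeuRIS abstract. All inputs are
# assumed to be produced elsewhere in the training loop.
import torch
import torch.nn.functional as F

def adaptive_normal_loss(rendered_normals, prior_normals, consistency_score, tau=0.5):
    """rendered_normals, prior_normals: (N, 3) unit normals per sampled ray.
    consistency_score: (N,) multi-view photo-consistency of the current
    reconstruction at each ray (higher means more consistent)."""
    # Accept a normal prior only where the reconstruction it supports is
    # multi-view consistent; elsewhere the color loss alone drives training.
    faithful = (consistency_score > tau).float()                    # (N,) 0/1 mask
    cos = F.cosine_similarity(rendered_normals, prior_normals, dim=-1)
    per_ray = 1.0 - cos                                             # angular-style error
    return (faithful * per_ray).sum() / faithful.sum().clamp(min=1.0)

# Usage sketch: total_loss = color_loss + lambda_n * adaptive_normal_loss(...)
```

In the full method the faithfulness evaluation happens on-the-fly during optimization; here it is abstracted into a single per-ray score so the gating logic stays visible.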
Related papers
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the limitations of these prior-based approaches.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- Exploiting Multiple Priors for Neural 3D Indoor Reconstruction [15.282699095607594]
We propose a novel neural implicit modeling method that leverages multiple regularization strategies to achieve better reconstructions of large indoor environments.
Experimental results show that our approach produces state-of-the-art 3D reconstructions in challenging indoor scenarios.
arXiv Detail & Related papers (2023-09-13T15:23:43Z)
- PlaNeRF: SVD Unsupervised 3D Plane Regularization for NeRF Large-Scale Scene Reconstruction [2.2369578015657954]
Neural Radiance Fields (NeRF) enable 3D scene reconstruction from 2D images and camera poses for Novel View Synthesis (NVS).
NeRF often suffers from overfitting to training views, leading to poor geometry reconstruction.
We propose a new method to improve NeRF's 3D structure using only RGB images and semantic maps.
arXiv Detail & Related papers (2023-05-26T13:26:46Z)
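The PlaNeRF title above points to an SVD-based plane regularizer; as a hedged illustration of that general idea (not necessarily the paper's exact loss), the smallest singular value of a centered set of 3D points from a region a semantic map marks as planar can serve as a planarity penalty:

```python
# Hedged illustration of SVD-based planarity regularization in general; the
# exact PlaNeRF formulation may differ. For points assumed to lie on a single
# plane, the smallest singular value of the centered point matrix measures
# their out-of-plane spread.
import torch

def svd_plane_loss(points):
    """points: (N, 3) points sampled from a region a semantic map marks as
    planar (e.g., 'wall' or 'floor')."""
    centered = points - points.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(centered)      # singular values, descending order
    return s[-1] / (s.sum() + 1e-8)         # normalized, scale-free penalty
```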
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops significantly for larger, more complex scenes and for sparsely captured views.
This drop is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
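As a hedged sketch of how such a monocular cue can constrain reconstruction (a common recipe; MonoSDF's exact loss may differ), a monocular depth prediction, which is defined only up to scale and shift, can be aligned to the rendered depth by least squares before penalizing the residual:

```python
# Hedged sketch: supervise rendered depth with a monocular depth cue by first
# solving for a per-image scale w and shift q in closed form. This is a common
# recipe and is not claimed to be MonoSDF's exact formulation.
import torch

def scale_shift_depth_loss(rendered_depth, mono_depth):
    """rendered_depth, mono_depth: (N,) depths for the sampled rays of one image."""
    # Solve min_{w, q} || w * mono_depth + q - rendered_depth ||^2.
    A = torch.stack([mono_depth, torch.ones_like(mono_depth)], dim=-1)   # (N, 2)
    sol = torch.linalg.lstsq(A, rendered_depth.unsqueeze(-1)).solution   # (2, 1)
    w, q = sol[0, 0], sol[1, 0]
    aligned = w * mono_depth + q
    return ((aligned - rendered_depth) ** 2).mean()
```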
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.