Neural 3D Scene Reconstruction from Multiple 2D Images without 3D
Supervision
- URL: http://arxiv.org/abs/2306.17643v3
- Date: Tue, 4 Jul 2023 03:54:44 GMT
- Title: Neural 3D Scene Reconstruction from Multiple 2D Images without 3D
Supervision
- Authors: Yi Guo, Che Sun, Yunde Jia, and Yuwei Wu
- Abstract summary: We propose a novel neural reconstruction method that reconstructs scenes using sparse depth under the plane constraints without 3D supervision.
We introduce a signed distance function field, a color field, and a probability field to represent a scene.
We optimize these fields to reconstruct the scene by using differentiable ray marching with accessible 2D images as supervision.
- Score: 41.20504333318276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural 3D scene reconstruction methods have achieved impressive performance
when reconstructing complex geometry and low-textured regions in indoor scenes.
However, these methods heavily rely on 3D data, which is costly and
time-consuming to obtain in the real world. In this paper, we propose a novel
neural reconstruction method that reconstructs scenes using sparse depth under
the plane constraints without 3D supervision. We introduce a signed distance
function field, a color field, and a probability field to represent a scene. We
optimize these fields to reconstruct the scene by using differentiable ray
marching with accessible 2D images as supervision. We improve the
reconstruction quality in scene regions with complex geometry by using sparse
depth obtained from the geometric constraints. The geometric constraints
project 3D points on the surface to similar-looking regions with similar
features in different 2D images. We impose the plane constraints to make large
planes parallel or perpendicular to the indoor floor. Both constraints help
reconstruct accurate and smooth geometry structures of the scene. Without 3D
supervision,
our method achieves competitive performance compared with existing methods that
use 3D supervision on the ScanNet dataset.
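The pipeline the abstract describes (an SDF field and a color field composited by differentiable ray marching, plus a plane constraint that keeps large planes parallel or perpendicular to the floor) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: `sdf_to_density` uses a VolSDF-style Laplace mapping as a stand-in for whatever SDF-to-density conversion the paper actually uses, `plane_constraint_loss` is a generic parallel-or-perpendicular penalty, and `beta`, the sample counts, and the toy plane scene are all made up for the example.

```python
import numpy as np

def sdf_to_density(sdf, beta=0.05):
    """Map signed distance to volume density (Laplace-CDF heuristic, VolSDF-style)."""
    return np.where(
        sdf > 0,
        0.5 * np.exp(-sdf / beta),
        1.0 - 0.5 * np.exp(sdf / beta),
    ) / beta

def render_ray(origin, direction, sdf_fn, color_fn, n_samples=128, near=0.1, far=4.0):
    """Composite colors along one ray with standard volume-rendering quadrature."""
    t = np.linspace(near, far, n_samples)                      # sample depths
    pts = origin[None, :] + t[:, None] * direction[None, :]    # (n, 3) points
    sigma = sdf_to_density(sdf_fn(pts))                        # per-sample density
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))         # segment lengths
    alpha = 1.0 - np.exp(-sigma * delta)                       # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * color_fn(pts)).sum(axis=0)      # composited RGB

def plane_constraint_loss(normals, up=np.array([0.0, 0.0, 1.0])):
    """Penalize plane normals that are neither parallel nor perpendicular to the floor."""
    cos = np.abs(normals @ up)
    return np.minimum(1.0 - cos, cos).mean()

# Toy scene: a red, floor-parallel plane at z = 1, viewed along +z.
plane_sdf = lambda p: 1.0 - p[:, 2]                        # positive below the plane
red = lambda p: np.tile([1.0, 0.0, 0.0], (len(p), 1))
color = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), plane_sdf, red)
```

In a real system the fields would be neural networks optimized by gradient descent through this rendering step against the 2D image colors; the plane loss would be evaluated on normals estimated from the SDF gradient over detected planar regions.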
Related papers
- Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting [75.7154104065613]
We introduce a novel depth completion model, trained via teacher distillation and self-training to learn the 3D fusion process.
We also introduce a new benchmarking scheme for scene generation methods that is based on ground truth geometry.
arXiv Detail & Related papers (2024-04-30T17:59:40Z)
- Behind the Veil: Enhanced Indoor 3D Scene Reconstruction with Occluded Surfaces Completion [15.444301186927142]
We present a novel indoor 3D reconstruction method with occluded surface completion, given a sequence of depth readings.
Our method tackles the task of completing the occluded scene surfaces, resulting in a complete 3D scene mesh.
We evaluate the proposed method on the 3D Completed Room Scene (3D-CRS) and iTHOR datasets.
arXiv Detail & Related papers (2024-04-03T21:18:27Z)
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- Learning 3D Scene Priors with 2D Supervision [37.79852635415233]
We propose a new method to learn 3D scene priors of layout and shape without requiring any 3D ground truth.
Our method represents a 3D scene as a latent vector, from which we can progressively decode to a sequence of objects characterized by their class categories.
Experiments on 3D-FRONT and ScanNet show that our method outperforms state of the art in single-view reconstruction.
arXiv Detail & Related papers (2022-11-25T15:03:32Z)
- SimpleRecon: 3D Reconstruction Without 3D Convolutions [21.952478592241]
We show how focusing on high quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion.
Our method achieves a significant lead over the current state-of-the-art for depth estimation, and close or better results for 3D reconstruction on ScanNet and 7-Scenes.
arXiv Detail & Related papers (2022-08-31T09:46:34Z)
- DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images [15.712721653893636]
DM-NeRF is among the first to simultaneously reconstruct, decompose, manipulate and render complex 3D scenes in a single pipeline.
Our method can accurately decompose all 3D objects from 2D views, allowing any interested object to be freely manipulated in 3D space.
arXiv Detail & Related papers (2022-08-15T14:32:10Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into the recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- 3D-Aware Indoor Scene Synthesis with Depth Priors [62.82867334012399]
Existing methods fail to model indoor scenes due to the large diversity of room layouts and the objects inside.
We argue that indoor scenes do not have a shared intrinsic structure, and hence only using 2D images cannot adequately guide the model with the 3D geometry.
arXiv Detail & Related papers (2022-02-17T09:54:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.