CompNVS: Novel View Synthesis with Scene Completion
- URL: http://arxiv.org/abs/2207.11467v1
- Date: Sat, 23 Jul 2022 09:03:13 GMT
- Authors: Zuoyue Li, Tianxing Fan, Zhenqiang Li, Zhaopeng Cui, Yoichi Sato, Marc Pollefeys, Martin R. Oswald
- Abstract summary: We propose a generative pipeline that operates on a sparse grid-based neural scene representation to complete unobserved scene parts.
We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing areas.
Photorealistic image sequences are finally obtained via consistency-aware differentiable rendering.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a scalable framework for novel view synthesis from RGB-D images with largely incomplete scene coverage. While generative neural approaches have demonstrated spectacular results on 2D images, they have not yet achieved similarly photorealistic results in combination with scene completion, where a spatial 3D scene understanding is essential. To this end, we propose a generative pipeline that operates on a sparse grid-based neural scene representation to complete unobserved scene parts via a learned distribution of scenes in a 2.5D-3D-2.5D manner. We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing areas. Photorealistic image sequences are finally obtained via consistency-aware differentiable rendering. Comprehensive experiments show that the renderings produced by our method outperform the state of the art, especially within unobserved scene parts.
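The 2.5D-3D-2.5D pipeline described in the abstract (encode RGB-D features into a 3D grid, complete geometry, inpaint texture, render back to 2D) can be sketched structurally in code. The following is a minimal PyTorch sketch, not the authors' implementation: the dense feature grid, the toy 3D CNNs standing in for the geometry completion and texture inpainting networks, and the average-pooling stand-in for the differentiable renderer are all illustrative assumptions.

```python
# Minimal sketch of the 2.5D-3D-2.5D pipeline structure. All module names,
# tensor shapes, and the dense (rather than sparse) grid are assumptions
# made for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn


class GeometryCompletionNet(nn.Module):
    """Hypothetical 3D CNN that fills in features for unobserved voxels."""
    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, feat_dim, 3, padding=1),
        )

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        return self.net(grid)


class TextureInpaintingNet(nn.Module):
    """Hypothetical 3D CNN that extrapolates appearance features."""
    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, feat_dim, 3, padding=1),
        )

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        return self.net(grid)


def complete_and_render(feature_grid, geom_net, tex_net, render_fn):
    """2.5D -> 3D -> 2.5D: complete geometry, inpaint texture, render."""
    completed = geom_net(feature_grid)   # fill in unobserved geometry
    textured = tex_net(completed)        # inpaint appearance features
    return render_fn(textured)           # differentiable rendering to 2D


if __name__ == "__main__":
    grid = torch.randn(1, 16, 32, 32, 32)  # encoded RGB-D features in 3D
    # Placeholder renderer: average over the depth axis; the paper instead
    # uses a consistency-aware differentiable renderer.
    render_fn = lambda g: g.mean(dim=2)
    out = complete_and_render(grid, GeometryCompletionNet(),
                              TextureInpaintingNet(), render_fn)
    print(out.shape)  # torch.Size([1, 16, 32, 32]) -- a 2.5D feature map
```

Because every stage is differentiable, reconstruction losses on rendered views can propagate back through the renderer into both completion networks, which is what makes end-to-end training of such a pipeline possible.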
Related papers
- Behind the Veil: Enhanced Indoor 3D Scene Reconstruction with Occluded Surfaces Completion (2024-04-03)
We present a novel indoor 3D reconstruction method with occluded surface completion, given a sequence of depth readings.
Our method tackles the task of completing the occluded scene surfaces, resulting in a complete 3D scene mesh.
We evaluate the proposed method on the 3D Completed Room Scene (3D-CRS) and iTHOR datasets.
- Denoising Diffusion via Image-Based Rendering (2024-02-05)
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
- Blocks2World: Controlling Realistic Scenes with Editable Primitives (2023-07-07)
We present Blocks2World, a novel method for 3D scene rendering and editing.
Our technique begins by extracting 3D parallelepipeds from various objects in a given scene using convex decomposition.
The next stage involves training a conditioned model that learns to generate images from the 2D-rendered convex primitives.
- SSR-2D: Semantic 3D Scene Reconstruction from 2D Images (2023-02-07)
In this work, we explore a central 3D scene modeling task, namely, semantic scene reconstruction without using any 3D annotations.
The key idea of our approach is to design a trainable model that employs both incomplete 3D reconstructions and their corresponding source RGB-D images.
Our method achieves state-of-the-art semantic scene completion performance on two large-scale benchmark datasets, Matterport3D and ScanNet.
- SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections (2023-02-02)
We present SceneDreamer, an unconditional generative model for unbounded 3D scenes.
Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations.
- 3inGAN: Learning a 3D Generative Model from Images of a Self-similar Scene (2022-11-27)
3inGAN is an unconditional 3D generative model trained from 2D images of a single self-similar 3D scene.
We show results on semi-stochastic scenes of varying scale and complexity, obtained from real and synthetic sources.
- Realistic Image Synthesis with Configurable 3D Scene Layouts (2021-08-23)
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
- SceneGraphFusion: Incremental 3D Scene Graph Prediction from RGB-D Sequences (2021-03-27)
We propose a method to incrementally build up semantic scene graphs from a 3D environment given a sequence of RGB-D frames.
We aggregate PointNet features from primitive scene components by means of a graph neural network.
Our approach outperforms 3D scene graph prediction methods by a large margin and its accuracy is on par with other 3D semantic and panoptic segmentation methods while running at 35 Hz.