Enhancement of Novel View Synthesis Using Omnidirectional Image
Completion
- URL: http://arxiv.org/abs/2203.09957v4
- Date: Thu, 7 Sep 2023 09:21:42 GMT
- Title: Enhancement of Novel View Synthesis Using Omnidirectional Image
Completion
- Authors: Takayuki Hara and Tatsuya Harada
- Abstract summary: We present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF).
Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
- Score: 61.78187618370681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we present a method for synthesizing novel views from a single
360-degree RGB-D image based on the neural radiance field (NeRF). Prior
studies relied on the neighborhood interpolation capability of multi-layer
perceptrons to complete missing regions caused by occlusion and zooming, which
leads to artifacts. In the method proposed in this study, the input image is
reprojected to 360-degree RGB images at other camera positions, the missing
regions of the reprojected images are completed by a 2D image generative model,
and the completed images are utilized to train the NeRF. Because multiple
completed images contain inconsistencies in 3D, we introduce a method to learn
the NeRF model using a subset of completed images that cover the target scene
with less overlap of completed regions. The selection of such a subset of
images can be formulated as a maximum weight independent set problem, which
is solved through simulated annealing. Experiments demonstrated that the
proposed method can synthesize plausible novel views while preserving the
features of the scene for both artificial and real-world data.
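Of the steps above, the subset selection is the most self-contained algorithmic piece: each completed image receives a weight (for example, how much of the scene it covers), pairs of images whose completed regions overlap too much are marked as conflicting, and a maximum weight independent set of the resulting conflict graph is sought via simulated annealing. The sketch below is a minimal, generic MWIS-by-annealing routine under those assumptions, not the authors' implementation; the weights, conflict pairs, and parameter values are illustrative only.

```python
import math
import random

def mwis_simulated_annealing(weights, conflicts, n_steps=20000, t0=1.0, t_min=1e-3):
    """Approximate a maximum-weight independent set with simulated annealing.

    weights   : list of per-image scores (e.g., scene coverage) -- hypothetical scoring
    conflicts : iterable of (i, j) index pairs whose completed regions overlap too much
    Returns the best independent set (set of image indices) found.
    """
    n = len(weights)
    adj = [set() for _ in range(n)]
    for i, j in conflicts:
        adj[i].add(j)
        adj[j].add(i)

    selected = set()                  # current independent set
    score = 0.0
    best, best_score = set(), 0.0

    for step in range(n_steps):
        # geometric cooling from t0 down to t_min
        t = max(t_min, t0 * (t_min / t0) ** (step / n_steps))
        v = random.randrange(n)

        if v in selected:
            # propose removing v (independence is trivially preserved)
            delta = -weights[v]
        else:
            # propose adding v and evicting its currently selected neighbours
            evicted = adj[v] & selected
            delta = weights[v] - sum(weights[u] for u in evicted)

        # Metropolis acceptance: always take improvements, sometimes take losses
        if delta >= 0 or random.random() < math.exp(delta / t):
            if v in selected:
                selected.remove(v)
            else:
                selected -= adj[v] & selected
                selected.add(v)
            score += delta

        if score > best_score:
            best, best_score = set(selected), score

    return best
```

Because a move that adds an image also evicts its conflicting neighbours, every visited state remains a valid independent set, so no separate repair step is needed.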
Related papers
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- Improving Neural Radiance Field using Near-Surface Sampling with Point Cloud Generation [6.506009070668646]
This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF.
To obtain depth information on a novel view, the paper proposes a 3D point cloud generation method and a simple refining method for projected depth from a point cloud.
arXiv Detail & Related papers (2023-10-06T10:55:34Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field [55.431697263581626]
We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation.
CNeRF divides the image by semantic regions and learns an independent neural radiance field for each region, and finally fuses them and renders the complete image.
Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation, while maintaining high-quality 3D-consistent synthesis.
arXiv Detail & Related papers (2023-02-03T07:17:46Z)
- 360FusionNeRF: Panoramic Neural Radiance Fields with Joint Guidance [6.528382036284374]
We present a method to synthesize novel views from a single $360^\circ$ panorama image based on the neural radiance field (NeRF).
We propose 360FusionNeRF, a semi-supervised learning framework where we introduce geometric supervision and semantic consistency to guide the training process.
arXiv Detail & Related papers (2022-09-28T17:30:53Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of these three extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)