Deep Multi Depth Panoramas for View Synthesis
- URL: http://arxiv.org/abs/2008.01815v1
- Date: Tue, 4 Aug 2020 20:29:15 GMT
- Title: Deep Multi Depth Panoramas for View Synthesis
- Authors: Kai-En Lin, Zexiang Xu, Ben Mildenhall, Pratul P. Srinivasan, Yannick
Hold-Geoffroy, Stephen DiVerdi, Qi Sun, Kalyan Sunkavalli, and Ravi
Ramamoorthi
- Abstract summary: We present a novel scene representation - Multi Depth Panorama (MDP) - that consists of multiple RGBD$\alpha$ panoramas.
MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering.
- Score: 70.9125433400375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a learning-based approach for novel view synthesis for
multi-camera 360$^{\circ}$ panorama capture rigs. Previous work constructs RGBD
panoramas from such data, allowing for view synthesis with small amounts of
translation, but cannot handle the disocclusions and view-dependent effects
that are caused by large translations. To address this issue, we present a
novel scene representation - Multi Depth Panorama (MDP) - that consists of
multiple RGBD$\alpha$ panoramas that represent both scene geometry and
appearance. We demonstrate a deep neural network-based method to reconstruct
MDPs from multi-camera 360$^{\circ}$ images. MDPs are more compact than
previous 3D scene representations and enable high-quality, efficient new view
rendering. We demonstrate this via experiments on both synthetic and real data
and comparisons with previous state-of-the-art methods spanning both
learning-based approaches and classical RGBD-based methods.
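The abstract describes the MDP as a stack of RGBD$\alpha$ panoramas that is rendered into new views. As a minimal sketch of how such a layered representation is typically rendered, the Python snippet below over-composites the layers back-to-front; it assumes each layer has already been warped into the target view using its depth channel, and the names, shapes, and layer count are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front "over" compositing of RGBD-alpha panorama layers.

    layers: list of (rgb, alpha) pairs sorted far-to-near, with rgb of
            shape (H, W, 3) and alpha of shape (H, W, 1), floats in [0, 1].
            Each layer is assumed to have been reprojected into the target
            view already (that step uses the per-layer depth channel).
    """
    out = np.zeros_like(layers[0][0])
    for rgb, alpha in layers:                 # far-to-near
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Illustrative usage: four 512x1024 panorama layers with random content.
H, W = 512, 1024
layers = [(np.random.rand(H, W, 3), np.random.rand(H, W, 1)) for _ in range(4)]
novel_view = composite_layers(layers)         # (H, W, 3) rendered panorama
```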
Related papers
- Pano2Room: Novel View Synthesis from a Single Indoor Panorama [20.262621556667852]
Pano2Room is designed to automatically reconstruct high-quality 3D indoor scenes from a single panoramic image.
The key idea is to construct a preliminary mesh from the input panorama and iteratively refine it using a panoramic RGBD inpainter.
The refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views.
arXiv Detail & Related papers (2024-08-21T08:19:12Z)
- MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation [54.27399121779011]
We present MVD-Fusion: a method for single-view 3D inference via generative modeling of multi-view-consistent RGB-D images.
We show that our approach can yield more accurate synthesis compared to recent state-of-the-art, including distillation-based 3D inference and prior multi-view generation methods.
arXiv Detail & Related papers (2024-04-04T17:59:57Z)
- Novel View Synthesis from a Single RGBD Image for Indoor Scenes [4.292698270662031]
We propose an approach for synthesizing novel view images from a single RGBD (Red Green Blue-Depth) input.
In our method, we convert an RGBD image into a point cloud and render it from a different viewpoint, then cast the NVS task as an image translation problem (the reprojection step is illustrated in the point-cloud sketch after this list).
arXiv Detail & Related papers (2023-11-02T08:34:07Z)
- PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift a 360-degree 2D panorama to a 3D scene.
Our PERF can be widely used in real-world applications such as panorama-to-3D, text-to-3D, and 3D scene stylization.
arXiv Detail & Related papers (2023-10-25T17:59:01Z)
- Single-View View Synthesis in the Wild with Learned Adaptive Multiplane Images [15.614631883233898]
Existing methods have shown promising results leveraging monocular depth estimation and color inpainting with layered depth representations.
We propose a new method based on the multiplane image (MPI) representation (see the MPI rendering sketch after this list).
The experiments on both synthetic and real datasets demonstrate that our trained model works remarkably well and achieves state-of-the-art results.
arXiv Detail & Related papers (2022-05-24T02:57:16Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings, with three extensions tailored to this data; each extension provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- Semantic View Synthesis [56.47999473206778]
We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiplane image (MPI) representation prediction process.
arXiv Detail & Related papers (2020-08-24T17:59:46Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
arXiv Detail & Related papers (2020-04-09T17:59:06Z)
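Several entries above, most explicitly the single-RGBD indoor NVS paper, share one primitive: unproject an RGBD image into a point cloud and reproject it into the target camera. The NumPy sketch below illustrates that step under assumed pinhole conventions; it is an illustration rather than code from any of these papers, and the learned inpainting that fills the resulting holes is omitted.

```python
import numpy as np

def reproject_rgbd(rgb, depth, K, R, t):
    """Reproject an RGBD image to a new viewpoint via an explicit point cloud.

    rgb:   (H, W, 3) color image.
    depth: (H, W) per-pixel depth in the source camera frame.
    K:     (3, 3) pinhole intrinsics, assumed shared by both views.
    R, t:  pose taking source-frame points to the target frame
           (X_tgt = R @ X_src + t).
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).astype(np.float64)
    pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)  # unproject
    pts = pts @ R.T + t                                        # move to target frame
    z = pts[:, 2]
    keep = z > 1e-6                                            # in front of camera
    uv = (K @ pts[keep].T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    zk = z[keep]
    colors = rgb.reshape(-1, 3)[keep]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros_like(rgb)
    zbuf = np.full((h, w), np.inf)
    for ui, vi, zi, ci in zip(u[inside], v[inside], zk[inside], colors[inside]):
        if zi < zbuf[vi, ui]:                                  # z-test: keep nearest point
            zbuf[vi, ui] = zi
            out[vi, ui] = ci
    return out
```

Disoccluded pixels stay empty by construction, which is exactly the gap the inpainting networks in these papers are trained to fill.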
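The two MPI-based entries above (Learned Adaptive Multiplane Images and Semantic View Synthesis) render by warping fronto-parallel RGBA planes into the target view and alpha-compositing them. The sketch below shows that standard rendering path via plane-induced homographies; the pose convention (X_tgt = R @ X_src + t), the shared intrinsics, and all names are my assumptions, not details from those papers.

```python
import numpy as np
import cv2  # used only for the per-plane perspective warp

def render_mpi(planes, depths, K, R, t):
    """Render a novel view from a multiplane image (MPI).

    planes: list of (H, W, 4) float32 RGBA layers, fronto-parallel in the
            source camera, sorted far-to-near.
    depths: matching plane depths in the source frame.
    K:      (3, 3) intrinsics, assumed shared by source and target views.
    R, t:   pose taking source-frame points to the target frame
            (X_tgt = R @ X_src + t).
    """
    n = np.array([0.0, 0.0, 1.0])             # plane normal, source frame
    h, w = planes[0].shape[:2]
    out = np.zeros((h, w, 3), dtype=np.float32)
    K_inv = np.linalg.inv(K)
    for rgba, d in zip(planes, depths):       # far-to-near
        # Plane-induced homography (source pixels -> target pixels)
        # for the plane n^T X = d in the source frame.
        H = K @ (R + np.outer(t, n) / d) @ K_inv
        warped = cv2.warpPerspective(rgba, H, (w, h))
        rgb, a = warped[..., :3], warped[..., 3:4]
        out = rgb * a + out * (1.0 - a)       # "over" compositing
    return out
```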
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.