PanoDreamer: 3D Panorama Synthesis from a Single Image
- URL: http://arxiv.org/abs/2412.04827v1
- Date: Fri, 06 Dec 2024 07:42:48 GMT
- Title: PanoDreamer: 3D Panorama Synthesis from a Single Image
- Authors: Avinash Paliwal, Xilong Zhou, Andrii Tsarov, Nima Khademi Kalantari
- Abstract summary: PanoDreamer is a novel method for producing a coherent 360$^\circ$ 3D scene from a single input image.
We frame the problem as single-image panorama and depth estimation.
- Score: 7.654478445523882
- License:
- Abstract: In this paper, we present PanoDreamer, a novel method for producing a coherent 360$^\circ$ 3D scene from a single input image. Unlike existing methods that generate the scene sequentially, we frame the problem as single-image panorama and depth estimation. Once the coherent panoramic image and its corresponding depth are obtained, the scene can be reconstructed by inpainting the small occluded regions and projecting them into 3D space. Our key contribution is formulating single-image panorama and depth estimation as two optimization tasks and introducing alternating minimization strategies to effectively solve their objectives. We demonstrate that our approach outperforms existing techniques in single-image 360$^\circ$ scene reconstruction in terms of consistency and overall quality.
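The abstract's key contribution is solving two coupled objectives by alternating minimization: fix one set of variables, minimize over the other in closed form, and repeat. The paper's actual panorama and depth objectives are not given in this summary, so the following is only a minimal illustrative sketch of the alternating-minimization scheme on a stand-in quadratic objective (the function `f(x, y)` and the coupling weight `lam` are invented for illustration):

```python
def alternating_minimization(lam=1.0, iters=50):
    """Minimize the toy objective
        f(x, y) = (x - 1)^2 + (y - 2)^2 + lam * (x - y)^2
    by alternating closed-form updates, as a stand-in for the paper's
    two coupled panorama/depth objectives.
    """
    x, y = 0.0, 0.0
    for _ in range(iters):
        # Sub-problem in x (y fixed) is quadratic: solve d f / d x = 0.
        x = (1.0 + lam * y) / (1.0 + lam)
        # Sub-problem in y (x fixed) is likewise quadratic.
        y = (2.0 + lam * x) / (1.0 + lam)
    return x, y

x, y = alternating_minimization()
print(x, y)  # converges to the joint minimizer (4/3, 5/3)
```

Because each sub-problem here is strongly convex, every sweep is a contraction and the iterates converge to the joint minimizer; the paper applies the same alternating pattern to its (much larger) panorama and depth objectives.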
Related papers
- LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation [105.52153675890408]
3D immersive scene generation is a challenging yet critical task in computer vision and graphics.
LayerPano3D is a novel framework for full-view, explorable panoramic 3D scene generation from a single text prompt.
arXiv Detail & Related papers (2024-08-23T17:50:23Z)
- Pano2Room: Novel View Synthesis from a Single Indoor Panorama [20.262621556667852]
Pano2Room is designed to automatically reconstruct high-quality 3D indoor scenes from a single panoramic image.
The key idea is to initially construct a preliminary mesh from the input panorama, and iteratively refine this mesh using a panoramic RGBD inpainter.
The refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views.
arXiv Detail & Related papers (2024-08-21T08:19:12Z)
- DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting [56.101576795566324]
We present a text-to-3D 360$^\circ$ scene generation pipeline.
Our approach utilizes the generative power of a 2D diffusion model and prompt self-refinement.
Our method offers a globally consistent 3D scene within a 360$^\circ$ perspective.
arXiv Detail & Related papers (2024-04-10T10:46:59Z)
- Scaled 360 layouts: Revisiting non-central panoramas [5.2178708158547025]
We present a novel approach for 3D layout recovery of indoor environments using single non-central panoramas.
We exploit the properties of non-central projection systems in a new geometrical processing to recover the scaled layout.
arXiv Detail & Related papers (2024-02-02T14:55:36Z)
- PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift up a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization applications.
arXiv Detail & Related papers (2023-10-25T17:59:01Z)
- High-Resolution Depth Estimation for 360-degree Panoramas through Perspective and Panoramic Depth Images Registration [3.4583104874165804]
We propose a novel approach to compute high-resolution (2048x1024 and higher) depths for panoramas.
Our method generates qualitatively better results than existing panorama-based methods, and further outperforms them quantitatively on datasets unseen by these methods.
arXiv Detail & Related papers (2022-10-19T09:25:12Z)
- Solving Inverse Problems with NerfGANs [88.24518907451868]
We introduce a novel framework for solving inverse problems using NeRF-style generative models.
We show that naively optimizing the latent space leads to artifacts and poor novel view rendering.
We propose a novel radiance field regularization method to obtain better 3-D surfaces and improved novel views given single view observations.
arXiv Detail & Related papers (2021-12-16T17:56:58Z)
- DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization [66.25948693095604]
We propose a novel method for panoramic 3D scene understanding which recovers the 3D room layout and the shape, pose, position, and semantic category for each object from a single full-view panorama image.
Experiments demonstrate that our method outperforms existing methods on panoramic scene understanding in terms of both geometry accuracy and object arrangement.
arXiv Detail & Related papers (2021-08-24T13:55:29Z)
- Deep Multi Depth Panoramas for View Synthesis [70.9125433400375]
We present a novel scene representation - Multi Depth Panorama (MDP) - that consists of multiple RGBD$\alpha$ panoramas.
MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering.
arXiv Detail & Related papers (2020-08-04T20:29:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.