360-Degree Panorama Generation from Few Unregistered NFoV Images
- URL: http://arxiv.org/abs/2308.14686v1
- Date: Mon, 28 Aug 2023 16:21:51 GMT
- Title: 360-Degree Panorama Generation from Few Unregistered NFoV Images
- Authors: Jionghao Wang, Ziyu Chen, Jun Ling, Rong Xie and Li Song
- Abstract summary: 360$^\circ$ panoramas are extensively utilized as environmental light sources in computer graphics.
Capturing a 360$^\circ$ $\times$ 180$^\circ$ panorama poses challenges due to specialized and costly equipment.
We propose a novel pipeline called PanoDiff, which efficiently generates complete 360$^\circ$ panoramas.
- Score: 16.05306624008911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 360$^\circ$ panoramas are extensively utilized as environmental light sources
in computer graphics. However, capturing a 360$^\circ$ $\times$ 180$^\circ$
panorama poses challenges due to the necessity of specialized and costly
equipment, and additional human resources. Prior studies have developed various
learning-based generative methods to synthesize panoramas from a single Narrow
Field-of-View (NFoV) image, but they are limited in the input patterns they
accept, generation quality, and controllability. To address these issues, we
propose a novel pipeline called PanoDiff, which efficiently generates complete
360$^\circ$ panoramas from one or more unregistered NFoV images captured at
arbitrary angles. Our approach overcomes these limitations with two primary
components: first, a two-stage angle prediction module that handles varying
numbers of NFoV inputs; second, a novel latent diffusion-based panorama
generation model that takes the incomplete panorama and text prompts as control
signals and applies several geometric augmentation schemes to preserve
geometric properties in the generated panoramas. Experiments show that PanoDiff
achieves state-of-the-art panoramic generation quality and high
controllability, making it suitable for applications such as content editing.
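To make the "incomplete panorama" control signal concrete: an unregistered NFoV image with a known (or predicted) yaw/pitch can be splatted onto an equirectangular canvas, leaving the rest of the sphere empty for the generator to fill. The sketch below is not PanoDiff's implementation; the function name, default resolutions, and the nearest-neighbor sampling are illustrative assumptions using a standard pinhole-to-equirectangular mapping.

```python
import numpy as np

def nfov_to_partial_pano(img, fov_deg, yaw_deg, pitch_deg, pano_h=256, pano_w=512):
    """Project a pinhole (NFoV) image onto an equirectangular canvas.

    For each panorama pixel, compute its viewing direction on the unit
    sphere, rotate it into the camera frame, and sample the NFoV image if
    the ray falls inside the camera frustum. Pixels outside the field of
    view stay zero, yielding an incomplete panorama plus visibility mask.
    """
    h, w = img.shape[:2]
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)  # pinhole focal length (px)

    # Viewing direction (y-up convention) for every equirectangular pixel.
    lon = (np.arange(pano_w) + 0.5) / pano_w * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(pano_h) + 0.5) / pano_h * np.pi   # [pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)          # (H, W, 3)

    # Rotate world directions into the camera frame (yaw about y, pitch about x).
    y, p = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(y), 0, -np.sin(y)], [0, 1, 0], [np.sin(y), 0, np.cos(y)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    cam = dirs @ (Rx @ Ry).T

    # Keep rays in front of the camera; project with the pinhole model.
    z = cam[..., 2]
    valid = z > 1e-6
    safe_z = np.where(valid, z, 1.0)
    u = np.where(valid, f * cam[..., 0] / safe_z + w / 2, -1.0)
    v = np.where(valid, -f * cam[..., 1] / safe_z + h / 2, -1.0)
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    pano = np.zeros((pano_h, pano_w) + img.shape[2:], dtype=img.dtype)
    pano[inside] = img[v[inside].astype(int), u[inside].astype(int)]
    return pano, inside
```

A forward-facing 90° image, for instance, fills roughly the central sixth of the sphere, and the returned mask marks exactly which panorama pixels a diffusion model would need to inpaint.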
Related papers
- Taming Stable Diffusion for Text to 360° Panorama Image Generation [74.69314801406763]
We introduce a novel dual-branch diffusion model named PanFusion to generate a 360-degree image from a text prompt.
We propose a unique cross-attention mechanism with projection awareness to minimize distortion during the collaborative denoising process.
arXiv Detail & Related papers (2024-04-11T17:46:14Z) - DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting [56.101576795566324]
We present a text-to-3D 360$^\circ$ scene generation pipeline.
Our approach utilizes the generative power of a 2D diffusion model and prompt self-refinement.
Our method offers a globally consistent 3D scene within a 360$^\circ$ perspective.
arXiv Detail & Related papers (2024-04-10T10:46:59Z) - Scaled 360 layouts: Revisiting non-central panoramas [5.2178708158547025]
We present a novel approach for 3D layout recovery of indoor environments using single non-central panoramas.
We exploit the properties of non-central projection systems in a new geometrical processing to recover the scaled layout.
arXiv Detail & Related papers (2024-02-02T14:55:36Z) - PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization applications.
arXiv Detail & Related papers (2023-10-25T17:59:01Z) - PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline
Panoramas [54.4948540627471]
We propose PanoGRF, Generalizable Spherical Radiance Fields for Wide-baseline Panoramas.
Unlike generalizable radiance fields trained on perspective images, PanoGRF avoids the information loss from panorama-to-perspective conversion.
Results on multiple panoramic datasets demonstrate that PanoGRF significantly outperforms state-of-the-art generalizable view synthesis methods.
arXiv Detail & Related papers (2023-06-02T13:35:07Z) - Panoramic Image-to-Image Translation [37.9486466936501]
We tackle the challenging task of Panoramic Image-to-Image translation (Pano-I2I) for the first time.
This task is difficult due to the geometric distortion of panoramic images and the lack of a panoramic image dataset with diverse conditions, like weather or time.
We propose a panoramic distortion-aware I2I model that preserves the structure of the panoramic images while consistently translating their global style referenced from a pinhole image.
arXiv Detail & Related papers (2023-04-11T04:08:58Z) - Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation [73.48323921632506]
We address panoramic semantic segmentation which is under-explored due to two critical challenges.
First, we propose an upgraded Transformer for Panoramic Semantic Segmentation, i.e., Trans4PASS+, equipped with Deformable Patch Embedding (DPE) and Deformable MLP (DMLPv2) modules.
Second, we enhance the Mutual Prototypical Adaptation (MPA) strategy via pseudo-label rectification for unsupervised domain adaptive panoramic segmentation.
Third, aside from Pinhole-to-Panoramic (Pin2Pan) adaptation, we create a new dataset (SynPASS) with 9,080 panoramic images.
arXiv Detail & Related papers (2022-07-25T00:42:38Z) - Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single
Panorama [13.60790015417166]
We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first method for parallax-enabled novel panoramic view synthesis.
We propose to augment the single RGB-D panorama by projecting back and forth between a 3D world and different 2D panoramic coordinates at different virtual camera positions.
As a result, the proposed OmniNeRF achieves convincing renderings of novel panoramic views that exhibit the parallax effect.
arXiv Detail & Related papers (2021-06-21T05:08:34Z) - Deep Multi Depth Panoramas for View Synthesis [70.9125433400375]
We present a novel scene representation - Multi Depth Panorama (MDP) - that consists of multiple RGBD$\alpha$ panoramas.
MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering.
arXiv Detail & Related papers (2020-08-04T20:29:15Z)
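Several of the methods listed above (e.g. PERF, OmniNeRF, MDP) depend on projecting back and forth between equirectangular pixel coordinates and 3D viewing directions. A minimal sketch of that round trip, assuming a y-up convention and illustrative function names not taken from any of the papers:

```python
import numpy as np

def pano_uv_to_dir(u, v, width, height):
    """Map equirectangular pixel coords (u right, v down) to a unit 3D direction."""
    lon = (u + 0.5) / width * 2 * np.pi - np.pi    # longitude in [-pi, pi)
    lat = np.pi / 2 - (v + 0.5) / height * np.pi   # latitude in (-pi/2, pi/2)
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def dir_to_pano_uv(d, width, height):
    """Inverse mapping: unit 3D direction back to equirectangular pixel coords."""
    d = d / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))
    u = (lon + np.pi) / (2 * np.pi) * width - 0.5
    v = (np.pi / 2 - lat) / np.pi * height - 0.5
    return u, v
```

Composing the two functions is the identity on pixel coordinates, which is what lets these methods move a sample between a 2D panorama and a 3D radiance field (or a shifted virtual camera) without losing its position on the sphere.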
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.