OmniSyn: Synthesizing 360 Videos with Wide-baseline Panoramas
- URL: http://arxiv.org/abs/2202.08752v1
- Date: Thu, 17 Feb 2022 16:44:17 GMT
- Title: OmniSyn: Synthesizing 360 Videos with Wide-baseline Panoramas
- Authors: David Li, Yinda Zhang, Christian Häne, Danhang Tang, Amitabh
Varshney, Ruofei Du
- Abstract summary: Google Street View and Bing Streetside provide immersive maps with a massive collection of panoramas.
These panoramas are only available at sparse intervals along the paths where they were captured, resulting in visual discontinuities during navigation.
We present OmniSyn, a novel pipeline for 360° view synthesis between wide-baseline panoramas.
- Score: 27.402727637562403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Immersive maps such as Google Street View and Bing Streetside provide
true-to-life views with a massive collection of panoramas. However, these
panoramas are only available at sparse intervals along the paths where they were captured,
resulting in visual discontinuities during navigation. Prior art in view
synthesis is usually built upon a set of perspective images, a pair of
stereoscopic images, or a monocular image, but barely examines wide-baseline
panoramas, which are widely adopted in commercial platforms to optimize
bandwidth and storage usage. In this paper, we leverage the unique
characteristics of wide-baseline panoramas and present OmniSyn, a novel
pipeline for 360° view synthesis between wide-baseline panoramas. OmniSyn
predicts omnidirectional depth maps using a spherical cost volume and a
monocular skip connection, renders meshes in 360° images, and synthesizes
intermediate views with a fusion network. We demonstrate the effectiveness of
OmniSyn via comprehensive experimental results including comparison with the
state-of-the-art methods on CARLA and Matterport datasets, ablation studies,
and generalization studies on street views. We envision that our work may inspire
future research on this underexplored real-world task and eventually produce a
smoother experience for navigating immersive maps.
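The abstract describes a three-stage pipeline: omnidirectional depth from a spherical cost volume, mesh rendering at the target pose, and fusion of the two renders. Below is a minimal, hypothetical PyTorch sketch of that structure; the module names, toy layers, and the injected `render_fn` are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
import torch
import torch.nn as nn

class SphericalDepthNet(nn.Module):
    """Stage 1 stand-in: predict an omnidirectional depth map per panorama.
    The paper builds a spherical cost volume over depth hypotheses plus a
    monocular skip connection; this toy network only mimics the interface."""
    def __init__(self, n_hypotheses: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_hypotheses, 3, padding=1),
        )
        # Hypotheses sampled uniformly in inverse depth, as is common for cost volumes.
        self.register_buffer("depths", 1.0 / torch.linspace(0.1, 1.0, n_hypotheses))

    def forward(self, pano: torch.Tensor) -> torch.Tensor:
        # pano: (B, 3, H, W) equirectangular image -> (B, 1, H, W) depth
        prob = self.net(pano).softmax(dim=1)          # per-pixel hypothesis weights
        return (prob * self.depths.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)

class FusionNet(nn.Module):
    """Stage 3 stand-in: merge the two panoramas rendered at the target pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, render_a: torch.Tensor, render_b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([render_a, render_b], dim=1))

def synthesize(pano_a, pano_b, depth_net, fusion_net, render_fn, t=0.5):
    """Stage 2, rendering meshes at the intermediate pose t, is delegated to
    render_fn (e.g. a differentiable rasterizer) and not reproduced here."""
    depth_a, depth_b = depth_net(pano_a), depth_net(pano_b)
    return fusion_net(render_fn(pano_a, depth_a, t), render_fn(pano_b, depth_b, 1.0 - t))
```

Delegating stage 2 to an injected `render_fn` keeps the sketch honest: the paper renders textured meshes, which a few toy layers cannot stand in for.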
Related papers
- DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion [60.45000652592418]
We propose a novel text-driven panoramic generation framework, DiffPano, to achieve scalable, consistent, and diverse panoramic scene generation.
We show that DiffPano can generate consistent, diverse panoramic images with given unseen text descriptions and camera poses.
arXiv Detail & Related papers (2024-10-31T17:57:02Z)
- VidPanos: Generative Panoramic Videos from Casual Panning Videos [73.77443496436749]
Panoramic image stitching provides a unified, wide-angle view of a scene that extends beyond the camera's field of view.
We present a method for synthesizing a panoramic video from a casually-captured panning video.
Our system can create video panoramas for a range of in-the-wild scenes including people, vehicles, and flowing water.
arXiv Detail & Related papers (2024-10-17T17:53:24Z)
- Mixed-View Panorama Synthesis using Geospatially Guided Diffusion [15.12293324464805]
We introduce the task of mixed-view panorama synthesis.
The goal is to synthesize a novel panorama given a small set of input panoramas and a satellite image of the area.
arXiv Detail & Related papers (2024-07-12T20:12:07Z)
- See360: Novel Panoramic View Interpolation [24.965259708297932]
See360 is a versatile and efficient framework for 360° panoramic view interpolation using latent space viewpoint estimation.
We show that the proposed method is generic enough to achieve real-time rendering of arbitrary views for four datasets.
arXiv Detail & Related papers (2024-01-07T09:17:32Z)
- PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline Panoramas [54.4948540627471]
We propose PanoGRF, Generalizable Spherical Radiance Fields for Wide-baseline Panoramas.
Unlike generalizable radiance fields trained on perspective images, PanoGRF avoids the information loss from panorama-to-perspective conversion.
Results on multiple panoramic datasets demonstrate that PanoGRF significantly outperforms state-of-the-art generalizable view synthesis methods.
arXiv Detail & Related papers (2023-06-02T13:35:07Z)
- HORIZON: High-Resolution Semantically Controlled Panorama Synthesis [105.55531244750019]
Panorama synthesis endeavors to craft captivating 360-degree visual landscapes, immersing users in the heart of virtual worlds.
Recent breakthroughs in visual synthesis have unlocked the potential for semantic control in 2D flat images, but a direct application of these methods to panorama synthesis yields distorted content.
We unveil an innovative framework for generating high-resolution panoramas, adeptly addressing the issues of spherical distortion and edge discontinuity through sophisticated spherical modeling.
arXiv Detail & Related papers (2022-10-10T09:43:26Z)
- OmniCity: Omnipotent City Understanding with Multi-level and Multi-view Images [72.4144257192959]
The paper presents OmniCity, a new dataset for omnipotent city understanding from multi-level and multi-view images.
The dataset contains over 100K pixel-wise annotated images that are well-aligned and collected from 25K geo-locations in New York City.
With the new OmniCity dataset, we provide benchmarks for a variety of tasks including building footprint extraction, height estimation, and building plane/instance/fine-grained segmentation.
arXiv Detail & Related papers (2022-08-01T15:19:25Z)
- Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single Panorama [13.60790015417166]
We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first method for parallax-enabled novel panoramic view synthesis.
We propose to augment the single RGB-D panorama by projecting back and forth between a 3D world and different 2D panoramic coordinates at different virtual camera positions.
As a result, the proposed OmniNeRF achieves convincing renderings of novel panoramic views that exhibit the parallax effect.
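As a rough illustration of that back-and-forth projection, the NumPy sketch below lifts equirectangular pixels to 3D points using depth, translates the virtual camera, and projects the points back to panoramic coordinates. The coordinate conventions (y-up, +z forward) and function names are assumptions, not OmniNeRF's implementation.

```python
# Hypothetical sketch of equirectangular unprojection and reprojection.
import numpy as np

def pano_to_points(depth: np.ndarray) -> np.ndarray:
    """Lift an (H, W) equirectangular depth map to (H, W, 3) world points."""
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi   # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi   # latitude in (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)                       # both (H, W)
    rays = np.stack([np.cos(lat) * np.sin(lon),            # unit view directions
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return rays * depth[..., None]

def points_to_pano(points: np.ndarray, h: int, w: int):
    """Project (..., 3) points to pixel coordinates in a target panorama."""
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    r = np.linalg.norm(points, axis=-1)
    lon = np.arctan2(x, z)
    lat = np.arcsin(np.clip(y / np.maximum(r, 1e-8), -1.0, 1.0))
    u = (lon + np.pi) / (2.0 * np.pi) * w - 0.5
    v = (np.pi / 2.0 - lat) / np.pi * h - 0.5
    return u, v, r   # r doubles as depth for z-buffer resolution of collisions

# Example: view a constant-depth panorama from a camera moved 0.2 units forward.
points = pano_to_points(np.full((256, 512), 2.0)) - np.array([0.0, 0.0, 0.2])
u, v, r = points_to_pano(points, 256, 512)
```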
arXiv Detail & Related papers (2021-06-21T05:08:34Z)
- Deep Multi Depth Panoramas for View Synthesis [70.9125433400375]
We present a novel scene representation, Multi Depth Panorama (MDP), that consists of multiple RGBDα panoramas.
MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering.
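As a sketch of what a layered RGBDα representation implies for rendering, the snippet below composites panorama layers far-to-near with the standard over operator. The array layout and the compositing rule are assumptions drawn from common practice, not the paper's implementation.

```python
# Hypothetical over-compositing of layered RGBD-alpha panoramas.
import numpy as np

def composite_mdp(layers):
    """layers: list of (H, W, 5) arrays holding (R, G, B, depth, alpha),
    assumed sorted far-to-near; returns the (H, W, 3) composited image.
    The depth channel would drive sorting and reprojection; it is unused here."""
    out = np.zeros(layers[0].shape[:2] + (3,))
    for layer in layers:                                  # far-to-near "over"
        rgb, alpha = layer[..., :3], layer[..., 4:5]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Example: an opaque distant backdrop under a nearer half-transparent layer.
far = np.concatenate([np.full((4, 8, 3), 0.2), np.full((4, 8, 1), 10.0),
                      np.ones((4, 8, 1))], axis=-1)
near = np.concatenate([np.full((4, 8, 3), 0.9), np.full((4, 8, 1), 2.0),
                       np.full((4, 8, 1), 0.5)], axis=-1)
image = composite_mdp([far, near])
```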
arXiv Detail & Related papers (2020-08-04T20:29:15Z)