PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline
Panoramas
- URL: http://arxiv.org/abs/2306.01531v2
- Date: Wed, 6 Dec 2023 03:39:15 GMT
- Title: PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline
Panoramas
- Authors: Zheng Chen, Yan-Pei Cao, Yuan-Chen Guo, Chen Wang, Ying Shan, Song-Hai
Zhang
- Abstract summary: We propose PanoGRF, Generalizable Spherical Radiance Fields for Wide-baseline Panoramas.
Unlike generalizable radiance fields trained on perspective images, PanoGRF avoids the information loss from panorama-to-perspective conversion.
Results on multiple panoramic datasets demonstrate that PanoGRF significantly outperforms state-of-the-art generalizable view synthesis methods.
- Score: 54.4948540627471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving an immersive experience enabling users to explore virtual
environments with six degrees of freedom (6DoF) is essential for various
applications such as virtual reality (VR). Wide-baseline panoramas are commonly
used in these applications to reduce network bandwidth and storage
requirements. However, synthesizing novel views from these panoramas remains a
key challenge. Although existing neural radiance field methods can produce
photorealistic views under narrow-baseline and dense image captures, they tend
to overfit the training views when dealing with \emph{wide-baseline} panoramas
due to the difficulty in learning accurate geometry from sparse $360^{\circ}$
views. To address this problem, we propose PanoGRF, Generalizable Spherical
Radiance Fields for Wide-baseline Panoramas, which constructs spherical radiance
fields incorporating $360^{\circ}$ scene priors. Unlike generalizable radiance
fields trained on perspective images, PanoGRF avoids the information loss from
panorama-to-perspective conversion and directly aggregates geometry and
appearance features of 3D sample points from each panoramic view based on
spherical projection. Moreover, as some regions of the panorama are only
visible from one view but invisible from others under wide-baseline settings,
PanoGRF incorporates $360^{\circ}$ monocular depth priors into spherical depth
estimation to improve the geometry features. Experimental results on multiple
panoramic datasets demonstrate that PanoGRF significantly outperforms
state-of-the-art generalizable view synthesis methods for wide-baseline
panoramas (e.g., OmniSyn) and perspective images (e.g., IBRNet, NeuRay).
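For intuition, the spherical projection described above can be sketched as follows: each 3D sample point is converted into a unit direction from a source panorama's center and then mapped to continuous equirectangular pixel coordinates, where geometry and appearance features can be sampled. This is a minimal sketch of the standard equirectangular mapping, not the authors' code; the function name, +y-up axis convention, and NumPy implementation are illustrative assumptions.

```python
import numpy as np

def spherical_project(points_world, cam_center, pano_h, pano_w):
    """Map 3D points to equirectangular (u, v) pixel coordinates.

    Sketch only: assumes +y up, longitude measured in the x-z plane,
    and an identity camera rotation for the source panorama.
    """
    # Unit directions from the panorama's optical center to each point.
    dirs = points_world - cam_center                    # (N, 3)
    dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)

    lon = np.arctan2(dirs[:, 2], dirs[:, 0])            # azimuth in [-pi, pi]
    lat = np.arcsin(np.clip(dirs[:, 1], -1.0, 1.0))     # elevation in [-pi/2, pi/2]

    # Angles -> continuous equirectangular pixel coordinates.
    u = (lon / (2.0 * np.pi) + 0.5) * pano_w            # column, wraps at the seam
    v = (0.5 - lat / np.pi) * pano_h                    # row, 0 at the zenith
    return np.stack([u, v], axis=-1)                    # (N, 2)

# Hypothetical usage on a 512x1024 panorama centered at the origin:
pts = np.array([[1.0, 0.5, 2.0], [-3.0, 0.0, 1.0]])
uv = spherical_project(pts, cam_center=np.zeros(3), pano_h=512, pano_w=1024)
```

In a full pipeline, the returned (u, v) coordinates would drive bilinear feature sampling (e.g., torch.nn.functional.grid_sample), with longitude wrap-around handled explicitly at the panorama's left/right seam.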
Related papers
- DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion [60.45000652592418]
We propose a novel text-driven panoramic generation framework, DiffPano, to achieve scalable, consistent, and diverse panoramic scene generation.
We show that DiffPano can generate consistent, diverse panoramic images with given unseen text descriptions and camera poses.
arXiv Detail & Related papers (2024-10-31T17:57:02Z)
- LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation [105.52153675890408]
3D immersive scene generation is a challenging yet critical task in computer vision and graphics.
LayerPano3D is a novel framework for full-view, explorable panoramic 3D scene generation from a single text prompt.
arXiv Detail & Related papers (2024-08-23T17:50:23Z)
- PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications such as panorama-to-3D, text-to-3D, and 3D scene stylization.
arXiv Detail & Related papers (2023-10-25T17:59:01Z)
- 360-Degree Panorama Generation from Few Unregistered NFoV Images [16.05306624008911]
$360^{\circ}$ panoramas are extensively utilized as environmental light sources in computer graphics.
Capturing a $360^{\circ} \times 180^{\circ}$ panorama poses challenges due to the specialized and costly equipment required.
We propose a novel pipeline called PanoDiff, which efficiently generates complete $360^{\circ}$ panoramas.
arXiv Detail & Related papers (2023-08-28T16:21:51Z)
- OmniSyn: Synthesizing 360 Videos with Wide-baseline Panoramas [27.402727637562403]
Google Street View and Bing Streetside provide immersive maps with a massive collection of panoramas.
These panoramas are only available at sparse intervals along the paths where they were captured, resulting in visual discontinuities during navigation.
We present OmniSyn, a novel pipeline for $360^{\circ}$ view synthesis between wide-baseline panoramas.
arXiv Detail & Related papers (2022-02-17T16:44:17Z)
- Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single Panorama [13.60790015417166]
We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first method for parallax-enabled novel panoramic view synthesis.
We propose to augment the single RGB-D panorama by projecting back and forth between a 3D world and different 2D panoramic coordinates at different virtual camera positions.
As a result, the proposed OmniNeRF achieves convincing renderings of novel panoramic views that exhibit the parallax effect.
arXiv Detail & Related papers (2021-06-21T05:08:34Z)
- Deep Multi Depth Panoramas for View Synthesis [70.9125433400375]
We present a novel scene representation - Multi Depth Panorama (MDP) - that consists of multiple RGBD$\alpha$ panoramas.
MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering.
arXiv Detail & Related papers (2020-08-04T20:29:15Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.