Non-central panorama indoor dataset
- URL: http://arxiv.org/abs/2401.17075v1
- Date: Tue, 30 Jan 2024 14:56:59 GMT
- Title: Non-central panorama indoor dataset
- Authors: Bruno Berenguel-Baeta, Jesus Bermudez-Cameo, Jose J. Guerrero
- Abstract summary: We present the first dataset of non-central panoramas for indoor scene understanding.
The dataset is composed of 2574 RGB non-central panoramas taken in around 650 different rooms.
- Score: 5.2178708158547025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Omnidirectional images are one of the main sources of information for
learning-based scene understanding algorithms. However, annotated datasets of
omnidirectional images cannot keep pace with the development of these
learning-based algorithms. Among the different types of panoramas, and in
contrast to standard central ones, non-central panoramas encode geometrical
information in the distortion of the image, from which 3D information about the
environment can be retrieved [2]. However, due to the lack of commercial
non-central devices, until now there was no dataset of this kind of panorama.
In this data paper, we present the first dataset of non-central panoramas for
indoor scene understanding. The dataset is composed of 2574 RGB non-central
panoramas taken in around 650 different rooms. Each panorama has an associated
depth map and annotations for recovering the room layout from the image: a
structural edge map, the list of corners in the image, the 3D corners of the
room, and the camera pose. The images are taken from photorealistic virtual
environments and automatically annotated pixel-wise.
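The annotations listed above (depth map, structural edge map, image corners, 3D corners, camera pose) suggest a per-sample record along the following lines. This is a minimal sketch only; all file names, field names, and the pose parameterization are hypothetical and may differ from the released dataset.

```python
# Hypothetical sketch of one dataset sample; the actual directory layout and
# annotation file formats of the released dataset may differ.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PanoramaSample:
    rgb_path: str                                  # non-central RGB panorama
    depth_path: str                                # associated depth map
    edge_map_path: str                             # structural edge map of the layout
    image_corners: List[Tuple[float, float]]       # layout corners in image coords (u, v)
    corners_3d: List[Tuple[float, float, float]]   # 3D room corners (x, y, z)
    camera_pose: Tuple[float, ...]                 # camera pose, e.g. position + orientation

# Illustrative sample with made-up paths and coordinates.
sample = PanoramaSample(
    rgb_path="room_0001/pano.png",
    depth_path="room_0001/depth.png",
    edge_map_path="room_0001/edges.png",
    image_corners=[(120.5, 340.0), (512.0, 338.2), (910.7, 341.1)],
    corners_3d=[(1.2, 0.0, 2.5), (3.4, 0.0, 2.5), (3.4, 2.8, 2.5)],
    camera_pose=(0.0, 0.0, 1.6, 0.0, 0.0, 0.0),
)
print(len(sample.image_corners), len(sample.corners_3d))  # 3 3
```

Keeping the image corners and 3D corners as parallel lists preserves the correspondence between 2D annotations and the metric room layout.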
Related papers
- DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion [60.45000652592418]
We propose a novel text-driven panoramic generation framework, DiffPano, to achieve scalable, consistent, and diverse panoramic scene generation.
We show that DiffPano can generate consistent, diverse panoramic images with given unseen text descriptions and camera poses.
arXiv Detail & Related papers (2024-10-31T17:57:02Z)
- 360 in the Wild: Dataset for Depth Prediction and View Synthesis [66.58513725342125]
We introduce a large-scale 360° video dataset captured in the wild.
This dataset has been carefully scraped from the Internet and has been captured from various locations worldwide.
Each of the 25K images constituting our dataset is provided with its respective camera's pose and depth map.
arXiv Detail & Related papers (2024-06-27T05:26:38Z)
- Scaled 360 layouts: Revisiting non-central panoramas [5.2178708158547025]
We present a novel approach for 3D layout recovery of indoor environments using single non-central panoramas.
We exploit the properties of non-central projection systems in a new geometrical processing to recover the scaled layout.
arXiv Detail & Related papers (2024-02-02T14:55:36Z)
- OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision [5.2178708158547025]
We present a tool for generating datasets of omnidirectional images with semantic and depth information.
These images are synthesized from a set of captures that are acquired in a realistic virtual environment for Unreal Engine 4.
Our tool includes photorealistic non-central projection systems, such as non-central panoramas and non-central catadioptric systems.
arXiv Detail & Related papers (2024-01-30T14:40:19Z)
- PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline Panoramas [54.4948540627471]
We propose PanoGRF, Generalizable Spherical Radiance Fields for Wide-baseline Panoramas.
Unlike generalizable radiance fields trained on perspective images, PanoGRF avoids the information loss from panorama-to-perspective conversion.
Results on multiple panoramic datasets demonstrate that PanoGRF significantly outperforms state-of-the-art generalizable view synthesis methods.
arXiv Detail & Related papers (2023-06-02T13:35:07Z)
- PanoContext-Former: Panoramic Total Scene Understanding with a Transformer [37.51637352106841]
Panoramic images enable a deeper understanding and more holistic perception of the 360° surrounding environment.
In this paper, we propose a novel method using depth prior for holistic indoor scene understanding.
In addition, we introduce a real-world dataset for scene understanding, including photo-realistic panoramas, high-fidelity depth images, accurately annotated room layouts, and oriented object bounding boxes and shapes.
arXiv Detail & Related papers (2023-05-21T16:20:57Z)
- DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization [66.25948693095604]
We propose a novel method for panoramic 3D scene understanding which recovers the 3D room layout and the shape, pose, position, and semantic category for each object from a single full-view panorama image.
Experiments demonstrate that our method outperforms existing methods on panoramic scene understanding in terms of both geometry accuracy and object arrangement.
arXiv Detail & Related papers (2021-08-24T13:55:29Z)
- Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single Panorama [13.60790015417166]
We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first method for parallax-enabled novel panoramic view synthesis.
We propose to augment the single RGB-D panorama by projecting back and forth between a 3D world and different 2D panoramic coordinates at different virtual camera positions.
As a result, the proposed OmniNeRF achieves convincing renderings of novel panoramic views that exhibit the parallax effect.
arXiv Detail & Related papers (2021-06-21T05:08:34Z)
- Geometry-Guided Street-View Panorama Synthesis from Satellite Imagery [80.6282101835164]
We present a new approach for synthesizing a novel street-view panorama given an overhead satellite image.
Our method generates a Google Street View-style omnidirectional panorama, as if it were captured from the same geographical location as the center of the satellite patch.
arXiv Detail & Related papers (2021-03-02T10:27:05Z)
- Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination [84.00096195633793]
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.
Our model is trained without any ground truth 3D data and only requires a held-out perspective view near the input stereo pair and a spherical panorama taken within each scene as supervision.
We demonstrate that our method can predict consistent spatially-varying lighting that is convincing enough to plausibly relight and insert highly specular virtual objects into real images.
arXiv Detail & Related papers (2020-03-18T17:46:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.