Scaled 360 layouts: Revisiting non-central panoramas
- URL: http://arxiv.org/abs/2402.01466v1
- Date: Fri, 2 Feb 2024 14:55:36 GMT
- Title: Scaled 360 layouts: Revisiting non-central panoramas
- Authors: Bruno Berenguel-Baeta, Jesus Bermudez-Cameo, Jose J. Guerrero
- Abstract summary: We present a novel approach for 3D layout recovery of indoor environments using single non-central panoramas.
We exploit the properties of non-central projection systems in a new geometrical processing to recover the scaled layout.
- Score: 5.2178708158547025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: From a non-central panorama, 3D lines can be recovered by geometric
reasoning. However, their sensitivity to noise and the complex geometric
modeling required have led to these panoramas being little investigated. In
this work we present a novel approach for 3D layout recovery of indoor
environments using single non-central panoramas. We obtain the boundaries of
the structural lines of the room from a non-central panorama using deep
learning and exploit the properties of non-central projection systems in a new
geometrical processing to recover the scaled layout. We solve the problem for
Manhattan environments, handling occlusions, and also for Atlanta environments
in a unified method. The experiments performed improve on state-of-the-art
methods for 3D layout recovery from a single panorama. Our approach is the
first work using deep learning with non-central panoramas and recovering the
scale of single panorama layouts.
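The key geometric property the abstract relies on is that in a non-central panorama every pixel has its own projection ray (the rays do not share a common center), so a single image already constrains a 3D line with metric scale. A minimal sketch of that idea (my own illustration, not the paper's code): two lines with Plücker coordinates (l1, m1) and (l2, m2) intersect iff l1·m2 + l2·m1 = 0, which is linear in the unknown line, so a 3D line meeting several known projection rays can be recovered from the null space of a small linear system.

```python
import numpy as np

def ray(origin, target):
    """Projection ray through `origin` toward `target`, as (direction, moment)."""
    d = target - origin
    d = d / np.linalg.norm(d)
    return d, np.cross(origin, d)

def fit_line_from_rays(rays):
    """Least-squares 3D line (l, m) meeting all given rays.

    Each ray (d_i, m_i) that intersects the line (l, m) satisfies
    l . m_i + m . d_i = 0, linear in the 6-vector [l; m]; the line is
    the null vector of the stacked constraint matrix (smallest singular
    vector), normalized so |l| = 1.
    """
    A = np.array([np.concatenate([m_i, d_i]) for d_i, m_i in rays])
    v = np.linalg.svd(A)[2][-1]        # right singular vector, smallest sigma
    l, m = v[:3], v[3:]
    s = np.linalg.norm(l)
    return l / s, m / s

# Synthetic non-central setup: ray origins on a circle (as in a circular
# acquisition system), all rays observing points of one 3D line.
p0 = np.array([1.0, 2.0, 1.5])                 # point on the true line
u = np.array([0.3, -0.2, 1.0])
u = u / np.linalg.norm(u)                      # true line direction
rays = []
for k in range(6):
    a = 2 * np.pi * k / 6
    o = 0.5 * np.array([np.cos(a), np.sin(a), 0.0])  # non-central: distinct origins
    rays.append(ray(o, p0 + (k - 3) * 0.4 * u))

l, m = fit_line_from_rays(rays)
point = np.cross(l, m)   # closest point of the recovered line to the origin
```

With a central camera all ray origins coincide, the moments m_i become linearly dependent, and the scale of the line is lost; the distinct origins are exactly what makes single-image scaled recovery possible here.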
Related papers
- Pano2Room: Novel View Synthesis from a Single Indoor Panorama [20.262621556667852]
Pano2Room is designed to automatically reconstruct high-quality 3D indoor scenes from a single panoramic image.
The key idea is to initially construct a preliminary mesh from the input panorama, and iteratively refine this mesh using a panoramic RGBD inpainter.
The refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views.
arXiv Detail & Related papers (2024-08-21T08:19:12Z) - Non-central panorama indoor dataset [5.2178708158547025]
We present the first dataset of non-central panoramas for indoor scene understanding.
The dataset is composed of 2574 RGB non-central panoramas taken in around 650 different rooms.
arXiv Detail & Related papers (2024-01-30T14:56:59Z) - Atlanta Scaled layouts from non-central panoramas [5.2178708158547025]
We present a novel approach for 3D layout recovery of indoor environments using a non-central acquisition system.
Our approach is the first work using deep learning on non-central panoramas and recovering scaled layouts from single panoramas.
arXiv Detail & Related papers (2024-01-30T14:39:38Z) - PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift up a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization applications.
arXiv Detail & Related papers (2023-10-25T17:59:01Z) - PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline
Panoramas [54.4948540627471]
We propose PanoGRF, Generalizable Spherical Radiance Fields for Wide-baseline Panoramas.
Unlike generalizable radiance fields trained on perspective images, PanoGRF avoids the information loss from panorama-to-perspective conversion.
Results on multiple panoramic datasets demonstrate that PanoGRF significantly outperforms state-of-the-art generalizable view synthesis methods.
arXiv Detail & Related papers (2023-06-02T13:35:07Z) - PanoViT: Vision Transformer for Room Layout Estimation from a Single
Panoramic Image [11.053777620735175]
PanoViT is a panorama vision transformer to estimate the room layout from a single panoramic image.
Compared to CNN models, our PanoViT is more proficient in learning global information from the panoramic image.
Our method outperforms state-of-the-art solutions in room layout prediction accuracy.
arXiv Detail & Related papers (2022-12-23T05:37:11Z) - GPR-Net: Multi-view Layout Estimation via a Geometry-aware Panorama
Registration Network [44.06968418800436]
We present a complete panoramic layout estimation framework that jointly learns panorama registration and layout estimation given a pair of panoramas.
The major improvement over PSMNet comes from a novel Geometry-aware Panorama Registration Network or GPR-Net.
Experimental results indicate that our method achieves state-of-the-art performance in both panorama registration and layout estimation on a large-scale indoor panorama dataset ZInD.
arXiv Detail & Related papers (2022-10-20T17:10:41Z) - Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into the recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z) - Solving Inverse Problems with NerfGANs [88.24518907451868]
We introduce a novel framework for solving inverse problems using NeRF-style generative models.
We show that naively optimizing the latent space leads to artifacts and poor novel view rendering.
We propose a novel radiance field regularization method to obtain better 3-D surfaces and improved novel views given single view observations.
arXiv Detail & Related papers (2021-12-16T17:56:58Z) - LED2-Net: Monocular 360 Layout Estimation via Differentiable Depth
Rendering [59.63979143021241]
We formulate the task of 360 layout estimation as a problem of predicting depth on the horizon line of a panorama.
We propose the Differentiable Depth Rendering procedure to make the conversion from layout to depth prediction differentiable.
Our method achieves state-of-the-art performance on numerous 360 layout benchmark datasets.
arXiv Detail & Related papers (2021-04-01T15:48:41Z) - Deep Multi Depth Panoramas for View Synthesis [70.9125433400375]
We present a novel scene representation - Multi Depth Panorama (MDP) - that consists of multiple RGBDα panoramas.
MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering.
arXiv Detail & Related papers (2020-08-04T20:29:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.