High-Resolution Depth Estimation for 360-degree Panoramas through
Perspective and Panoramic Depth Images Registration
- URL: http://arxiv.org/abs/2210.10414v2
- Date: Thu, 20 Oct 2022 00:37:25 GMT
- Title: High-Resolution Depth Estimation for 360-degree Panoramas through
Perspective and Panoramic Depth Images Registration
- Authors: Chi-Han Peng and Jiayao Zhang
- Abstract summary: We propose a novel approach to compute high-resolution (2048x1024 and higher) depths for panoramas.
Our method generates qualitatively better results than existing panorama-based methods, and further outperforms them quantitatively on datasets unseen by these methods.
- Score: 3.4583104874165804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel approach to compute high-resolution (2048x1024 and higher)
depths for panoramas that is significantly faster and both qualitatively and
quantitatively more accurate than the current state-of-the-art method
(360MonoDepth). As traditional neural network-based methods have limitations in
the output image sizes (up to 1024x512) due to GPU memory constraints, both
360MonoDepth and our method rely on stitching multiple perspective disparity or
depth images to produce a unified panoramic depth map. However, to achieve
globally consistent stitching, 360MonoDepth relies on solving extensive
disparity map alignment and Poisson-based blending problems, leading to high
computation time. Instead, we propose to use an existing panoramic depth map
(computed in real-time by any panorama-based method) as the common target for
the individual perspective depth maps to register to. This key idea turns
producing globally consistent stitching results into a straightforward task.
Our experiments show that our method generates qualitatively better results
than existing panorama-based methods, and further outperforms them
quantitatively on datasets unseen by these methods.
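The registration idea above can be sketched as fitting each perspective depth map to the panoramic target with a per-view least-squares scale and shift. This is a hypothetical simplification, not the paper's actual registration procedure; the function name `register_depth` is illustrative:

```python
import numpy as np


def register_depth(perspective, target, valid=None):
    """Align a perspective depth map to a panoramic target depth map
    (sampled on the same pixel grid) by a least-squares scale and shift.

    This is a simplified stand-in for the paper's registration step:
    it solves min over (s, t) of ||s * x + t - y||^2, where x are the
    perspective depths and y the target depths at valid pixels.
    """
    if valid is None:
        valid = np.isfinite(perspective) & np.isfinite(target)
    x = perspective[valid].ravel()
    y = target[valid].ravel()
    # Build the linear system [x, 1] @ [s, t]^T = y and solve it.
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    # Apply the fitted affine correction to the whole map.
    return s * perspective + t
```

Because every perspective map is aligned to the same global target, the per-view fits are independent and no joint alignment or Poisson blending problem needs to be solved.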
Related papers
- Refinement of Monocular Depth Maps via Multi-View Differentiable Rendering [4.717325308876748]
We present a novel approach to generate view consistent and detailed depth maps from a number of posed images.
We leverage advances in monocular depth estimation, which generate topologically complete, but metrically inaccurate depth maps.
Our method is able to generate dense, detailed, high-quality depth maps, also in challenging indoor scenarios, and outperforms state-of-the-art depth reconstruction approaches.
arXiv Detail & Related papers (2024-10-04T18:50:28Z) - Robust and Flexible Omnidirectional Depth Estimation with Multiple 360° Cameras [8.850391039025077]
We use geometric constraints and redundant information of multiple 360-degree cameras to achieve robust and flexible omnidirectional depth estimation.
Our two algorithms achieve state-of-the-art performance, accurately predicting depth maps even when provided with soiled panorama inputs.
arXiv Detail & Related papers (2024-09-23T07:31:48Z) - Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
arXiv Detail & Related papers (2024-08-26T04:56:41Z) - Generative Powers of Ten [60.6740997942711]
We present a method that uses a text-to-image model to generate consistent content across multiple image scales.
We achieve this through a joint multi-scale diffusion sampling approach.
Our method enables deeper levels of zoom than traditional super-resolution methods.
arXiv Detail & Related papers (2023-12-04T18:59:25Z) - SphereDepth: Panorama Depth Estimation from Spherical Domain [17.98608948955211]
This paper proposes SphereDepth, a novel panorama depth estimation method.
It predicts the depth directly on the spherical mesh without projection preprocessing.
It achieves comparable results with the state-of-the-art methods of panorama depth estimation.
arXiv Detail & Related papers (2022-08-29T16:50:19Z) - 360MonoDepth: High-Resolution 360° Monocular Depth Estimation [15.65828728205071]
Monocular depth estimation remains a challenge for 360° data.
Current CNN-based methods do not support such high resolutions due to limited GPU memory.
We propose a flexible framework for monocular depth estimation from high-resolution 360° images using tangent images.
arXiv Detail & Related papers (2021-11-30T18:57:29Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Boosting Monocular Depth Estimation Models to High-Resolution via
Content-Adaptive Multi-Resolution Merging [14.279471205248534]
We show how a consistent scene structure and high-frequency details affect depth estimation performance.
We present a double estimation method that improves the whole-image depth estimation and a patch selection method that adds local details.
We demonstrate that by merging estimations at different resolutions with changing context, we can generate multi-megapixel depth maps with a high level of detail.
arXiv Detail & Related papers (2021-05-28T17:55:15Z) - Towards Unpaired Depth Enhancement and Super-Resolution in the Wild [121.96527719530305]
State-of-the-art data-driven methods of depth map super-resolution rely on registered pairs of low- and high-resolution depth maps of the same scenes.
We consider an approach to depth map enhancement based on learning from unpaired data.
arXiv Detail & Related papers (2021-05-25T16:19:16Z) - Depth image denoising using nuclear norm and learning graph model [107.51199787840066]
Group-based image restoration methods are more effective in gathering the similarity among patches.
For each patch, we find and group the most similar patches within a searching window.
The proposed method is superior to other current state-of-the-art denoising methods in both subjective and objective criterion.
arXiv Detail & Related papers (2020-08-09T15:12:16Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.