High-Resolution Depth Estimation for 360-degree Panoramas through
Perspective and Panoramic Depth Images Registration
- URL: http://arxiv.org/abs/2210.10414v2
- Date: Thu, 20 Oct 2022 00:37:25 GMT
- Title: High-Resolution Depth Estimation for 360-degree Panoramas through
Perspective and Panoramic Depth Images Registration
- Authors: Chi-Han Peng and Jiayao Zhang
- Abstract summary: We propose a novel approach to compute high-resolution (2048x1024 and higher) depths for panoramas.
Our method generates qualitatively better results than existing panorama-based methods, and further outperforms them quantitatively on datasets unseen by these methods.
- Score: 3.4583104874165804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel approach to compute high-resolution (2048x1024 and higher)
depths for panoramas that is significantly faster and both qualitatively and
quantitatively more accurate than the current state-of-the-art method
(360MonoDepth). As traditional neural network-based methods have limitations in
the output image sizes (up to 1024x512) due to GPU memory constraints, both
360MonoDepth and our method rely on stitching multiple perspective disparity or
depth images to produce a unified panoramic depth map. However, to achieve
globally consistent stitching, 360MonoDepth relies on solving extensive
disparity map alignment and Poisson-based blending problems, leading to high
computation time. Instead, we propose to use an existing panoramic depth map
(computed in real-time by any panorama-based method) as the common target for
the individual perspective depth maps to register to. This key idea makes
producing globally consistent stitching results a straightforward task.
Our experiments show that our method generates qualitatively better results
than existing panorama-based methods, and further outperforms them
quantitatively on datasets unseen by these methods.
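The registration step described in the abstract can be sketched as a per-view fit against the shared panoramic target. The paper does not spell out the alignment model in the abstract; a least-squares scale-and-shift fit over the overlapping valid pixels is a common choice and is assumed here:

```python
import numpy as np

def register_depth(perspective, target, mask=None):
    """Fit a scale s and shift b so that s*perspective + b best matches
    the target panoramic depth over the valid region (least squares)."""
    if mask is None:
        mask = np.isfinite(perspective) & np.isfinite(target)
    p = perspective[mask].ravel()
    t = target[mask].ravel()
    # Solve min_{s,b} ||s*p + b - t||^2 via numpy's least-squares solver.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    return s * perspective + b, (s, b)

# Toy check: a depth map off by scale 2 and shift 1 is recovered exactly.
rng = np.random.default_rng(0)
target = rng.uniform(1.0, 5.0, size=(8, 8))
perspective = (target - 1.0) / 2.0
aligned, (s, b) = register_depth(perspective, target)
```

Because every perspective map registers to the same panoramic target, no pairwise alignment between perspective views is needed, which is the source of the speed-up over 360MonoDepth's global alignment and Poisson blending.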
Related papers
- Era3D: High-Resolution Multiview Diffusion using Efficient Row-wise Attention [87.02613021058484]
We introduce Era3D, a novel multiview diffusion method that generates high-resolution multiview images from a single-view image.
Era3D generates high-quality multiview images at up to 512x512 resolution while reducing computation complexity by 12x.
arXiv Detail & Related papers (2024-05-19T17:13:16Z)
- Generative Powers of Ten [60.6740997942711]
We present a method that uses a text-to-image model to generate consistent content across multiple image scales.
We achieve this through a joint multi-scale diffusion sampling approach.
Our method enables deeper levels of zoom than traditional super-resolution methods.
arXiv Detail & Related papers (2023-12-04T18:59:25Z)
- Calibrating Panoramic Depth Estimation for Practical Localization and Mapping [20.621442016969976]
The absolute depth values of surrounding environments provide crucial cues for various assistive technologies, such as localization, navigation, and 3D structure estimation.
We propose that accurate depth estimated from panoramic images can serve as a powerful and light-weight input for a wide range of downstream tasks requiring 3D information.
arXiv Detail & Related papers (2023-08-27T04:50:05Z)
- SphereDepth: Panorama Depth Estimation from Spherical Domain [17.98608948955211]
This paper proposes SphereDepth, a novel panorama depth estimation method.
It predicts the depth directly on the spherical mesh without projection preprocessing.
It achieves comparable results with the state-of-the-art methods of panorama depth estimation.
arXiv Detail & Related papers (2022-08-29T16:50:19Z)
- 360MonoDepth: High-Resolution 360° Monocular Depth Estimation [15.65828728205071]
Monocular depth estimation remains a challenge for 360° data.
Current CNN-based methods do not support such high resolutions due to limited GPU memory.
We propose a flexible framework for monocular depth estimation from high-resolution 360° images using tangent images.
arXiv Detail & Related papers (2021-11-30T18:57:29Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging [14.279471205248534]
We show how a consistent scene structure and high-frequency details affect depth estimation performance.
We present a double estimation method that improves the whole-image depth estimation and a patch selection method that adds local details.
We demonstrate that by merging estimations at different resolutions with changing context, we can generate multi-megapixel depth maps with a high level of detail.
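The patch-merging idea above can be roughly illustrated as follows. The actual method uses learned depth estimations and content-adaptive patch selection; the scale-and-shift alignment and border feathering below are simplified assumptions:

```python
import numpy as np

def merge_patch(base, patch, y, x, feather=2):
    """Blend a locally estimated depth patch into a whole-image base
    estimate: align the patch to the base via a least-squares
    scale/shift, then feather it toward the patch border."""
    h, w = patch.shape
    region = base[y:y + h, x:x + w]
    A = np.stack([patch.ravel(), np.ones(patch.size)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, region.ravel(), rcond=None)
    aligned = s * patch + b
    # Weight is 1 in the patch interior and ramps to 0 at the border,
    # so seams between neighboring patches stay smooth.
    wy = np.minimum(np.arange(h), np.arange(h)[::-1])
    wx = np.minimum(np.arange(w), np.arange(w)[::-1])
    weight = np.clip(np.minimum.outer(wy, wx) / feather, 0.0, 1.0)
    out = base.copy()
    out[y:y + h, x:x + w] = weight * aligned + (1 - weight) * region
    return out

# Toy check: a patch that is an affine re-scaling of the base region
# merges back exactly.
base = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
patch = (base[4:10, 4:10] - 0.1) / 2.0
merged = merge_patch(base, patch, 4, 4)
```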
arXiv Detail & Related papers (2021-05-28T17:55:15Z)
- Towards Unpaired Depth Enhancement and Super-Resolution in the Wild [121.96527719530305]
State-of-the-art data-driven methods of depth map super-resolution rely on registered pairs of low- and high-resolution depth maps of the same scenes.
We consider an approach to depth map enhancement based on learning from unpaired data.
arXiv Detail & Related papers (2021-05-25T16:19:16Z)
- Fast and Accurate Optical Flow based Depth Map Estimation from Light Fields [22.116100469958436]
We propose a depth estimation method from light fields based on existing optical flow estimation methods.
The different disparity map estimates that we obtain are very consistent, which allows a fast and simple aggregation step to create a single disparity map.
Since the disparity map estimates are consistent, we can also create a depth map from each disparity estimate, and then aggregate the different depth maps in the 3D space to create a single dense depth map.
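The disparity-to-depth conversion uses the standard stereo relation depth = f*B/d. A minimal sketch of converting consistent disparity estimates and aggregating them (simple averaging stands in for the paper's aggregation in 3D space; the focal length and baselines are illustrative values):

```python
import numpy as np

def disparities_to_depth(disparities, focal, baselines):
    """Convert each disparity map to depth via depth = f * B / d,
    then aggregate the per-view depth maps by averaging."""
    depths = [focal * b / np.maximum(d, 1e-6)
              for d, b in zip(disparities, baselines)]
    return np.mean(depths, axis=0)

focal = 100.0
true_depth = np.full((4, 4), 2.0)
baselines = [0.1, 0.2]
# Consistent disparity estimates for two baselines: d = f * B / Z.
disps = [focal * b / true_depth for b in baselines]
depth = disparities_to_depth(disps, focal, baselines)
```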
arXiv Detail & Related papers (2020-08-11T12:53:31Z)
- Depth image denoising using nuclear norm and learning graph model [107.51199787840066]
Group-based image restoration methods are effective at exploiting the similarity among patches.
For each patch, we find and group the most similar patches within a searching window.
The proposed method is superior to other current state-of-the-art denoising methods by both subjective and objective criteria.
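Finding and grouping the most similar patches within a search window is classic block matching; a minimal sketch with sum-of-squared-differences similarity (the patch size, window size, and group size below are illustrative, not the paper's settings):

```python
import numpy as np

def group_similar_patches(img, y, x, psize=4, search=8, k=3):
    """For the reference patch at (y, x), scan the surrounding search
    window and return the k most similar patches by SSD as
    (ssd, y, x) tuples, best first."""
    ref = img[y:y + psize, x:x + psize]
    y0, y1 = max(0, y - search), min(img.shape[0] - psize, y + search)
    x0, x1 = max(0, x - search), min(img.shape[1] - psize, x + search)
    candidates = []
    for yy in range(y0, y1 + 1):
        for xx in range(x0, x1 + 1):
            patch = img[yy:yy + psize, xx:xx + psize]
            ssd = float(np.sum((patch - ref) ** 2))
            candidates.append((ssd, yy, xx))
    candidates.sort(key=lambda c: c[0])
    return candidates[:k]

# Toy check on a column-periodic image: exact repeats of the reference
# patch appear in the group with zero SSD.
img = np.tile(np.arange(4.0), (16, 4))
group = group_similar_patches(img, 6, 6)
```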
arXiv Detail & Related papers (2020-08-09T15:12:16Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.