Central Angle Optimization for 360-degree Holographic 3D Content
- URL: http://arxiv.org/abs/2311.05878v1
- Date: Fri, 10 Nov 2023 05:30:43 GMT
- Title: Central Angle Optimization for 360-degree Holographic 3D Content
- Authors: Hakdong Kim, Minsung Yoon, and Cheongwon Kim
- Abstract summary: In this study, we propose a method to find an optimal central angle in deep learning-based depth map estimation.
We experimentally demonstrate and discuss the relationship between the central angle and the quality of digital holographic content.
- Score: 3.072340427031969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we propose a method to find an optimal central angle in deep
learning-based depth map estimation used to produce realistic holographic
content. The acquisition of RGB-depth map images as detailed as possible must
be performed to generate holograms of high quality, despite the high
computational cost. Therefore, we introduce a novel pipeline designed to
analyze various values of central angles between adjacent camera viewpoints
equidistant from the origin of an object-centered environment. Then we propose
the optimal central angle to generate high-quality holographic content. The
proposed pipeline comprises key steps such as comparing estimated depth maps
and comparing reconstructed CGHs (Computer-Generated Holograms) from RGB images
and estimated depth maps. We experimentally demonstrate and discuss the
relationship between the central angle and the quality of digital holographic
content.
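The abstract describes the pipeline only at a high level, so the following Python sketch is merely an illustration of the central-angle sweep it implies, not the authors' implementation: cameras are placed on a circle around the object at a candidate central angle, depth is estimated per viewpoint, and each angle is scored by comparing estimated depth maps against references (the CGH comparison step would substitute a hologram-reconstruction metric). The helpers `capture_rgbd` and `estimate_depth` are hypothetical placeholders.

```python
import numpy as np

def camera_positions(central_angle_deg, radius=1.0):
    """Place cameras on a circle around the object origin so that adjacent
    viewpoints are separated by the given central angle (in degrees)."""
    n_views = int(round(360.0 / central_angle_deg))
    angles = np.deg2rad(central_angle_deg) * np.arange(n_views)
    # All cameras are equidistant from the origin and look toward it.
    return np.stack([radius * np.cos(angles),
                     radius * np.sin(angles),
                     np.zeros(n_views)], axis=1)

def depth_psnr(estimated, reference, max_depth=1.0):
    """PSNR between two depth maps normalized to [0, max_depth]."""
    mse = np.mean((estimated - reference) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_depth ** 2 / mse)

def sweep_central_angles(candidate_angles_deg, capture_rgbd, estimate_depth):
    """Score each candidate central angle by the mean PSNR between estimated
    and reference depth maps over all viewpoints, and return the best angle."""
    scores = {}
    for angle in candidate_angles_deg:
        per_view = []
        for cam_pos in camera_positions(angle):
            rgb, ref_depth = capture_rgbd(cam_pos)   # reference RGB-D for this viewpoint
            per_view.append(depth_psnr(estimate_depth(rgb), ref_depth))
        scores[angle] = float(np.mean(per_view))
    best_angle = max(scores, key=scores.get)
    return best_angle, scores
```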
Related papers
- Adaptive Stereo Depth Estimation with Multi-Spectral Images Across All Lighting Conditions [58.88917836512819]
We propose a novel framework incorporating stereo depth estimation to enforce accurate geometric constraints.
To mitigate the effects of poor lighting on stereo matching, we introduce Degradation Masking.
Our method achieves state-of-the-art (SOTA) performance on the Multi-Spectral Stereo (MS2) dataset.
arXiv Detail & Related papers (2024-11-06T03:30:46Z)
- Decoupling Fine Detail and Global Geometry for Compressed Depth Map Super-Resolution [55.9977636042469]
Bit-depth compression produces a uniform depth representation in regions with subtle variations, hindering the recovery of detailed information.
Densely distributed random noise reduces the accuracy of estimating the global geometric structure of the scene.
We propose a novel framework, termed geometry-decoupled network (GDNet), for compressed depth map super-resolution.
arXiv Detail & Related papers (2024-11-05T16:37:30Z)
- Refinement of Monocular Depth Maps via Multi-View Differentiable Rendering [4.717325308876748]
We present a novel approach to generate view consistent and detailed depth maps from a number of posed images.
We leverage advances in monocular depth estimation, which generate topologically complete, but metrically inaccurate depth maps.
Our method is able to generate dense, detailed, high-quality depth maps, even in challenging indoor scenarios, and outperforms state-of-the-art depth reconstruction approaches.
arXiv Detail & Related papers (2024-10-04T18:50:28Z)
- Depth-guided Texture Diffusion for Image Semantic Segmentation [47.46257473475867]
We introduce a Depth-guided Texture Diffusion approach that effectively tackles the challenge of exploiting depth information for image semantic segmentation.
Our method extracts low-level features from edges and textures to create a texture image.
By integrating this enriched depth map with the original RGB image into a joint feature embedding, our method effectively bridges the disparity between the depth map and the image.
arXiv Detail & Related papers (2024-08-17T04:55:03Z)
- Multi-Camera Collaborative Depth Prediction via Consistent Structure Estimation [75.99435808648784]
We propose a novel multi-camera collaborative depth prediction method.
It does not require large overlapping areas while maintaining structure consistency between cameras.
Experimental results on DDAD and NuScenes datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2022-10-05T03:44:34Z)
- Depth-SIMS: Semi-Parametric Image and Depth Synthesis [23.700034054124604]
We present a method that generates RGB canvases with well-aligned segmentation maps and sparse depth maps, coupled with an in-painting network that transforms the RGB canvases into high-quality RGB images.
We benchmark our method in terms of structural alignment and image quality, showing an increase in mIoU over SOTA by 3.7 percentage points and a highly competitive FID.
We analyse the quality of the generated data as training data for semantic segmentation and depth completion, and show that our approach is more suited for this purpose than other methods.
arXiv Detail & Related papers (2022-03-07T13:58:32Z)
- Dense Depth Estimation from Multiple 360-degree Images Using Virtual Depth [4.984601297028257]
The proposed pipeline leverages a spherical camera model that compensates for radial distortion in 360-degree images.
We propose an effective dense depth estimation method by setting virtual depth and minimizing photometric reprojection error (a minimal sketch of such an error term follows this entry).
The experimental results verify that the proposed pipeline improves estimation accuracy compared to current state-of-the-art dense depth estimation methods.
arXiv Detail & Related papers (2021-12-30T05:27:28Z)
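The entry above only names the objective, so here is a minimal reminder of what a photometric reprojection error is, written for an ordinary pinhole camera; the paper itself works with a spherical camera model and virtual depth hypotheses, which are not reproduced here, and all names are illustrative.

```python
import numpy as np

def reproject(p_ref, depth, K, R, t):
    """Back-project pixel p_ref = (u, v) from the reference view using its depth,
    transform it into the source camera frame, and project it to a source pixel."""
    u, v = p_ref
    X_ref = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    X_src = R @ X_ref + t
    x = K @ X_src
    return x[:2] / x[2]

def photometric_reprojection_error(I_ref, I_src, p_ref, depth, K, R, t):
    """Absolute intensity difference between a reference pixel and its reprojection
    in the source image (nearest-neighbour lookup; no interpolation)."""
    u_src, v_src = np.round(reproject(p_ref, depth, K, R, t)).astype(int)
    if not (0 <= v_src < I_src.shape[0] and 0 <= u_src < I_src.shape[1]):
        return None  # the depth hypothesis reprojects outside the source image
    u_ref, v_ref = p_ref
    return abs(float(I_ref[v_ref, u_ref]) - float(I_src[v_src, u_src]))
```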
- Deep Learning-based High-precision Depth Map Estimation from Missing Viewpoints for 360 Degree Digital Holography [2.174116094271494]
We propose a novel convolutional neural network model to extract highly precise depth maps from missing viewpoints.
The proposed model, called HDD Net, uses MSE as its loss function to improve depth map estimation performance (a minimal sketch of this loss follows the entry).
We demonstrate experimental results that test the quality of the estimated depth maps by directly reconstructing holographic 3D image scenes.
arXiv Detail & Related papers (2021-03-09T00:38:23Z)
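The summary above only states that HDD Net is trained with an MSE loss; below is a minimal PyTorch sketch of such a loss for depth maps. The `valid_mask` argument is an assumption for ignoring pixels without ground-truth depth, not something the abstract specifies.

```python
import torch.nn.functional as F

def depth_mse_loss(pred_depth, gt_depth, valid_mask=None):
    """Mean-squared error between predicted and ground-truth depth maps,
    optionally restricted to pixels with valid ground truth."""
    if valid_mask is not None:
        pred_depth, gt_depth = pred_depth[valid_mask], gt_depth[valid_mask]
    return F.mse_loss(pred_depth, gt_depth)
```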
- Robust Consistent Video Depth Estimation [65.53308117778361]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details.
In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations.
arXiv Detail & Related papers (2020-12-10T18:59:48Z)
- A Method of Generating Measurable Panoramic Image for Indoor Mobile Measurement System [36.47697710426005]
This paper designs a technical route to generate high-quality panoramic images with depth information.
For the fusion of 3D points and image data, we adopt a parameter self-adaptive framework to produce a dense 2D depth map.
For image stitching, the optimal seamline for the overlapping area is searched using a graph-cuts-based method.
arXiv Detail & Related papers (2020-10-27T13:12:02Z)
- Depth Completion Using a View-constrained Deep Prior [73.21559000917554]
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images. Given color images and noisy, incomplete target depth maps, we reconstruct a restored depth map by using the CNN structure itself as a prior (see the sketch after this entry).
arXiv Detail & Related papers (2020-01-21T21:56:01Z)
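As a concrete illustration of the deep-image-prior idea applied to depth, here is a toy sketch: a randomly initialized CNN is fitted so that its output matches the observed depth at valid pixels only, and the network structure itself fills in the missing regions. The tiny network, the RGB-input conditioning, and the hyperparameters are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyDIP(nn.Module):
    """A small convolutional network standing in for the DIP model; the point
    here is the optimization loop, not the architecture."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def dip_depth_completion(rgb, target_depth, valid_mask, steps=500, lr=1e-3):
    """Fit a randomly initialized CNN so its output agrees with the noisy,
    incomplete target depth at valid pixels; unobserved regions are filled in
    by the prior induced by the network structure."""
    model = TinyDIP()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = model(rgb)                                  # shape (1, 1, H, W)
        loss = ((pred - target_depth)[valid_mask] ** 2).mean()
        loss.backward()
        opt.step()
    return model(rgb).detach()
```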