Adaptive Weighted Guided Image Filtering for Depth Enhancement in
Shape-From-Focus
- URL: http://arxiv.org/abs/2201.06823v1
- Date: Tue, 18 Jan 2022 08:52:26 GMT
- Title: Adaptive Weighted Guided Image Filtering for Depth Enhancement in
Shape-From-Focus
- Authors: Yuwen Li, Zhengguo Li, Chaobing Zheng and Shiqian Wu
- Abstract summary: Existing shape from focus (SFF) techniques cannot preserve depth edges and fine structural details from a sequence of multi-focus images.
A novel depth enhancement algorithm for SFF based on adaptive weighted guided image filtering (AWGIF) is proposed.
- Score: 28.82811159799952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing shape from focus (SFF) techniques cannot preserve depth
edges and fine structural details from a sequence of multi-focus images.
Moreover, noise in the sequence of multi-focus images affects the accuracy of
the depth map. In this paper, a novel depth enhancement algorithm for SFF based
on adaptive weighted guided image filtering (AWGIF) is proposed to address
these issues. The AWGIF is applied to decompose an initial depth map, estimated
by traditional SFF, into a base layer and a detail layer. To preserve edges
accurately in the refined depth map, the guidance image is constructed from the
multi-focus image sequence, and the coefficients of the AWGIF are used to
suppress noise while enhancing the fine depth details. Experiments on real and
synthetic objects demonstrate the superiority of the proposed algorithm over
existing methods in terms of noise robustness and the ability to preserve depth
edges and fine structural details.
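The abstract outlines the pipeline: filter the initial SFF depth map against a
guidance image built from the multi-focus stack, treat the filter output as the
base layer, and re-inject the remaining detail layer. As a rough illustration,
the sketch below performs this decomposition with the plain (unweighted) guided
filter of He et al.; the adaptive coefficients that let AWGIF suppress noise
while boosting detail are not specified in the abstract, so the guidance
construction, the `refine_depth` helper, and the fixed `gain` parameter are
illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Plain guided filter: smooth p edge-preservingly, guided by image I."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)       # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def refine_depth(depth, focus_stack, radius=8, eps=1e-3, gain=1.5):
    """Base/detail decomposition of an initial SFF depth map (illustrative)."""
    depth = depth.astype(np.float64)
    # Assumed guidance: per-pixel maximum over the multi-focus stack as a
    # crude all-in-focus proxy; the paper constructs its guidance differently.
    guidance = focus_stack.max(axis=0).astype(np.float64)
    base = guided_filter(guidance, depth, radius, eps)   # smooth base layer
    detail = depth - base            # fine structures plus residual noise
    # AWGIF adapts its coefficients to amplify detail while attenuating noise;
    # a fixed gain, as used here, amplifies both and is only a placeholder.
    return base + gain * detail

# Usage: for a stack of shape (N, H, W) and an initial depth map of shape
# (H, W), refine_depth(depth, stack) returns a detail-enhanced depth map.
```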
Related papers
- Depth-guided Texture Diffusion for Image Semantic Segmentation [47.46257473475867]
We introduce a Depth-guided Texture Diffusion approach that effectively tackles the disparity between depth maps and RGB images in semantic segmentation.
Our method extracts low-level features from edges and textures to create a texture image, which is diffused across the depth map to enrich it.
By integrating this enriched depth map with the original RGB image into a joint feature embedding, our method effectively bridges the gap between the depth map and the image.
arXiv Detail & Related papers (2024-08-17T04:55:03Z)
- Deep Phase Coded Image Prior [34.84063452418995]
Phase-coded imaging is a method to tackle tasks such as passive depth estimation and extended depth of field.
Most of the current deep learning-based methods for depth estimation or all-in-focus imaging require a training dataset with high-quality depth maps.
We propose a new method named "Deep Phase Coded Image Prior" (DPCIP) for jointly recovering the depth map and all-in-focus image.
arXiv Detail & Related papers (2024-04-05T05:58:40Z)
- The Devil is in the Edges: Monocular Depth Estimation with Edge-aware Consistency Fusion [30.03608191629917]
This paper presents a novel monocular depth estimation method, named ECFNet, for estimating high-quality monocular depth with clear edges and valid overall structure from a single RGB image.
We investigate the key factor that affects edge depth estimation in monocular depth estimation (MDE) networks, and conclude that the edge information itself plays a critical role in predicting depth details.
arXiv Detail & Related papers (2024-03-30T13:58:19Z)
- Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and all-in-focus (AIF) image ground truth and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z)
- End-to-end Learning for Joint Depth and Image Reconstruction from Diffracted Rotation [10.896567381206715]
We propose a novel end-to-end learning approach for depth from diffracted rotation.
Our approach requires a significantly less complex model and less training data, yet it is superior to existing methods in the task of monocular depth estimation.
arXiv Detail & Related papers (2022-04-14T16:14:37Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network that takes noisy raw indirect time-of-flight (I-ToF) signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Progressive Depth Learning for Single Image Dehazing [56.71963910162241]
Existing dehazing methods often ignore depth cues and fail in distant areas where heavier haze disturbs visibility.
We propose a deep end-to-end model that iteratively estimates image depths and transmission maps.
Our approach benefits from explicitly modeling the inner relationship of image depth and transmission map, which is especially effective for distant hazy areas.
arXiv Detail & Related papers (2021-02-21T05:24:18Z)
- Robust Consistent Video Depth Estimation [65.53308117778361]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details.
In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations.
arXiv Detail & Related papers (2020-12-10T18:59:48Z)
- Depth Completion Using a View-constrained Deep Prior [73.21559000917554]
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images. Given color images and noisy, incomplete target depth maps, we reconstruct a restored depth map by using the CNN structure as a prior.
arXiv Detail & Related papers (2020-01-21T21:56:01Z)
- Learning Wavefront Coding for Extended Depth of Field Imaging [4.199844472131922]
Extended depth of field (EDoF) imaging is a challenging ill-posed problem.
We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element.
We demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
arXiv Detail & Related papers (2019-12-31T17:00:09Z)