Progressive Depth Learning for Single Image Dehazing
- URL: http://arxiv.org/abs/2102.10514v1
- Date: Sun, 21 Feb 2021 05:24:18 GMT
- Title: Progressive Depth Learning for Single Image Dehazing
- Authors: Yudong Liang, Bin Wang, Jiaying Liu, Deyu Li, Sanping Zhou and Wenqi Ren
- Abstract summary: Existing dehazing methods often ignore depth cues and fail in distant areas, where heavier haze reduces visibility.
We propose a deep end-to-end model that iteratively estimates image depths and transmission maps.
Our approach benefits from explicitly modeling the inner relationship of image depth and transmission map, which is especially effective for distant hazy areas.
- Score: 56.71963910162241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The formation of a hazy image is dominated mainly by reflected scene
light and ambient airlight. Existing dehazing methods often ignore depth cues
and fail in distant areas, where heavier haze reduces visibility. However, we
note that guiding transmission estimation with depth information can remedy
the visibility loss that grows with distance. In turn, accurate transmission
estimation can facilitate depth estimation for hazy images. In this paper, a
deep end-to-end model that iteratively estimates image depths and transmission
maps is proposed to perform effective depth prediction for hazy images and to
improve dehazing performance with the guidance of depth information. The image
depth and transmission map are progressively refined to better restore the
dehazed image. Our approach benefits from explicitly modeling the inner
relationship between image depth and transmission map, which is especially
effective for distant hazy areas. Extensive results on benchmarks demonstrate
that our proposed network performs favorably against state-of-the-art dehazing
methods in terms of both depth estimation and haze removal.
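The depth-transmission relationship the abstract relies on is the standard atmospheric scattering model, in which transmission decays exponentially with scene depth. A minimal NumPy sketch of that model (the scattering coefficient `beta`, airlight `A`, and all function names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    """Transmission t(x) = exp(-beta * d(x)): haze thickens with depth."""
    return np.exp(-beta * depth)

def hazy_from_clear(J, depth, A=0.9, beta=1.0):
    """Atmospheric scattering model: I(x) = J(x) t(x) + A (1 - t(x))."""
    t = transmission_from_depth(depth, beta)
    return J * t + A * (1.0 - t)

def dehaze(I, t, A=0.9, t_min=0.1):
    """Invert the model: J(x) = (I(x) - A) / max(t(x), t_min) + A."""
    return (I - A) / np.maximum(t, t_min) + A

# Toy 1-D "scene": uniform brightness 0.5, depth growing with distance.
J = np.full(5, 0.5)
depth = np.linspace(0.1, 2.0, 5)
I = hazy_from_clear(J, depth)       # distant pixels drift toward airlight A
J_hat = dehaze(I, transmission_from_depth(depth))
```

This illustrates why distant regions are hard: as `depth` grows, `t` shrinks and the observed intensity `I` is increasingly dominated by the airlight term, so errors in the estimated transmission (and hence in depth) have the largest effect exactly where the paper focuses.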
Related papers
- Single Image Dehazing Using Scene Depth Ordering [15.929908168136823]
We propose a depth order guided single image dehazing method, which utilizes depth order in hazy images to guide the dehazing process.
The proposed method recovers scene structure and vivid color better, with higher computational efficiency than state-of-the-art dehazing methods.
arXiv Detail & Related papers (2024-08-11T03:29:27Z)
- Depth Information Assisted Collaborative Mutual Promotion Network for Single Image Dehazing [9.195173526948123]
We propose a dual-task collaborative mutual promotion framework to achieve the dehazing of a single image.
This framework integrates depth estimation and dehazing by a dual-task interaction mechanism.
We show that the proposed method can achieve better performance than that of the state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-02T06:29:44Z)
- SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency [51.92434113232977]
This work presents an effective depth-consistency self-prompt Transformer for image dehazing.
It is motivated by the observation that the estimated depth of an image with haze residuals differs from that of its clear counterpart.
By incorporating the prompt, prompt embedding, and prompt attention into an encoder-decoder network based on VQGAN, we can achieve better perception quality.
arXiv Detail & Related papers (2023-03-13T11:47:24Z)
- Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN [60.257791714663725]
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z)
- Uncertainty Guided Depth Fusion for Spike Camera [49.41822923588663]
We propose a novel Uncertainty-Guided Depth Fusion (UGDF) framework to fuse predictions of monocular and stereo depth estimation networks for spike camera.
Our framework is motivated by the fact that stereo spike depth estimation achieves better results at close range.
In order to demonstrate the advantage of spike depth estimation over traditional camera depth estimation, we contribute a spike-depth dataset named CitySpike20K.
arXiv Detail & Related papers (2022-08-26T13:04:01Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning based end-to-end depth prediction network which takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Differentiable Diffusion for Dense Depth Estimation from Multi-view Images [31.941861222005603]
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision.
We also develop an efficient optimization routine that can simultaneously optimize the 50k+ points required for complex scene reconstruction.
arXiv Detail & Related papers (2021-06-16T16:17:34Z)
- SAFENet: Self-Supervised Monocular Depth Estimation with Semantic-Aware Feature Extraction [27.750031877854717]
We propose SAFENet that is designed to leverage semantic information to overcome the limitations of the photometric loss.
Our key idea is to exploit semantic-aware depth features that integrate the semantic and geometric knowledge.
Experiments on KITTI dataset demonstrate that our methods compete or even outperform the state-of-the-art methods.
arXiv Detail & Related papers (2020-10-06T17:22:25Z)
- Adaptive confidence thresholding for monocular depth estimation [83.06265443599521]
We propose a new approach to leverage pseudo ground truth depth maps of stereo images generated from self-supervised stereo matching methods.
The confidence map of the pseudo ground truth depth map is estimated to mitigate performance degeneration by inaccurate pseudo depth maps.
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
arXiv Detail & Related papers (2020-09-27T13:26:16Z)
- Self-Attention Dense Depth Estimation Network for Unrectified Video Sequences [6.821598757786515]
LiDAR and radar sensors are hardware solutions for real-time depth estimation.
Deep learning based self-supervised depth estimation methods have shown promising results.
We propose a self-attention based depth and ego-motion network for unrectified images.
arXiv Detail & Related papers (2020-05-28T21:53:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.