Perceiving and Modeling Density is All You Need for Image Dehazing
- URL: http://arxiv.org/abs/2111.09733v1
- Date: Thu, 18 Nov 2021 14:47:41 GMT
- Title: Perceiving and Modeling Density is All You Need for Image Dehazing
- Authors: Tian Ye, Mingchao Jiang, Yunchen Zhang, Liang Chen, Erkang Chen, Peng Chen, Zhiyong Lu
- Abstract summary: In the real world, the degradation of images taken under haze can be quite complex.
Recent methods adopt deep neural networks to recover clean scenes from hazy images directly.
We propose to solve the problem of modeling real-world haze degradation by perceiving and modeling density for uneven haze distribution.
- Score: 10.864016211811025
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In the real world, the degradation of images taken under haze can be quite
complex, where the spatial distribution of haze is varied from image to image.
Recent methods adopt deep neural networks to recover clean scenes from hazy
images directly. However, because the haze captured in real scenes varies from
image to image while the degradation parameters of current networks are fixed,
the generalization ability of recent dehazing methods on real-world hazy images
is not ideal. To address the problem of modeling real-world haze degradation,
we propose to perceive and model density for uneven haze distribution. To this
end, we propose a novel Separable Hybrid Attention (SHA) module that encodes
haze density by capturing features along orthogonal directions. Moreover, we
propose a density map that models the uneven distribution of haze explicitly
and generates positional encoding in a semi-supervised way. Together, this
density perceiving and modeling effectively captures the unevenly distributed
degeneration at the feature level. Through a suitable combination of SHA and
the density map, we design a novel dehazing network architecture, which
achieves a good complexity-performance trade-off. Extensive experiments on two
large-scale datasets demonstrate
that our method surpasses all state-of-the-art approaches by a large margin
both quantitatively and qualitatively, boosting the best published PSNR metric
from 28.53 dB to 33.49 dB on the Haze4k test dataset and from 37.17 dB to 38.41
dB on the SOTS indoor test dataset.
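The abstract does not spell out how SHA or the density map are implemented, so the following is a minimal PyTorch sketch of one plausible reading, assuming a coordinate-attention-style design: features are average-pooled along the two orthogonal spatial axes to produce per-direction channel attention, and a single-channel spatial gate stands in for the density-map modulation. All names, shapes, and the reduction ratio are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class SeparableHybridAttentionSketch(nn.Module):
    """Illustrative stand-in for SHA: directional pooling + a spatial gate."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Pool along each spatial axis independently so that 1-D positional
        # cues (where along H, where along W) survive the aggregation.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # -> (B, C, 1, W)
        hidden = max(channels // reduction, 8)
        self.reduce = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.expand_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.expand_w = nn.Conv2d(hidden, channels, kernel_size=1)
        # Single-channel gate standing in for an explicit density map.
        self.density_gate = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feat_h = self.pool_h(x)                      # (B, C, H, 1)
        feat_w = self.pool_w(x).transpose(2, 3)      # (B, C, W, 1)
        joint = self.act(self.reduce(torch.cat([feat_h, feat_w], dim=2)))
        feat_h, feat_w = torch.split(joint, [h, w], dim=2)
        attn_h = torch.sigmoid(self.expand_h(feat_h))                  # (B, C, H, 1)
        attn_w = torch.sigmoid(self.expand_w(feat_w.transpose(2, 3)))  # (B, C, 1, W)
        x = x * attn_h * attn_w          # attention along orthogonal directions
        gate = torch.sigmoid(self.density_gate(x))   # (B, 1, H, W), density-like
        return x * gate


# Usage: modulate a feature map from a dehazing backbone.
features = torch.randn(2, 64, 128, 128)
out = SeparableHybridAttentionSketch(channels=64)(features)
print(out.shape)  # torch.Size([2, 64, 128, 128])
```

Pooling each axis separately keeps one-dimensional positional cues at linear cost, which is one way that "capturing features along orthogonal directions" can encode where haze is dense in an image.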
Related papers
- LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset [14.141433473509826]
We present LMHaze, a large-scale, high-quality real-world dataset.
LMHaze comprises paired hazy and haze-free images captured in diverse indoor and outdoor environments.
To better handle images with different haze intensities, we propose a mixture-of-experts model based on Mamba.
arXiv Detail & Related papers (2024-10-21T15:20:02Z)
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
- Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference [60.32804641276217]
We propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs.
A high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training.
We also introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets.
arXiv Detail & Related papers (2023-10-06T17:11:58Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Single-View Height Estimation with Conditional Diffusion Probabilistic Models [1.8782750537161614]
In this paper we experiment with conditional denoising diffusion probabilistic models (DDPM) for height estimation from a single remotely sensed image.
We train a generative diffusion model to learn the joint distribution of optical and DSM images as a Markov chain.
This is accomplished by minimizing a denoising score matching objective while conditioning on the source image to generate realistic high-resolution 3D surfaces.
arXiv Detail & Related papers (2023-04-26T00:37:05Z)
- Dual-Scale Single Image Dehazing Via Neural Augmentation [29.019279446792623]
A novel single image dehazing algorithm is introduced by combining model-based and data-driven approaches.
Results indicate that the proposed algorithm can remove haze well from real-world and synthetic hazy images.
arXiv Detail & Related papers (2022-09-13T11:56:03Z)
- PixelPyramids: Exact Inference Models from Lossless Image Pyramids [58.949070311990916]
PixelPyramids is a block-autoregressive approach with scale-specific representations to encode the joint distribution of image pixels.
It yields state-of-the-art results for density estimation on various image datasets, especially for high-resolution data.
For CelebA-HQ 1024 x 1024, we observe that the density estimates are improved to 44% of the baseline despite sampling speeds superior even to easily parallelizable flow-based models.
arXiv Detail & Related papers (2021-10-17T10:47:29Z)
- From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) collaborates with unlabeled real data to boost single image dehazing (a minimal sketch of the mean-teacher scheme follows this entry).
arXiv Detail & Related papers (2021-08-06T04:00:28Z)
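Since the entry above only names the mean-teacher idea, here is a hedged, minimal PyTorch sketch of the generic scheme: the teacher's weights track an exponential moving average (EMA) of the student's, and a consistency loss on unlabeled real hazy images pulls the student toward the teacher's predictions. The tiny convolution is a stand-in placeholder, not DID-Net.

```python
import copy
import torch
import torch.nn as nn


def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.999) -> None:
    # Teacher weights follow an exponential moving average of the student's.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)


student = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder dehazing net
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is never trained directly

unlabeled = torch.randn(2, 3, 32, 32)  # a batch of unlabeled real hazy images
# Consistency loss: the student should agree with the EMA teacher.
consistency = nn.functional.mse_loss(student(unlabeled), teacher(unlabeled))
consistency.backward()
ema_update(teacher, student)
```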
- Non-Homogeneous Haze Removal via Artificial Scene Prior and Bidimensional Graph Reasoning [52.07698484363237]
We propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning.
Our method achieves superior performance over many state-of-the-art algorithms for both the single image dehazing and hazy image understanding tasks.
arXiv Detail & Related papers (2021-04-05T13:04:44Z)
- Advanced Multiple Linear Regression Based Dark Channel Prior Applied on Dehazing Image and Generating Synthetic Haze [0.6875312133832078]
The authors propose a multiple linear regression haze removal model built on the widely adopted Dark Channel Prior dehazing algorithm (the classic prior is sketched after this entry).
To increase object detection accuracy in the hazy environment, the authors present an algorithm to build a synthetic hazy COCO training dataset.
arXiv Detail & Related papers (2021-03-12T03:32:08Z)
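For context, the classic Dark Channel Prior that the regression model above builds on admits a compact implementation: the dark channel is a patch-wise minimum over the per-pixel RGB minimum, and transmission is estimated as t = 1 - omega * dark(I / A). The sketch below uses the common defaults (15-pixel patches, omega = 0.95) and a simplified atmospheric-light estimate; these are assumptions, not the paper's regression-tuned settings.

```python
import numpy as np
from scipy.ndimage import minimum_filter


def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    # Per-pixel minimum over RGB, then a local minimum filter over a patch.
    return minimum_filter(img.min(axis=2), size=patch)


def estimate_transmission(hazy: np.ndarray, omega: float = 0.95,
                          patch: int = 15) -> np.ndarray:
    # Atmospheric light A: mean of the hazy pixels at the brightest 0.1%
    # of dark-channel locations (a common simplification).
    dark = dark_channel(hazy, patch)
    n_top = max(1, dark.size // 1000)
    idx = np.argpartition(dark.ravel(), -n_top)[-n_top:]
    atmo = hazy.reshape(-1, 3)[idx].mean(axis=0)
    # t(x) = 1 - omega * dark_channel(I / A)
    return 1.0 - omega * dark_channel(hazy / atmo, patch)


# Usage on a synthetic hazy image with values in [0, 1]:
hazy = np.clip(0.4 + 0.5 * np.random.rand(64, 64, 3), 0.0, 1.0)
t = estimate_transmission(hazy)
print(t.shape)  # (64, 64)
```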
- Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.