Non-Homogeneous Haze Removal via Artificial Scene Prior and
Bidimensional Graph Reasoning
- URL: http://arxiv.org/abs/2104.01888v1
- Date: Mon, 5 Apr 2021 13:04:44 GMT
- Title: Non-Homogeneous Haze Removal via Artificial Scene Prior and
Bidimensional Graph Reasoning
- Authors: Haoran Wei, Qingbo Wu, Hui Li, King Ngi Ngan, Hongliang Li, Fanman
Meng, and Linfeng Xu
- Abstract summary: We propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning.
Our method achieves superior performance over many state-of-the-art algorithms for both the single image dehazing and hazy image understanding tasks.
- Score: 52.07698484363237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the lack of natural scene and haze prior information, it is
highly challenging to completely remove the haze from a single image without distorting
its visual content. Fortunately, the real-world haze usually presents
non-homogeneous distribution, which provides us with many valuable clues in
partial well-preserved regions. In this paper, we propose a Non-Homogeneous
Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph
reasoning. Firstly, we employ the gamma correction iteratively to simulate
artificial multiple shots under different exposure conditions, whose haze
degrees are different and enrich the underlying scene prior. Secondly, beyond
utilizing the local neighboring relationship, we build a bidimensional graph
reasoning module to conduct non-local filtering in the spatial and channel
dimensions of feature maps, which models their long-range dependency and
propagates the natural scene prior between the well-preserved nodes and the
nodes contaminated by haze. We evaluate our method on different benchmark
datasets. The results demonstrate that our method achieves superior performance
over many state-of-the-art algorithms for both the single image dehazing and
hazy image understanding tasks.
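The first step of the abstract, simulating artificial multiple shots by applying gamma correction iteratively, can be sketched as follows. This is a minimal illustration, not the NHRN implementation: the gamma value, the number of shots, and the plain repeated-power scheme are assumptions made for this sketch.

```python
import numpy as np

def artificial_exposures(image, gamma=0.7, n_shots=4):
    """Simulate shots under different exposure conditions via iterated gamma correction.

    image: float array with values in [0, 1].
    Each pass applies x -> x ** gamma to the previous result; with gamma < 1
    every pass brightens the image, so the variants exhibit different haze
    degrees and enrich the underlying scene prior.
    """
    image = np.clip(image.astype(np.float64), 0.0, 1.0)
    shots = [image]
    for _ in range(n_shots - 1):
        shots.append(shots[-1] ** gamma)  # iterate the gamma correction
    return shots
```

In a network like the one described, these variants would be stacked as extra input channels so the dehazing model sees the same scene under several synthetic exposures.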
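The second step, non-local filtering over the spatial and channel dimensions of feature maps, can be illustrated with a generic non-local affinity operation. This is a sketch of the general idea, not NHRN's actual graph reasoning module; the softmax dot-product affinity and the flattened (N, C) node layout are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_filter(feat):
    """Non-local filtering over the spatial nodes of a feature map.

    feat: (N, C) array, N flattened spatial positions with C channels.
    A dense affinity matrix mixes every pair of nodes, modelling long-range
    dependency, so scene prior from well-preserved nodes can propagate to
    nodes contaminated by haze. Applying the same operation to feat.T gives
    the channel-dimension branch of a bidimensional scheme.
    """
    affinity = softmax(feat @ feat.T)  # (N, N) pairwise weights, rows sum to 1
    return affinity @ feat             # each node becomes a weighted mix of all nodes
```

Unlike a local convolution, which only aggregates a node's spatial neighbours, every output node here depends on all input nodes at once.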
Related papers
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z) - DHFormer: A Vision Transformer-Based Attention Module for Image Dehazing [0.0]
Images acquired in hazy conditions suffer from haze-induced degradations.
Prior-based and learning-based approaches have been proposed to mitigate the effect of haze and generate haze-free images.
In this paper, a method that uses residual learning and vision transformers in an attention module is proposed.
arXiv Detail & Related papers (2023-12-15T17:05:32Z) - Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering [63.24476194987721]
Inverse rendering, the process of inferring scene properties from images, is a challenging inverse problem.
Most existing solutions incorporate priors into the inverse-rendering pipeline to encourage plausible solutions.
We propose a novel scheme that integrates a denoising probabilistic diffusion model pre-trained on natural illumination maps into an optimization framework.
arXiv Detail & Related papers (2023-09-30T12:39:28Z) - Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z) - Aerial Diffusion: Text Guided Ground-to-Aerial View Translation from a
Single Image using Diffusion Models [72.76182801289497]
We present a novel method, Aerial Diffusion, for generating aerial views from a single ground-view image using text guidance.
We address two main challenges arising from the domain gap between the ground view and the aerial view.
Aerial Diffusion is the first approach that performs ground-to-aerial translation in an unsupervised manner.
arXiv Detail & Related papers (2023-03-15T22:26:09Z) - Dual-Scale Single Image Dehazing Via Neural Augmentation [29.019279446792623]
A novel single image dehazing algorithm is introduced by combining model-based and data-driven approaches.
Results indicate that the proposed algorithm can remove haze well from real-world and synthetic hazy images.
arXiv Detail & Related papers (2022-09-13T11:56:03Z) - Unsupervised Neural Rendering for Image Hazing [31.108654945661705]
Image hazing aims to render a hazy image from a given clean one, which could be applied to a variety of practical applications such as gaming, filming, photographic filtering, and image dehazing.
We propose a neural rendering method for image hazing, dubbed HazeGEN. Specifically, HazeGEN is a knowledge-driven neural network that estimates the transmission map by leveraging a new prior.
To adaptively learn the airlight, we build a neural module based on another new prior, i.e., the rendered hazy image and the exemplar are similar in the airlight distribution.
arXiv Detail & Related papers (2021-07-14T13:15:14Z) - NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and
Haze-Free Images [95.00583228823446]
NH-HAZE is a non-homogeneous realistic dataset with pairs of real hazy and corresponding haze-free images.
This work presents an objective assessment of several state-of-the-art single image dehazing methods evaluated on the NH-HAZE dataset.
arXiv Detail & Related papers (2020-05-07T15:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.