Fully Non-Homogeneous Atmospheric Scattering Modeling with Convolutional
Neural Networks for Single Image Dehazing
- URL: http://arxiv.org/abs/2108.11292v1
- Date: Wed, 25 Aug 2021 15:27:44 GMT
- Title: Fully Non-Homogeneous Atmospheric Scattering Modeling with Convolutional
Neural Networks for Single Image Dehazing
- Authors: Cong Wang, Yan Huang, Yuexian Zou and Yong Xu
- Abstract summary: Single image dehazing models (SIDM) based on atmospheric scattering model (ASM) have achieved remarkable results.
In this study, a new fully non-homogeneous atmospheric scattering model (FNH-ASM) is proposed to better model hazy images.
Two new cost-sensitive loss functions, beta-Loss and D-Loss, are developed to limit parameter bias at sensitive positions.
- Score: 42.20480089840438
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, single image dehazing models (SIDM) based on the
atmospheric scattering model (ASM) have achieved remarkable results. However,
ASM-based SIDMs degrade when dehazing real-world hazy images due to the limited
modeling ability of ASM, which assumes the atmospheric light factor (ALF) and
the angular scattering coefficient (ASC) to be constant within an image. Hazy
images taken in the real world cannot always satisfy this assumption, and this
modeling mismatch between real-world images and ASM sets an upper bound on the
dehazing performance of trained ASM-based SIDMs.
Bearing this in mind, this study proposes a new fully non-homogeneous
atmospheric scattering model (FNH-ASM) to better model hazy images captured
under complex conditions, where ALF and ASC are pixel dependent. However,
FNH-ASM brings difficulty in practical application: in FNH-ASM-based SIDMs,
parameter estimation bias at different positions leads to different distortions
in the dehazing result. Hence, to reduce the influence of parameter estimation
bias, two new cost-sensitive loss functions, beta-Loss and D-Loss, are
developed to limit parameter bias at sensitive positions, i.e., those with a
greater impact on the dehazing result. Finally, based on FNH-ASM, an end-to-end
CNN-based dehazing network, FNHD-Net, is developed and trained with beta-Loss
and D-Loss.
Experimental results demonstrate the effectiveness and superiority of the
proposed FNHD-Net on both synthetic and real-world images, and the performance
gains of our method are more pronounced in dense and heterogeneous haze scenes.
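For context, the abstract's central modification can be written against the standard atmospheric scattering model. The following is only a sketch in conventional notation inferred from the abstract; the paper's exact FNH-ASM formulation and symbols may differ:

```latex
% Standard atmospheric scattering model (ASM): a global atmospheric light A and
% a constant scattering coefficient \beta are shared by every pixel x.
%   I(x): observed hazy image, J(x): haze-free scene radiance, d(x): scene depth
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}

% FNH-ASM (per the abstract): both the atmospheric light factor and the
% scattering coefficient become pixel dependent.
I(x) = J(x)\,t(x) + A(x)\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta(x)\,d(x)}
```

Under this form, a dehazing network must estimate per-pixel maps A(x) and beta(x) rather than two scalars, which is why biased estimates at individual positions distort the restored image locally.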
Related papers
- HAD: Hierarchical Asymmetric Distillation to Bridge Spatio-Temporal Gaps in Event-Based Object Tracking [80.07224739976911]
RGB cameras excel at capturing rich texture with high resolution, whereas event cameras offer exceptional temporal resolution and dynamic range.
arXiv Detail & Related papers (2025-10-22T13:15:13Z) - Unlocking the Potential of Diffusion Priors in Blind Face Restoration [63.419272650578165]
In this work, we use a unified network, FLIPNET, that switches between two modes to resolve specific gaps. In Restoration mode, the model gradually integrates BFR-oriented features and face embeddings from LQ images to achieve authentic and faithful face restoration. In Degradation mode, the model synthesizes real-world-like degraded images based on the knowledge learned from real-world degradation datasets.
arXiv Detail & Related papers (2025-08-12T01:50:55Z) - DehazeMamba: SAR-guided Optical Remote Sensing Image Dehazing with Adaptive State Space Model [27.83437788159158]
We introduce DehazeMamba, a novel SAR-guided dehazing network built on a progressive haze decoupling fusion strategy.
Our approach incorporates two key innovations: a Haze Perception and Decoupling Module (HPDM) that dynamically identifies haze-affected regions through optical-SAR difference analysis, and a Progressive Fusion Module (PFM) that mitigates domain shift through a two-stage fusion process based on feature quality assessment.
Extensive experiments demonstrate that DehazeMamba significantly outperforms state-of-the-art methods, achieving a 0.73 dB improvement in PSNR and substantial enhancements in downstream tasks such as
arXiv Detail & Related papers (2025-03-17T11:25:05Z) - Physics-Inspired Degradation Models for Hyperspectral Image Fusion [61.743696362028246]
Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
arXiv Detail & Related papers (2024-02-04T09:07:28Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Diffusion Probabilistic Model Made Slim [128.2227518929644]
We introduce a customized design for slim diffusion probabilistic models (DPM) for light-weight image synthesis.
We achieve 8-18x computational complexity reduction as compared to the latent diffusion models on a series of conditional and unconditional image generation tasks.
arXiv Detail & Related papers (2022-11-27T16:27:28Z) - AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using
Denoising Diffusion Probabilistic Models [64.24948495708337]
Atmospheric turbulence causes significant degradation to image quality by introducing blur and geometric distortion.
Various deep learning-based single image atmospheric turbulence mitigation methods, including CNN-based and GAN inversion-based, have been proposed.
Denoising Diffusion Probabilistic Models (DDPMs) have recently gained some traction because of their stable training process and their ability to generate high quality images.
arXiv Detail & Related papers (2022-08-24T03:13:04Z) - Towards a Unified Approach to Single Image Deraining and Dehazing [16.383099109400156]
We develop a new physical model for the rain effect and show that the well-known atmosphere scattering model (ASM) for the haze effect naturally emerges as its homogeneous continuous limit.
We also propose a Densely Scale-Connected Attentive Network (DSCAN) that is suitable for both deraining and dehazing tasks.
arXiv Detail & Related papers (2021-03-26T01:35:43Z) - A GAN-Based Input-Size Flexibility Model for Single Image Dehazing [16.83211957781034]
This paper concentrates on the challenging task of single image dehazing.
We design a novel model to directly generate the haze-free image.
To handle various image sizes, we propose a novel input-size flexibility conditional generative adversarial network (cGAN) for single image dehazing.
arXiv Detail & Related papers (2021-02-19T08:27:17Z) - FWB-Net:Front White Balance Network for Color Shift Correction in Single
Image Dehazing via Atmospheric Light Estimation [42.20480089840438]
A non-homogeneous atmospheric scattering model (NH-ASM) is proposed to improve the modeling of hazy images.
A new U-Net-based front white balance module (FWB-Module) is designed to correct color shift.
An end-to-end CNN-based color-shift-restraining dehazing network, termed FWB-Net, is developed.
arXiv Detail & Related papers (2021-01-21T06:53:44Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.