FWB-Net: Front White Balance Network for Color Shift Correction in Single
Image Dehazing via Atmospheric Light Estimation
- URL: http://arxiv.org/abs/2101.08465v1
- Date: Thu, 21 Jan 2021 06:53:44 GMT
- Title: FWB-Net: Front White Balance Network for Color Shift Correction in Single
Image Dehazing via Atmospheric Light Estimation
- Authors: Cong Wang, Yan Huang, Yuexian Zou, Yong Xu
- Abstract summary: A non-homogeneous atmospheric scattering model (NH-ASM) is proposed to improve the modeling of hazy images.
A new U-Net based front white balance module (FWB-Module) is specifically designed to correct color shift.
An end-to-end CNN-based color-shift-restraining dehazing network, termed FWB-Net, is developed.
- Score: 42.20480089840438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, single image dehazing deep models based on the Atmospheric
Scattering Model (ASM) have achieved remarkable results, but their dehazing
outputs suffer from color shift. Analysis of the ASM shows that the atmospheric
light factor (ALF) is set as a scalar, i.e., the ALF is assumed constant over
the whole image. For images taken in the real world, however, the illumination
is not uniformly distributed over the whole image, which introduces model
mismatch and can cause color shift in deep models built on the ASM. Bearing
this in mind, this study first proposes a new non-homogeneous atmospheric
scattering model (NH-ASM) to better model hazy images taken under complex
illumination conditions. Second, a new U-Net based front white balance module
(FWB-Module) is specifically designed to correct color shift, via atmospheric
light estimation, before the dehazing result is generated. Third, a new FWB
loss that imposes a penalty on color shift is developed for training the
FWB-Module. Finally, based on the NH-ASM and front white balance technology, an
end-to-end CNN-based color-shift-restraining dehazing network, termed FWB-Net,
is developed. Experimental results demonstrate the effectiveness and
superiority of the proposed FWB-Net for dehazing on both synthetic and
real-world images.
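The ASM the abstract builds on is the standard haze formation model I(x) = J(x)t(x) + A(1 - t(x)), where I is the hazy image, J the clean scene radiance, t the transmission, and A the atmospheric light. The following NumPy sketch (illustrative, not the paper's network) shows the forward model and its inversion; the scalar A here is exactly the constant ALF the abstract criticizes, which the proposed NH-ASM would replace with a spatially varying map.

```python
import numpy as np

def dehaze_asm(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy: HxWx3 float image in [0, 1]
    transmission: HxW transmission map t(x) in (0, 1]
    atmospheric_light: scalar A (constant ALF) or an HxWx3 map
                       (the non-homogeneous case the paper argues for)
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # avoid division blow-up
    dehazed = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(dehazed, 0.0, 1.0)

# Synthesize a hazy image with known J, t, A, then recover J exactly.
rng = np.random.default_rng(0)
J = rng.uniform(0.2, 0.8, size=(4, 4, 3))        # clean scene radiance
t = np.full((4, 4), 0.6)                          # uniform transmission
A = 0.9                                           # scalar atmospheric light
I = J * t[..., None] + A * (1.0 - t[..., None])   # forward ASM
J_hat = dehaze_asm(I, t, A)
print(np.allclose(J, J_hat))  # True: inversion is exact when t and A are known
```

In practice t and A must be estimated from the hazy image alone; when the true illumination varies across the scene but A is fit as a single scalar, the inversion over- or under-corrects per region, which is the color-shift mechanism the abstract describes.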
Related papers
- From Cheap to Pro: A Learning-based Adaptive Camera Parameter Network for Professional-Style Imaging [0.07829352305480283]
ACamera-Net is a lightweight and scene-adaptive camera parameter adjustment network. It predicts optimal exposure and white balance from RAW inputs, and consistently enhances image quality and stabilizes perception outputs.
arXiv Detail & Related papers (2025-10-23T13:35:17Z) - Enhancing Infrared Vision: Progressive Prompt Fusion Network and Benchmark [58.61079960074608]
Existing infrared image enhancement methods focus on tackling individual degradations. All-in-one enhancement methods, commonly applied to RGB sensors, often demonstrate limited effectiveness.
arXiv Detail & Related papers (2025-10-10T12:55:54Z) - Unlocking the Potential of Diffusion Priors in Blind Face Restoration [63.419272650578165]
In this work, we use a unified network, FLIPNET, that switches between two modes to resolve specific gaps. In Restoration mode, the model gradually integrates BFR-oriented features and face embeddings from LQ images to achieve authentic and faithful face restoration. In Degradation mode, the model synthesizes real-world-like degraded images based on knowledge learned from real-world degradation datasets.
arXiv Detail & Related papers (2025-08-12T01:50:55Z) - SAR to Optical Image Translation with Color Supervised Diffusion Model [5.234109158596138]
This paper introduces an innovative generative model designed to transform SAR images into more intelligible optical images.
We employ SAR images as conditional guides in the sampling process and integrate color supervision to counteract color shift issues.
arXiv Detail & Related papers (2024-07-24T01:11:28Z) - Dual High-Order Total Variation Model for Underwater Image Restoration [13.789310785350484]
Underwater image enhancement and restoration (UIER) is a crucial means of improving the visual quality of underwater images.
We propose an effective variational framework based on an extended underwater image formation model (UIFM).
In our proposed framework, weight-factor-based color compensation is combined with color balance to compensate for the attenuated color channels and remove the color cast.
arXiv Detail & Related papers (2024-07-20T13:06:37Z) - Distilling Semantic Priors from SAM to Efficient Image Restoration Models [80.83077145948863]
In image restoration (IR), leveraging semantic priors from segmentation models has been a common approach to improve performance.
Recent segment anything model (SAM) has emerged as a powerful tool for extracting advanced semantic priors to enhance IR tasks.
We propose a general framework to distill SAM's semantic knowledge to boost existing IR models without interfering with their inference process.
arXiv Detail & Related papers (2024-03-25T02:17:20Z) - Physics-Driven Turbulence Image Restoration with Stochastic Refinement [80.79900297089176]
Image distortion by atmospheric turbulence is a critical problem in long-range optical imaging systems.
Fast and physics-grounded simulation tools have been introduced to help the deep-learning models adapt to real-world turbulence conditions.
This paper proposes the Physics-integrated Restoration Network (PiRN) to help the network disentangle the stochasticity from the degradation and the underlying image.
arXiv Detail & Related papers (2023-07-20T05:49:21Z) - Relightify: Relightable 3D Faces from a Single Image via Diffusion
Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, thus, resulting in more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z) - Auto White-Balance Correction for Mixed-Illuminant Scenes [52.641704254001844]
Auto white balance (AWB) is applied by camera hardware to remove color cast caused by scene illumination.
This paper presents an effective AWB method to deal with such mixed-illuminant scenes.
Our method does not require illuminant estimation, as is the case in traditional camera AWB modules.
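White balance, central to both the AWB paper above and the FWB-Module in the main paper, can be illustrated by the classical gray-world algorithm (not either paper's method): each channel is rescaled so its mean matches the global mean, which removes a uniform color cast but, as the AWB paper notes, fails when a scene mixes several illuminants.

```python
import numpy as np

def gray_world_wb(img):
    """Classical gray-world white balance: scale each channel so its
    mean matches the global mean, removing a uniform color cast."""
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means
    gains = means.mean() / np.maximum(means, 1e-6)   # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# A neutral gray scene tinted by a single warm illuminant.
gray = np.full((8, 8, 3), 0.5)
tinted = np.clip(gray * np.array([1.2, 1.0, 0.8]), 0.0, 1.0)
corrected = gray_world_wb(tinted)
print(np.allclose(corrected, gray))  # True: uniform cast fully removed
```

The single per-channel gain is exactly what breaks down under mixed illuminants, motivating learned, spatially aware corrections such as the FWB-Module.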
arXiv Detail & Related papers (2021-09-17T20:13:31Z) - Fully Non-Homogeneous Atmospheric Scattering Modeling with Convolutional
Neural Networks for Single Image Dehazing [42.20480089840438]
Single image dehazing models (SIDM) based on atmospheric scattering model (ASM) have achieved remarkable results.
In this study, a new fully non-homogeneous atmospheric scattering model (FNH-ASM) is proposed for well modeling the hazy images.
Two new cost sensitive loss functions, beta-Loss and D-Loss, are innovatively developed for limiting the parameter bias of sensitive positions.
arXiv Detail & Related papers (2021-08-25T15:27:44Z) - A GAN-Based Input-Size Flexibility Model for Single Image Dehazing [16.83211957781034]
This paper concentrates on the challenging task of single image dehazing.
We design a novel model to directly generate the haze-free image.
Considering this reason and various image sizes, we propose a novel input-size flexibility conditional generative adversarial network (cGAN) for single image dehazing.
arXiv Detail & Related papers (2021-02-19T08:27:17Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.