Enhancing Visibility in Nighttime Haze Images Using Guided APSF and
Gradient Adaptive Convolution
- URL: http://arxiv.org/abs/2308.01738v4
- Date: Sun, 21 Jan 2024 13:27:31 GMT
- Authors: Yeying Jin, Beibei Lin, Wending Yan, Yuan Yuan, Wei Ye, and Robby T.
Tan
- Abstract summary: Existing nighttime dehazing methods often struggle with handling glow or low-light conditions.
In this paper, we enhance the visibility from a single nighttime haze image by suppressing glow and enhancing low-light regions.
Our method achieves a PSNR of 30.38 dB, outperforming state-of-the-art methods by 13% on the GTA5 nighttime haze dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visibility in hazy nighttime scenes is frequently reduced by multiple
factors, including low light, intense glow, light scattering, and the presence
of multicolored light sources. Existing nighttime dehazing methods often
struggle with handling glow or low-light conditions, resulting in either
excessively dark visuals or unsuppressed glow outputs. In this paper, we
enhance the visibility from a single nighttime haze image by suppressing glow
and enhancing low-light regions. To handle glow effects, our framework learns
from the rendered glow pairs. Specifically, a light source aware network is
proposed to detect light sources of night images, followed by the APSF
(Atmospheric Point Spread Function)-guided glow rendering. Our framework is
then trained on the rendered images, resulting in glow suppression. Moreover,
we utilize gradient-adaptive convolution to capture edges and textures in hazy
scenes. By leveraging extracted edges and textures, we enhance the contrast of
the scene without losing important structural details. To boost low-light
intensity, our network learns an attention map, which is then adjusted by gamma
correction. This attention map has high values in low-light regions and low
values in haze and glow regions. Extensive evaluation on real nighttime haze
images demonstrates the effectiveness of our method. Our experiments show that
our method achieves a PSNR of 30.38 dB, outperforming state-of-the-art methods
by 13% on the GTA5 nighttime haze dataset. Our data and code are available at
https://github.com/jinyeying/nighttime_dehaze.
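As a rough, hand-crafted sketch of two of the ideas above, gradient-adaptive filtering and attention-guided gamma correction (the paper learns both components; the box kernel, the exponential edge weighting, and the exponent mapping below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def gradient_adaptive_smooth(img, alpha=1.0):
    # Modulate a fixed 3x3 box filter by local gradient magnitude so that
    # edges and textures are kept while flat (hazy) regions are smoothed.
    # The paper learns this behavior; this weighting is a stand-in.
    gy, gx = np.gradient(img)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    w = np.exp(-alpha * grad_mag)        # -> 1 on flat regions, -> 0 on edges
    h, wd = img.shape
    p = np.pad(img, 1, mode="edge")
    smoothed = sum(p[i:i + h, j:j + wd]  # 3x3 box filter via shifted sums
                   for i in range(3) for j in range(3)) / 9.0
    return w * smoothed + (1.0 - w) * img

def attention_gamma_boost(img, attn, k=1.0):
    # One plausible reading of "attention map adjusted by gamma correction":
    # a per-pixel gamma exponent that brightens pixels where attention is
    # high (low-light regions) and leaves regions with attn ~ 0 (haze, glow)
    # untouched. img and attn are assumed to be in [0, 1].
    return img ** (1.0 / (1.0 + k * attn))
```

For `img` in [0, 1], `attention_gamma_boost` leaves pixels with zero attention unchanged and raises dark pixels toward brighter values as attention approaches 1, matching the described behavior of the attention map without claiming the paper's exact formulation.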
Related papers
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and
Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) that fuses infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used for the adjustment of light distribution.
Second, we utilize detection features extracted by a pre-trained backbone that guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain the fused image to normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z) - You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z) - NDELS: A Novel Approach for Nighttime Dehazing, Low-Light Enhancement,
and Light Suppression [4.976703689624386]
This paper introduces a pioneering solution named Nighttime Dehazing, Low-Light Enhancement, and Light Suppression (NDELS).
NDELS utilizes a unified network that combines three essential processes: enhancing overall visibility, brightening low-light regions, and suppressing glare from bright light sources.
The efficacy of NDELS is rigorously validated through extensive comparisons with eight state-of-the-art algorithms across four diverse datasets.
arXiv Detail & Related papers (2023-12-11T21:38:32Z) - Illumination Distillation Framework for Nighttime Person
Re-Identification and A New Benchmark [29.6321130075977]
This paper proposes an Illumination Distillation Framework (IDF) to address the low illumination challenge in nighttime person Re-ID.
IDF consists of a master branch, an illumination enhancement branch, and an illumination distillation module.
We build a real-world nighttime person Re-ID dataset, named Night600, which contains 600 identities.
arXiv Detail & Related papers (2023-08-31T06:45:56Z) - From Generation to Suppression: Towards Effective Irregular Glow Removal
for Nighttime Visibility Enhancement [22.565044107631696]
Existing Low-Light Image Enhancement (LLIE) methods are primarily designed to improve brightness in dark regions, which suffer severe degradation in nighttime images.
These methods leave another major source of visibility damage largely unexplored: the glow effects in real night scenes.
We propose a new method for learning physical glow generation via multiple scattering estimation according to the Atmospheric Point Spread Function (APSF).
The proposed method is based on zero-shot learning and does not rely on any paired or unpaired training data. Empirical evaluations demonstrate the effectiveness of the proposed method in both glow suppression and low-light enhancement tasks.
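The APSF appears both here and in the main paper above as the model that spreads light-source intensity into glow. A crude numerical stand-in, assuming an isotropic heavy-tailed kernel in place of the true multiple-scattering APSF (the `q` falloff parameter is illustrative, not the APSF's physical forward-scattering parameter):

```python
import numpy as np

def render_glow(light_map, kernel_size=31, q=1.5):
    # Spread a detected light-source intensity map into glow by convolving
    # it with a normalized, isotropic, heavy-tailed kernel. This imitates
    # APSF-guided glow rendering only in spirit; it is not the APSF itself.
    r = kernel_size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    d = np.sqrt(x ** 2 + y ** 2)
    kernel = 1.0 / (1.0 + d) ** q        # heavy tail -> long glow halos
    kernel /= kernel.sum()               # conserve total light energy
    h, w = light_map.shape
    p = np.pad(light_map, r, mode="constant")
    glow = np.zeros((h, w))
    for dy in range(kernel_size):        # direct convolution (symmetric
        for dx in range(kernel_size):    # kernel); slow but clear
            glow += kernel[dy, dx] * p[dy:dy + h, dx:dx + w]
    return glow
```

A single point source then produces a radially decaying halo whose total intensity matches the source (away from image borders), which is the kind of rendered glow pair such frameworks train on.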
arXiv Detail & Related papers (2023-07-31T15:51:15Z) - Flare7K++: Mixing Synthetic and Real Datasets for Nighttime Flare
Removal and Beyond [77.72043833102191]
We introduce the first comprehensive nighttime flare removal dataset, consisting of 962 real-captured flare images (Flare-R) and 7,000 synthetic flares (Flare7K).
Compared to Flare7K, Flare7K++ is particularly effective in eliminating complicated degradation around the light source, which is intractable by using synthetic flares alone.
In addition, we provide annotations of light sources in Flare7K++ and propose a new end-to-end pipeline that preserves the light source while removing lens flares.
arXiv Detail & Related papers (2023-06-07T08:27:44Z) - Boosting Night-time Scene Parsing with Learnable Frequency [53.05778451012621]
Night-Time Scene Parsing (NTSP) is essential to many vision applications, especially for autonomous driving.
Most of the existing methods are proposed for day-time scene parsing.
We show that our method performs favorably against the state-of-the-art methods on the NightCity, NightCity+ and BDD100K-night datasets.
arXiv Detail & Related papers (2022-08-30T13:09:59Z) - Unsupervised Night Image Enhancement: When Layer Decomposition Meets
Light-Effects Suppression [67.7508230688415]
We introduce an unsupervised method that integrates a layer decomposition network and a light-effects suppression network.
Our method outperforms state-of-the-art methods in suppressing night light effects and boosting the intensity of dark regions.
arXiv Detail & Related papers (2022-07-21T16:10:24Z) - Nighttime Dehazing with a Synthetic Benchmark [147.21955799938115]
We propose a novel synthetic method called 3R to simulate nighttime hazy images from daytime clear images.
We generate realistic nighttime hazy images by sampling real-world light colors from a prior empirical distribution.
Experimental results demonstrate its superiority over state-of-the-art methods in terms of both image quality and runtime.
arXiv Detail & Related papers (2020-08-10T02:16:46Z)