NightHaze: Nighttime Image Dehazing via Self-Prior Learning
- URL: http://arxiv.org/abs/2403.07408v1
- Date: Tue, 12 Mar 2024 08:35:42 GMT
- Title: NightHaze: Nighttime Image Dehazing via Self-Prior Learning
- Authors: Beibei Lin, Yeying Jin, Wending Yan, Wei Ye, Yuan Yuan and Robby T.
Tan
- Abstract summary: Masked autoencoder (MAE) shows that severe augmentation during training produces robust representations for high-level tasks.
We propose a novel nighttime image dehazing method with self-prior learning.
Our NightHaze, especially our MAE-like self-prior learning, shows that models trained with severe augmentation effectively improve the visibility of hazy input images.
- Score: 30.395213789178275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Masked autoencoder (MAE) shows that severe augmentation during training
produces robust representations for high-level tasks. This paper brings the
MAE-like framework to nighttime image enhancement, demonstrating that severe
augmentation during training produces strong network priors that are resilient
to real-world night haze degradations. We propose a novel nighttime image
dehazing method with self-prior learning. Our main novelty lies in the design
of severe augmentation, which allows our model to learn robust priors. Unlike
MAE, which uses masking, we leverage two key challenging factors of nighttime
images as augmentations: light effects and noise. During training, we
intentionally degrade clear images by blending them with light effects as well
as by adding noise, and subsequently restore the clear images. This enables our
model to learn clear background priors. By increasing the noise values until
they approach the pixel intensities of the glow- and light-effect-blended
images, our augmentation becomes severe, resulting in stronger priors.
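For concreteness, the degradation step might look roughly like the sketch below. This is a minimal sketch, not the authors' exact recipe: the additive blending, the Gaussian noise model, and the `light_map` input are all assumptions.

```python
import numpy as np

def severe_augment(clear, light_map, noise_std=0.5, rng=None):
    """Degrade a clear night image for self-prior learning (sketch).

    clear:     HxWx3 float array in [0, 1], a clear nighttime image.
    light_map: HxWx3 float array in [0, 1], a glow / light-effect layer.
    noise_std: noise scale; pushing it toward the blended pixel
               intensities is what makes the augmentation "severe".
    """
    rng = rng or np.random.default_rng()
    # Blend the clear image with light effects (additive glow, clipped).
    blended = np.clip(clear + light_map, 0.0, 1.0)
    # Add noise whose magnitude approaches the blended pixel intensities.
    noise = rng.normal(0.0, noise_std, blended.shape)
    degraded = np.clip(blended + noise, 0.0, 1.0)
    return degraded  # network input; the training target is `clear`
```

The model is trained to map `degraded` back to `clear`, so raising `noise_std` directly controls how severe the augmentation, and hence how strong the learned prior, becomes.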
While our self-prior learning is considerably effective in suppressing glow
and revealing background details, in some cases undesired artifacts remain,
particularly in the form of over-suppression.
To address these artifacts, we propose a self-refinement module based on the
semi-supervised teacher-student framework. Our NightHaze, especially our
MAE-like self-prior learning, shows that models trained with severe
augmentation effectively improve the visibility of hazy input images,
approaching the clarity of clear nighttime images. Extensive experiments
demonstrate that our NightHaze achieves state-of-the-art performance,
outperforming existing nighttime image dehazing methods by a substantial margin
of 15.5% for MUSIQ and 23.5% for ClipIQA.
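The abstract does not spell out the self-refinement module beyond naming a semi-supervised teacher-student framework. The sketch below shows one common instantiation (a mean teacher with exponentially averaged weights); the EMA update, the L1 consistency loss, and the function names are assumptions, not the paper's confirmed design.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    # The teacher starts as a frozen copy of the student.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Exponential-moving-average update, as in mean-teacher training.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def self_refinement_step(student, teacher, real_hazy, optimizer):
    # The teacher pseudo-labels real night-haze images; the student is
    # pulled toward those pseudo-clean targets.
    with torch.no_grad():
        pseudo_clean = teacher(real_hazy)
    loss = F.l1_loss(student(real_hazy), pseudo_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```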
Related papers
- Night-to-Day Translation via Illumination Degradation Disentanglement [51.77716565167767]
Night-to-Day translation aims to achieve day-like vision for nighttime scenes.
Processing night images with complex degradations remains a significant challenge under unpaired conditions.
We propose N2D3 to identify different degradation patterns in nighttime images.
arXiv Detail & Related papers (2024-11-21T08:51:32Z)
- DAP-LED: Learning Degradation-Aware Priors with CLIP for Joint Low-light Enhancement and Deblurring [14.003870853594972]
We propose a novel transformer-based joint learning framework, named DAP-LED.
It can jointly achieve low-light enhancement and deblurring, benefiting downstream tasks, such as depth estimation, segmentation, and detection in the dark.
The key insight is to leverage CLIP to adaptively learn the degradation levels from images at night.
arXiv Detail & Related papers (2024-09-20T13:37:53Z)
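One plausible, hedged reading of DAP-LED's use of CLIP to estimate degradation levels is zero-shot prompt scoring, sketched below; the prompts and the use of OpenAI's `clip` package are illustrative assumptions, not the paper's actual design.

```python
import torch
import clip  # OpenAI CLIP package, used here as a stand-in

def degradation_scores(pil_image, prompts=("a clean sharp photo",
                                           "a dark underexposed photo",
                                           "a noisy blurry photo")):
    """Score how strongly an image matches degradation prompts (sketch)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        img = preprocess(pil_image).unsqueeze(0).to(device)
        img_feat = model.encode_image(img)
        txt_feat = model.encode_text(clip.tokenize(list(prompts)).to(device))
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        # Cosine similarities, softmaxed into per-prompt scores.
        return (100.0 * img_feat @ txt_feat.T).softmax(dim=-1).squeeze(0)
```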
- FreeEnhance: Tuning-Free Image Enhancement via Content-Consistent Noising-and-Denoising Process [120.91393949012014]
FreeEnhance is a framework for content-consistent image enhancement using off-the-shelf image diffusion models.
In the noising stage, FreeEnhance adds lighter noise to regions with higher frequency, preserving the high-frequency patterns of the original image.
In the denoising stage, we present three target properties as constraints to regularize the predicted noise, enhancing images with high acutance and high visual quality.
arXiv Detail & Related papers (2024-09-11T17:58:50Z)
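A minimal sketch of the frequency-adaptive noising described in the FreeEnhance entry above: the Laplacian-magnitude frequency estimate and the linear noise scaling are assumptions standing in for FreeEnhance's actual scheme.

```python
import numpy as np
from scipy import ndimage

def frequency_adaptive_noise(img, base_sigma=0.1, rng=None):
    """Add lighter noise where local frequency is higher (sketch).

    img: HxW float array in [0, 1] (grayscale for simplicity).
    """
    rng = rng or np.random.default_rng()
    # Laplacian magnitude as a crude local-frequency estimate.
    freq = np.abs(ndimage.laplace(img))
    freq = freq / (freq.max() + 1e-8)
    # Higher-frequency regions receive a smaller noise std.
    sigma = base_sigma * (1.0 - freq)
    noised = img + rng.normal(0.0, 1.0, img.shape) * sigma
    return np.clip(noised, 0.0, 1.0)
```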
- Self-Supervised Monocular Depth Estimation in the Dark: Towards Data Distribution Compensation [24.382795861986803]
Using night images for self-supervision is unreliable because the photometric consistency assumption is usually violated in the videos taken under complex lighting conditions.
We propose a self-supervised nighttime monocular depth estimation method that does not use any night images during training.
arXiv Detail & Related papers (2024-04-22T03:39:03Z)
- From Generation to Suppression: Towards Effective Irregular Glow Removal for Nighttime Visibility Enhancement [22.565044107631696]
Existing Low-Light Image Enhancement (LLIE) methods are primarily designed to improve brightness in dark regions, which suffer from severe degradation in nighttime images.
These methods leave another major source of visibility damage largely unexplored: the glow effects in real night scenes.
We propose a new method for learning physical glow generation via multiple scattering estimation according to the Atmospheric Point Spread Function (APSF).
The proposed method is based on zero-shot learning and does not rely on any paired or unpaired training data. Empirical evaluations demonstrate the effectiveness of the proposed method in both glow suppression and low-light enhancement tasks.
arXiv Detail & Related papers (2023-07-31T15:51:15Z)
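The true APSF models multiple scattering and is considerably more involved than any short snippet; the sketch below only conveys the flavor of physically rendering glow around bright sources, with a heavy-tailed radial kernel as a crude, assumed stand-in for the real APSF.

```python
import numpy as np
from scipy import ndimage

def render_glow(img, threshold=0.8, ksize=61, tail=1.2):
    """Synthesize glow around bright sources (sketch, not the paper's APSF).

    img: HxW float array in [0, 1] (grayscale for simplicity).
    """
    # Extract bright light sources.
    sources = np.where(img > threshold, img, 0.0)
    # Heavy-tailed radial kernel as an APSF stand-in.
    r = np.hypot(*np.meshgrid(*[np.arange(ksize) - ksize // 2] * 2))
    kernel = 1.0 / (1.0 + r) ** tail
    kernel /= kernel.sum()
    glow = ndimage.convolve(sources, kernel, mode="nearest")
    return np.clip(img + glow, 0.0, 1.0)
```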
- NightHazeFormer: Single Nighttime Haze Removal Using Prior Query Transformer [39.90066556289063]
We propose an end-to-end transformer-based framework for nighttime haze removal, called NightHazeFormer.
Our proposed approach consists of two stages: supervised pre-training and semi-supervised fine-tuning.
Experiments on several synthetic and real-world datasets demonstrate the superiority of our NightHazeFormer over state-of-the-art nighttime haze removal methods.
arXiv Detail & Related papers (2023-05-16T15:26:09Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
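The pixel-masking step described in the masked-training entry above reduces to something like the sketch below; the mask ratio and per-pixel Bernoulli masking are assumptions, with the network then trained to reconstruct the unmasked image.

```python
import torch

def mask_random_pixels(img, mask_ratio=0.8):
    """Randomly drop pixels; the denoiser must reconstruct them (sketch).

    img: NxCxHxW tensor. Returns the masked input and the binary keep-mask.
    """
    n, _, h, w = img.shape
    keep = (torch.rand(n, 1, h, w, device=img.device) > mask_ratio).float()
    return img * keep, keep
```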
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
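A hedged sketch of the per-pixel intensity transformation from the entry above: a tiny network predicts per-pixel gain and bias from a frame pair and re-exposes the source frame before the photometric loss. The architecture and the affine form are assumptions, not the paper's confirmed design.

```python
import torch
import torch.nn as nn

class IntensityTransform(nn.Module):
    """Per-pixel affine brightness compensation between frames (sketch)."""

    def __init__(self):
        super().__init__()
        # Tiny conv net mapping a frame pair to per-pixel (gain, bias).
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, frame_t, frame_s):
        # frame_t, frame_s: Nx3xHxW successive frames.
        gain, bias = self.net(torch.cat([frame_t, frame_s], 1)).chunk(2, dim=1)
        # Compensate frame_s before computing the photometric loss.
        return frame_s * (1.0 + gain) + bias
```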
- Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision [73.18554605744842]
Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy at the feature level.
arXiv Detail & Related papers (2020-08-16T06:07:00Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)