Seeing Beyond Haze: Generative Nighttime Image Dehazing
- URL: http://arxiv.org/abs/2503.08073v1
- Date: Tue, 11 Mar 2025 06:08:25 GMT
- Title: Seeing Beyond Haze: Generative Nighttime Image Dehazing
- Authors: Beibei Lin, Stephen Lin, Robby Tan
- Abstract summary: BeyondHaze is a generative nighttime dehazing method that infers background information in regions where it may be absent. Our approach is developed on two main ideas: gaining strong background priors by adapting image diffusion models to the nighttime dehazing problem, and enhancing generative ability for haze- and glow-obscured scene areas through guided training. Experiments on real-world images demonstrate that BeyondHaze effectively restores visibility in dense nighttime haze.
- Score: 16.949777020546264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nighttime image dehazing is particularly challenging when dense haze and intense glow severely degrade or completely obscure background information. Existing methods often encounter difficulties due to insufficient background priors and limited generative ability, both essential for handling such conditions. In this paper, we introduce BeyondHaze, a generative nighttime dehazing method that not only significantly reduces haze and glow effects but also infers background information in regions where it may be absent. Our approach is developed on two main ideas: gaining strong background priors by adapting image diffusion models to the nighttime dehazing problem, and enhancing generative ability for haze- and glow-obscured scene areas through guided training. Task-specific nighttime dehazing knowledge is distilled into an image diffusion model in a manner that preserves its capacity to generate clean images. The diffusion model is additionally trained on image pairs designed to improve its ability to generate background details and content that are missing in the input image due to haze effects. Since generative models are susceptible to hallucinations, we develop our framework to allow user control over the generative level, balancing visual realism and factual accuracy. Experiments on real-world images demonstrate that BeyondHaze effectively restores visibility in dense nighttime haze.
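The paper's implementation is not reproduced here; as a minimal sketch of the user-controllable generative level it describes, one might blend a fidelity-preserving restoration with a fully generative diffusion output via a single user weight (the function name and the two-branch setup are illustrative assumptions, not the authors' API):

```python
import numpy as np

def blend_generative_level(fidelity_out, generative_out, level):
    """Blend a fidelity-preserving restoration with a fully generative one.

    level = 0.0 -> stay close to the observed content (factual accuracy);
    level = 1.0 -> trust the diffusion model's generated details (visual realism).
    Both inputs are float arrays in [0, 1] with identical shapes.
    """
    level = float(np.clip(level, 0.0, 1.0))
    return (1.0 - level) * fidelity_out + level * generative_out
```

In this sketch the user trades hallucination risk against restored detail continuously, rather than choosing one fixed operating point.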
Related papers
- Learning Hazing to Dehazing: Towards Realistic Haze Generation for Real-World Image Dehazing [59.43187521828543]
We introduce a novel hazing-dehazing pipeline consisting of a Realistic Hazy Image Generation framework (HazeGen) and a Diffusion-based Dehazing framework (DiffDehaze)
HazeGen harnesses robust generative diffusion priors of real-world hazy images embedded in a pre-trained text-to-image diffusion model.
By employing specialized hybrid training and blended sampling strategies, HazeGen produces realistic and diverse hazy images as high-quality training data for DiffDehaze.
arXiv Detail & Related papers (2025-03-25T01:55:39Z) - Learning Flow Fields in Attention for Controllable Person Image Generation [59.10843756343987]
Controllable person image generation aims to generate a person image conditioned on reference images. We propose learning flow fields in attention (Leffa), which explicitly guides the target query to attend to the correct reference key. Leffa achieves state-of-the-art performance in controlling appearance (virtual try-on) and pose (pose transfer), significantly reducing fine-grained detail distortion.
arXiv Detail & Related papers (2024-12-11T15:51:14Z) - NightHaze: Nighttime Image Dehazing via Self-Prior Learning [28.685126418090338]
Masked autoencoder (MAE) training shows that severe augmentation produces robust representations for high-level tasks. We propose a novel nighttime image dehazing method with self-prior learning. Our NightHaze, particularly its MAE-like self-prior learning, demonstrates that models trained with severe augmentation effectively improve the visibility of hazy input images.
arXiv Detail & Related papers (2024-03-12T08:35:42Z) - Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution [28.685126418090338]
Existing nighttime dehazing methods often struggle with handling glow or low-light conditions.
In this paper, we enhance the visibility from a single nighttime haze image by suppressing glow and enhancing low-light regions.
Our method achieves a PSNR of 30.38 dB, outperforming state-of-the-art methods by 13% on the GTA5 nighttime haze dataset.
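PSNR, the metric reported above, has a standard definition; a minimal implementation for images normalized to [0, 1] might look like this (the function name is illustrative):

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val].

    PSNR = 10 * log10(max_val^2 / MSE); higher is better,
    and identical images give infinity.
    """
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(restored, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)
```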
arXiv Detail & Related papers (2023-08-03T12:58:23Z) - NightHazeFormer: Single Nighttime Haze Removal Using Prior Query Transformer [39.90066556289063]
We propose an end-to-end transformer-based framework for nighttime haze removal, called NightHazeFormer.
Our proposed approach consists of two stages: supervised pre-training and semi-supervised fine-tuning.
Experiments on several synthetic and real-world datasets demonstrate the superiority of our NightHazeFormer over state-of-the-art nighttime haze removal methods.
arXiv Detail & Related papers (2023-05-16T15:26:09Z) - SCANet: Self-Paced Semi-Curricular Attention Network for Non-Homogeneous Image Dehazing [56.900964135228435]
Existing homogeneous dehazing methods struggle to handle the non-uniform distribution of haze in a robust manner.
We propose a novel self-paced semi-curricular attention network, called SCANet, for non-homogeneous image dehazing.
Our approach consists of an attention generator network and a scene reconstruction network.
arXiv Detail & Related papers (2023-04-17T17:05:29Z) - See Blue Sky: Deep Image Dehaze Using Paired and Unpaired Training Images [73.23687409870656]
We propose a cycle generative adversarial network to construct a novel end-to-end image dehaze model.
We train our model on outdoor image datasets, including a real-world unpaired image dataset and a paired image dataset.
Based on the cycle structure, our model applies four loss functions to constrain the results: adversarial loss, cycle consistency loss, photorealism loss, and paired L1 loss.
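The abstract does not give the loss weights; a minimal sketch of how such a four-term training objective is typically combined, with illustrative (assumed) weights, could be:

```python
def total_dehaze_loss(adv, cycle, photo, l1,
                      w_adv=1.0, w_cycle=10.0, w_photo=1.0, w_l1=1.0):
    """Weighted sum of the four loss terms named in the abstract.

    adv: adversarial loss, cycle: cycle consistency loss,
    photo: photorealism loss, l1: paired L1 loss.
    Weights are illustrative defaults, not the paper's values.
    """
    return w_adv * adv + w_cycle * cycle + w_photo * photo + w_l1 * l1
```

A relatively large cycle-consistency weight (as in the original CycleGAN) is a common choice to keep unpaired translation from drifting, but the actual balance would need the paper's details.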
arXiv Detail & Related papers (2022-10-14T07:45:33Z) - Unsupervised Neural Rendering for Image Hazing [31.108654945661705]
Image hazing aims to render a hazy image from a given clean one, which could be applied to a variety of practical applications such as gaming, filming, photographic filtering, and image dehazing.
We propose a neural rendering method for image hazing, dubbed HazeGEN. Specifically, HazeGEN is a knowledge-driven neural network that estimates the transmission map by leveraging a new prior.
To adaptively learn the airlight, we build a neural module based on another new prior: the rendered hazy image and the exemplar share a similar airlight distribution.
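The transmission map and airlight mentioned above are the two unknowns of the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), which commonly underlies this kind of haze rendering. A minimal sketch of rendering a hazy image from a clean one, assuming the transmission and airlight are already estimated:

```python
import numpy as np

def render_hazy(clean, transmission, airlight):
    """Render a hazy image via the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x))
    clean: HxWx3 image in [0, 1] (J); transmission: HxW map in [0, 1] (t);
    airlight: length-3 global atmospheric light (A).
    """
    t = np.asarray(transmission)[..., None]  # broadcast t over color channels
    return np.asarray(clean) * t + np.asarray(airlight) * (1.0 - t)
```

Where t is near 0 (distant or dense haze) the output approaches the airlight color; where t is near 1 the clean scene passes through unchanged.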
arXiv Detail & Related papers (2021-07-14T13:15:14Z) - Non-Homogeneous Haze Removal via Artificial Scene Prior and Bidimensional Graph Reasoning [52.07698484363237]
We propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning.
Our method achieves superior performance over many state-of-the-art algorithms for both the single image dehazing and hazy image understanding tasks.
arXiv Detail & Related papers (2021-04-05T13:04:44Z) - Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision [73.18554605744842]
Flow-based Feature Warping Model (FFWM) learns to synthesize photo-realistic and illumination preserving frontal images.
An Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis.
A Warp Attention Module (WAM) is introduced to reduce the pose discrepancy in the feature level.
arXiv Detail & Related papers (2020-08-16T06:07:00Z) - Nighttime Dehazing with a Synthetic Benchmark [147.21955799938115]
We propose a novel synthetic method called 3R to simulate nighttime hazy images from daytime clear images.
We generate realistic nighttime hazy images by sampling real-world light colors from a prior empirical distribution.
Experimental results demonstrate the method's superiority over state-of-the-art approaches in terms of both image quality and runtime.
arXiv Detail & Related papers (2020-08-10T02:16:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.