Nighttime Dehazing with a Synthetic Benchmark
- URL: http://arxiv.org/abs/2008.03864v3
- Date: Mon, 19 Oct 2020 00:41:38 GMT
- Title: Nighttime Dehazing with a Synthetic Benchmark
- Authors: Jing Zhang and Yang Cao and Zheng-Jun Zha and Dacheng Tao
- Abstract summary: We propose a novel synthetic method called 3R to simulate nighttime hazy images from daytime clear images.
We generate realistic nighttime hazy images by sampling real-world light colors from a prior empirical distribution.
Experimental results demonstrate their superiority over state-of-the-art methods in terms of both image quality and runtime.
- Score: 147.21955799938115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing the visibility of nighttime hazy images is challenging because of
uneven illumination from active artificial light sources and haze
absorption/scattering. The absence of large-scale benchmark datasets hampers
progress in this area. To address this issue, we propose a novel synthetic
method called 3R to simulate nighttime hazy images from daytime clear images,
which first reconstructs the scene geometry, then simulates the light rays and
object reflectance, and finally renders the haze effects. Based on it, we
generate realistic nighttime hazy images by sampling real-world light colors
from a prior empirical distribution. Experiments on the synthetic benchmark
show that the degrading factors jointly reduce the image quality. To address
this issue, we propose an optimal-scale maximum reflectance prior that
disentangles color correction from haze removal, so the two can be addressed
sequentially. In addition, we devise a simple but effective learning-based
baseline which has an encoder-decoder structure based on the MobileNet-v2
backbone. Experimental results demonstrate their superiority over
state-of-the-art methods in terms of both image quality and runtime. Both the
dataset and source code will be available at https://github.com/chaimi2013/3R.
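For intuition, below is a minimal sketch of the nighttime scattering model that 3R's rendering stage builds on, plus a maximum-reflectance-style color correction. The light-color range, airlight scaling, and fixed window size are illustrative assumptions: 3R's actual pipeline reconstructs scene geometry and simulates light rays explicitly, and the paper's prior selects an optimal scale per image rather than a fixed window.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def simulate_nighttime_haze(clear, depth, beta=1.2, rng=None):
    """Render a nighttime hazy image from a clear image (H, W, 3) and a depth
    map (H, W) using the scattering model I = J * L * t + A * (1 - t), with
    transmission t = exp(-beta * depth). The light-color range below is an
    illustrative stand-in for 3R's empirical prior over real light colors."""
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.exp(-beta * depth)[..., None]                    # (H, W, 1) transmission
    light = rng.uniform([0.8, 0.6, 0.3], [1.0, 0.8, 0.5])   # warm artificial light
    airlight = 0.3 * light                                  # dim, colored airlight
    hazy = clear * light * t + airlight * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)

def mrp_color_correction(hazy, window=15):
    """Maximum-reflectance-style color correction: take the channel-wise local
    maximum as the illumination estimate and normalize by it. The fixed window
    stands in for the paper's per-image optimal scale."""
    illum = np.stack([maximum_filter(hazy[..., c], size=window)
                      for c in range(3)], axis=-1)
    return np.clip(hazy / np.maximum(illum, 1e-3), 0.0, 1.0)
```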
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
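As background for this setting, a minimal sketch of the classical per-pixel estimator for 1-bit quanta frames; bit2bit replaces this naive baseline with a learned, self-supervised reconstruction.

```python
import numpy as np

def photon_rate_from_binary(frames):
    """Classical per-pixel estimate of a dense intensity image from a stack of
    1-bit photon frames (shape (N, H, W)). Each binary detection is Bernoulli
    with p = 1 - exp(-rate), so the maximum-likelihood rate is -ln(1 - mean)."""
    mean = frames.mean(axis=0)                 # fraction of frames with a detection
    mean = np.clip(mean, 0.0, 1.0 - 1e-6)      # guard against log(0) at saturation
    return -np.log1p(-mean)                    # per-pixel photon rate (dense image)
```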
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Sun Off, Lights On: Photorealistic Monocular Nighttime Simulation for Robust Semantic Perception [53.631644875171595]
Nighttime scenes are hard to semantically perceive with learned models and annotate for humans.
Our method, named Sun Off, Lights On (SOLO), is the first to perform nighttime simulation on single images in a photorealistic fashion by operating in 3D.
Not only are the visual quality and photorealism of our nighttime images superior to those of competing approaches, including diffusion models, but the images also prove more beneficial for semantic nighttime segmentation in day-to-night adaptation.
arXiv Detail & Related papers (2024-07-29T18:00:09Z)
- IllumiNeRF: 3D Relighting Without Inverse Rendering [25.642960820693947]
We show how to relight each input image using an image diffusion model conditioned on target environment lighting and estimated object geometry.
We reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
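The three stages above compose as follows; this sketch uses hypothetical callables (a geometry estimator, a lighting-conditioned relighting diffusion model, and a NeRF trainer), not the paper's actual API.

```python
def illuminerf_style_pipeline(images, poses, target_lighting,
                              estimate_geometry, relight_with_diffusion,
                              train_nerf):
    """Sketch of the relight-then-reconstruct strategy. All callables are
    hypothetical stand-ins for the components described in the abstract."""
    geometry = estimate_geometry(images, poses)            # stage 1: object geometry
    relit = [relight_with_diffusion(img, target_lighting, geometry)
             for img in images]                            # stage 2: relight each input
    return train_nerf(relit, poses)                        # stage 3: NeRF for novel views
```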
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
- A Semi-supervised Nighttime Dehazing Baseline with Spatial-Frequency Aware and Realistic Brightness Constraint [19.723367790947684]
We propose a semi-supervised model for real-world nighttime dehazing.
First, spatial attention and frequency spectrum filtering are implemented as a spatial-frequency domain information interaction module.
Second, a pseudo-label-based retraining strategy and a local window-based brightness loss are designed for the semi-supervised training process to suppress haze and glow.
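A guess at what a local window-based brightness constraint could look like: compare window-averaged luminances of the prediction and a reference with realistic brightness. This is an illustrative reading, not the paper's exact loss.

```python
import torch.nn.functional as F

def local_window_brightness_loss(pred, ref, window=32):
    """Penalize the L1 gap between window-averaged luminances of the dehazed
    prediction and a brightness reference; both tensors are (B, 3, H, W)."""
    def luma(x):                                   # RGB -> luminance, (B, 1, H, W)
        return 0.299 * x[:, 0:1] + 0.587 * x[:, 1:2] + 0.114 * x[:, 2:3]
    pool = lambda x: F.avg_pool2d(luma(x), window, stride=window)
    return F.l1_loss(pool(pred), pool(ref))
```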
arXiv Detail & Related papers (2024-03-27T13:27:02Z)
- Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution [28.685126418090338]
Existing nighttime dehazing methods often struggle with handling glow or low-light conditions.
In this paper, we enhance the visibility from a single nighttime haze image by suppressing glow and enhancing low-light regions.
Our method achieves a PSNR of 30.38dB, outperforming state-of-the-art methods by 13% on the GTA5 nighttime haze dataset.
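For intuition about the glow term this method suppresses, here is a sketch of the forward model: glow is the active-light map spread by an atmospheric point spread function (APSF). A Gaussian stands in for the true multiple-scattering APSF; the kernel width and strength are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_glow(image, light_mask, sigma=12.0, strength=0.6):
    """Render glow onto an (H, W, 3) image by blurring the (H, W) map of
    active light sources with an APSF-like kernel; glow suppression then
    amounts to estimating and removing this additive term."""
    glow = gaussian_filter(light_mask, sigma=sigma)   # APSF-like spread of lights
    return np.clip(image + strength * glow[..., None], 0.0, 1.0)
```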
arXiv Detail & Related papers (2023-08-03T12:58:23Z)
- NightHazeFormer: Single Nighttime Haze Removal Using Prior Query Transformer [39.90066556289063]
We propose an end-to-end transformer-based framework for nighttime haze removal, called NightHazeFormer.
Our proposed approach consists of two stages: supervised pre-training and semi-supervised fine-tuning.
Experiments on several synthetic and real-world datasets demonstrate the superiority of our NightHazeFormer over state-of-the-art nighttime haze removal methods.
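A minimal sketch of the two-stage recipe named above: supervised pre-training on synthetic hazy/clear pairs, then semi-supervised fine-tuning that mixes synthetic supervision with pseudo-labels on real hazy images. The mixing scheme and frozen teacher are assumptions, not NightHazeFormer's exact procedure.

```python
import copy
import torch

def two_stage_training(model, synthetic_pairs, real_hazy, loss_fn, optimizer):
    """Stage 1: supervised pre-training. Stage 2: fine-tune on a mix of
    synthetic pairs and pseudo-labeled real images."""
    for hazy, clear in synthetic_pairs:                   # stage 1: supervised
        optimizer.zero_grad()
        loss_fn(model(hazy), clear).backward()
        optimizer.step()
    teacher = copy.deepcopy(model).eval()                 # frozen pseudo-labeler
    for (hazy_s, clear_s), hazy_r in zip(synthetic_pairs, real_hazy):
        with torch.no_grad():
            pseudo = teacher(hazy_r)                      # pseudo-label for real image
        optimizer.zero_grad()
        loss = loss_fn(model(hazy_s), clear_s) + loss_fn(model(hazy_r), pseudo)
        loss.backward()
        optimizer.step()
```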
arXiv Detail & Related papers (2023-05-16T15:26:09Z)
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, resulting in more faithful and consistent estimation.
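Keeping observed texture fixed while a diffusion prior fills the rest resembles masked diffusion sampling; a sketch of that general idea follows. `denoise_step` and `add_noise` are hypothetical hooks for one reverse/forward diffusion step, and this mirrors the masked-sampling pattern rather than Relightify's exact sampler.

```python
import torch

@torch.no_grad()
def inpaint_texture(denoise_step, add_noise, observed, mask, steps=50):
    """Keep texels observed in the input image fixed (mask == 1) and let the
    diffusion prior fill in the unobserved region."""
    x = torch.randn_like(observed)
    for t in reversed(range(steps)):
        x = denoise_step(x, t)                    # reverse step on the full texture
        known = add_noise(observed, t)            # observed region at noise level t
        x = mask * known + (1.0 - mask) * x       # re-impose observed texels
    return x
```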
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
- Real-Time Radiance Fields for Single-Image Portrait View Synthesis [85.32826349697972]
We present a one-shot method to infer and render a 3D representation from a single unposed image in real-time.
Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering.
Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
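The canonical triplane representation mentioned above stores features on three axis-aligned planes; a sketch of the standard lookup follows. The layout and sum aggregation are common conventions, assumed here rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, points):
    """Project 3D points onto the XY/XZ/YZ feature planes, bilinearly sample,
    and sum. planes is (3, C, H, W); points is (N, 3) in [-1, 1]."""
    projections = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]
    feats = 0.0
    for plane, xy in zip(planes, projections):
        grid = xy.view(1, -1, 1, 2)                           # (1, N, 1, 2) grid
        sampled = F.grid_sample(plane[None], grid, align_corners=True)
        feats = feats + sampled.view(plane.shape[0], -1).t()  # (N, C) features
    return feats            # decoded downstream into density and color
```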
arXiv Detail & Related papers (2023-05-03T17:56:01Z)
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
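A sketch of how these two corrections could enter a photometric loss: a per-pixel affine intensity transform compensating lighting change, and a per-pixel residual flow refining the depth/ego-motion-induced warp. Shapes and the L1 form are an illustrative reading, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def adjusted_photometric_loss(target, warped_source, a, b, residual_flow):
    """Images are (B, 3, H, W); a, b are (B, 1, H, W) per-pixel intensity
    parameters; residual_flow is (B, H, W, 2) in normalized coordinates."""
    B, _, H, W = target.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)   # identity grid
    refined = F.grid_sample(warped_source, base + residual_flow,
                            align_corners=True)               # corrected correspondences
    return F.l1_loss(a * refined + b, target)                 # compensated photometric L1
```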
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.