Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction
- URL: http://arxiv.org/abs/2203.15325v1
- Date: Tue, 29 Mar 2022 08:11:04 GMT
- Title: Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction
- Authors: De Cheng, Yan Li, Dingwen Zhang, Nannan Wang, Xinbo Gao, Jiande Sun
- Abstract summary: We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
- Score: 95.5735805072852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image dehazing, as a fundamental low-level vision task, is
essential for the development of robust intelligent surveillance systems. In
this paper, we make an early effort to consider dehazing robustness under
variational haze density, a realistic yet under-studied problem in the
research field of single image dehazing. To properly address this problem, we
propose a novel density-variational learning framework that improves the
robustness of the image dehazing model with the help of a variety of negative
hazy images, so that it better handles complex hazy scenarios. Specifically,
the dehazing network is optimized under a consistency-regularized framework
with the proposed Contrast-Assisted Reconstruction Loss (CARL). CARL fully
exploits the negative information to complement the traditional
positive-oriented dehazing objective, squeezing the dehazed image toward its
clean target from different directions. Meanwhile, the consistency
regularization enforces consistent outputs for hazy images of different
density levels, further improving model robustness. Extensive experimental
results on two synthetic and three real-world datasets demonstrate that our
method significantly surpasses the state-of-the-art approaches.
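A rough sketch of the idea behind CARL is given below: a positive term pulls the dehazed output toward its clean target, while hazy negatives of varying density form a contrastive term that pushes it away. Everything in this snippet (pixel-space L1 distances, the weighting constant, the function name) is an assumption for illustration; the paper's exact formulation, e.g. the feature space used for the contrast, is not reproduced here.

    import torch.nn.functional as F

    def contrast_assisted_reconstruction_loss(dehazed, clean, negatives,
                                              weight=0.1, eps=1e-6):
        # dehazed, clean: (B, C, H, W); negatives: list of (B, C, H, W) hazy
        # images of different densities used as contrastive anchors.
        positive = F.l1_loss(dehazed, clean)          # pull toward the clean target
        negative = sum(F.l1_loss(dehazed, n) for n in negatives) / max(len(negatives), 1)
        contrastive = positive / (negative + eps)     # small when far from every negative
        return positive + weight * contrastive

    # Example usage (shapes assumed):
    # loss = contrast_assisted_reconstruction_loss(out, gt, [hazy_light, hazy_dense])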
Related papers
- WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning [17.129068060454255]
Single image dehazing is essential for applications such as autonomous driving and surveillance.
We propose an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform.
Our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods.
arXiv Detail & Related papers (2024-10-07T05:36:11Z)
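To make the wavelet-plus-contrastive idea above concrete, here is a minimal, assumed sketch of a wavelet-domain reconstruction term: a single-level Haar decomposition (hand-rolled, so no extra dependency) with the high-frequency sub-bands weighted more heavily. WTCL-Dehaze's actual network, contrastive pairs, and semi-supervised training are not reproduced.

    import torch.nn.functional as F

    def haar_dwt(x):
        # Single-level Haar DWT of a (B, C, H, W) tensor; H and W must be even.
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        ll = (a + b + c + d) / 2                 # low-frequency approximation
        highs = ((a - b + c - d) / 2,            # horizontal detail
                 (a + b - c - d) / 2,            # vertical detail
                 (a - b - c + d) / 2)            # diagonal detail
        return ll, highs

    def wavelet_recon_loss(pred, target, hf_weight=2.0):
        # Emphasize high-frequency sub-bands, where dehazing tends to blur detail.
        pl, ph = haar_dwt(pred)
        tl, th = haar_dwt(target)
        low = F.l1_loss(pl, tl)
        high = sum(F.l1_loss(p, t) for p, t in zip(ph, th)) / 3
        return low + hf_weight * high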
- Efficient One-Step Diffusion Refinement for Snapshot Compressive Imaging [8.819370643243012]
Coded Aperture Snapshot Spectral Imaging (CASSI) is a crucial technique for capturing three-dimensional multispectral images (MSIs).
Current state-of-the-art methods, predominantly end-to-end, face limitations in reconstructing high-frequency details.
This paper introduces a novel one-step Diffusion Probabilistic Model within a self-supervised adaptation framework for Snapshot Compressive Imaging.
arXiv Detail & Related papers (2024-09-11T17:02:10Z)
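For context on the CASSI setting mentioned above, the measurement model is usually a coded, wavelength-dependent shift-and-sum onto a 2D sensor. The NumPy sketch below simulates that forward operator; the shift axis, step size, and variable names are assumptions, and the paper's diffusion-based reconstruction itself is not shown.

    import numpy as np

    def cassi_forward(msi, mask, step=1):
        # msi: (H, W, L) multispectral cube; mask: (H, W) binary coded aperture.
        # Each band is masked, shifted along the width axis by a wavelength-dependent
        # offset, and summed into a single 2D snapshot measurement.
        H, W, L = msi.shape
        y = np.zeros((H, W + step * (L - 1)))
        for l in range(L):
            y[:, l * step:l * step + W] += msi[:, :, l] * mask
        return y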
- Unifying Correspondence, Pose and NeRF for Pose-Free Novel View Synthesis from Stereo Pairs [57.492124844326206]
This work delves into pose-free novel view synthesis from stereo pairs, a challenging and pioneering problem in 3D vision.
Our innovative framework, unlike any before, seamlessly integrates 2D correspondence matching, camera pose estimation, and NeRF rendering, fostering a synergistic enhancement of these tasks.
arXiv Detail & Related papers (2023-12-12T13:22:44Z)
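The entry above couples learned 2D correspondences with camera pose estimation and NeRF rendering. As a reference point only, the classical recipe that pose-from-correspondences builds on (essential matrix plus RANSAC, via OpenCV) looks roughly like this; it is not the paper's learned, end-to-end pipeline.

    import cv2

    def relative_pose_from_matches(pts1, pts2, K):
        # pts1, pts2: (N, 2) float arrays of matched pixel coordinates,
        # K: (3, 3) camera intrinsics shared by the stereo pair.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC, prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        # R, t are the rotation and unit-scale translation of the second camera
        # relative to the first; `inliers` flags the correspondences RANSAC kept.
        return R, t, inliers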
- Separate-and-Enhance: Compositional Finetuning for Text2Image Diffusion Models [58.46926334842161]
This work illuminates the fundamental reasons for misalignment between text prompts and generated images in text-to-image diffusion models, pinpointing issues related to low attention activation scores and mask overlaps.
We propose two novel objectives, the Separate loss and the Enhance loss, that reduce object mask overlaps and maximize attention scores.
Our method diverges from conventional test-time-adaptation techniques, focusing on finetuning critical parameters, which enhances scalability and generalizability.
arXiv Detail & Related papers (2023-12-10T22:07:42Z)
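The Separate and Enhance losses above act on the diffusion model's cross-attention maps. The sketch below only illustrates their general shape on generic per-object attention maps: one term penalizes spatial overlap, the other rewards strong activation. Names, normalizations, and the overlap measure are assumptions, not the paper's definitions.

    import torch

    def separate_loss(attn_a, attn_b):
        # attn_a, attn_b: (B, H, W) attention maps of two different objects.
        a = attn_a / (attn_a.sum(dim=(1, 2), keepdim=True) + 1e-6)
        b = attn_b / (attn_b.sum(dim=(1, 2), keepdim=True) + 1e-6)
        # Overlap of the two normalized maps; minimizing it separates the objects.
        return torch.minimum(a, b).sum(dim=(1, 2)).mean()

    def enhance_loss(attn_maps):
        # Encourage every object's map to reach a high peak activation somewhere.
        peaks = torch.stack([a.amax(dim=(1, 2)) for a in attn_maps], dim=0)
        return (1.0 - peaks).mean()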
- SCRNet: a Retinex Structure-based Low-light Enhancement Model Guided by Spatial Consistency [22.54951703413469]
We present a novel low-light image enhancement model, termed Spatial Consistency Retinex Network (SCRNet).
Our proposed model incorporates three levels of consistency: channel level, semantic level, and texture level, inspired by the principle of spatial consistency.
Extensive evaluations on various low-light image datasets demonstrate that our proposed SCRNet outshines existing state-of-the-art methods.
arXiv Detail & Related papers (2023-05-14T03:32:19Z)
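The three consistency levels mentioned for SCRNet could be expressed as losses roughly like the following sketch: per-channel statistics, features from a frozen backbone, and local gradients. This is only one plausible reading of "channel / semantic / texture" consistency; the paper's actual definitions are not reproduced.

    import torch.nn.functional as F

    def channel_consistency(pred, ref):
        # Channel level: match per-channel mean statistics of (B, C, H, W) images.
        return F.l1_loss(pred.mean(dim=(2, 3)), ref.mean(dim=(2, 3)))

    def semantic_consistency(feat_pred, feat_ref):
        # Semantic level: match feature maps from a frozen backbone (assumed given).
        return F.l1_loss(feat_pred, feat_ref)

    def texture_consistency(pred, ref):
        # Texture level: match horizontal and vertical image gradients.
        def grads(x):
            return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]
        (gx_p, gy_p), (gx_r, gy_r) = grads(pred), grads(ref)
        return F.l1_loss(gx_p, gx_r) + F.l1_loss(gy_p, gy_r)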
- Curricular Contrastive Regularization for Physics-aware Single Image Dehazing [56.392696439577165]
We propose a novel curricular contrastive regularization targeted at a consensual contrastive space as opposed to a non-consensual one.
Our negatives, which provide better lower-bound constraints, can be assembled from 1) the hazy image, and 2) corresponding restorations by other existing methods.
With the physics-aware dual-branch unit, as well as curricular contrastive regularization, we establish our dehazing network, named C2PNet.
arXiv Detail & Related papers (2023-03-24T18:18:25Z)
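The curricular contrastive regularization above differs from a plain contrastive loss mainly in where the negatives come from (the hazy input plus other methods' restorations) and in how they are weighted. A hedged sketch of that weighting is shown below; the difficulty weights and the distance used are assumptions, not the paper's formulation.

    import torch.nn.functional as F

    def curricular_contrastive_loss(pred, clean, negatives, weights, eps=1e-6):
        # negatives: restorations assembled from the hazy input and from other
        # existing dehazing methods; weights: per-negative curriculum weights
        # reflecting how hard each negative is.
        positive = F.l1_loss(pred, clean)
        negative = sum(w * F.l1_loss(pred, n) for w, n in zip(weights, negatives))
        negative = negative / (sum(weights) + eps)
        return positive / (negative + eps)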
- Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbone of our teacher and student networks.
arXiv Detail & Related papers (2023-02-23T06:16:15Z)
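One common way to realize the teacher-student coupling described above is an exponential-moving-average teacher plus a consistency loss, sketched below. Whether MAGE-Net's teacher is actually maintained by EMA is not stated in this summary, so treat the update rule as an assumption.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def update_teacher(student, teacher, momentum=0.999):
        # Exponential moving average of the student's weights.
        for p_s, p_t in zip(student.parameters(), teacher.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

    def consistency_loss(student_out, teacher_out):
        # The student's enhanced image is asked to agree with the teacher's.
        return F.l1_loss(student_out, teacher_out.detach())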
- Rich Feature Distillation with Feature Affinity Module for Efficient Image Dehazing [1.1470070927586016]
This work introduces a simple, lightweight, and efficient framework for single-image haze removal.
We exploit rich "dark-knowledge" information from a lightweight pre-trained super-resolution model via the notion of heterogeneous knowledge distillation.
Our experiments are carried out on the RESIDE-Standard dataset to demonstrate the robustness of our framework to the synthetic and real-world domains.
arXiv Detail & Related papers (2022-07-13T18:32:44Z)
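A feature-affinity module of the kind named above can be pictured as matching pairwise spatial similarities rather than raw features, which sidesteps the channel mismatch between a super-resolution teacher and a dehazing student. The sketch below is an assumed, generic version of that idea, not the paper's module.

    import torch
    import torch.nn.functional as F

    def feature_affinity(feat):
        # feat: (B, C, H, W). Returns a (B, HW, HW) matrix of pairwise spatial
        # similarities; its size is independent of the channel count C.
        f = F.normalize(feat.flatten(2), dim=1)   # unit-norm descriptor per location
        return torch.bmm(f.transpose(1, 2), f)

    def affinity_distillation_loss(student_feat, teacher_feat):
        # Student and teacher features must share spatial size, not channel count.
        return F.l1_loss(feature_affinity(student_feat), feature_affinity(teacher_feat))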
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
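For orientation, a conventional convolutional GRU cell (the structure the SC-GRU builds on) is sketched below; the paper's split-convolutional variant reorganizes the convolutions in a way this generic cell does not reproduce.

    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        # A plain convolutional GRU cell: gates and candidate state are computed
        # with 2D convolutions instead of dense layers.
        def __init__(self, in_channels, hidden_channels, kernel_size=3):
            super().__init__()
            padding = kernel_size // 2
            self.gates = nn.Conv2d(in_channels + hidden_channels,
                                   2 * hidden_channels, kernel_size, padding=padding)
            self.candidate = nn.Conv2d(in_channels + hidden_channels,
                                       hidden_channels, kernel_size, padding=padding)

        def forward(self, x, h):
            zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
            z, r = zr.chunk(2, dim=1)                      # update and reset gates
            h_new = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * h_new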