Structure Representation Network and Uncertainty Feedback Learning for
Dense Non-Uniform Fog Removal
- URL: http://arxiv.org/abs/2210.03061v1
- Date: Thu, 6 Oct 2022 17:10:57 GMT
- Title: Structure Representation Network and Uncertainty Feedback Learning for
Dense Non-Uniform Fog Removal
- Authors: Yeying Jin, Wending Yan, Wenhan Yang, Robby T. Tan
- Abstract summary: We introduce a structure-representation network with uncertainty feedback learning.
Specifically, we extract the feature representations from a pre-trained Vision Transformer (DINO-ViT) module to recover the background information.
To handle the intractability of estimating the atmospheric light colors, we exploit the grayscale version of our input image.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few existing image defogging or dehazing methods consider dense and
non-uniform particle distributions, which usually happen in smoke, dust and
fog. Dealing with these dense and/or non-uniform distributions can be
intractable, since fog's attenuation and airlight (or veiling effect)
significantly weaken the background scene information in the input image. To
address this problem, we introduce a structure-representation network with
uncertainty feedback learning. Specifically, we extract the feature
representations from a pre-trained Vision Transformer (DINO-ViT) module to
recover the background information. To guide our network to focus on
non-uniform fog areas and remove the fog accordingly, we introduce uncertainty
feedback learning, which produces uncertainty maps with higher uncertainty in
denser fog regions; these maps can be regarded as attention maps that represent
the fog's density and uneven distribution. Based on the uncertainty map, our
feedback network iteratively refines the defogged output.
Moreover, to handle the intractability of estimating the atmospheric light
colors, we exploit the grayscale version of our input image, since it is less
affected by varying light colors that are possibly present in the input image.
The experimental results demonstrate the effectiveness of our method both
quantitatively and qualitatively compared to the state-of-the-art methods in
handling dense and non-uniform fog or smoke.
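
The uncertainty-feedback loop described in the abstract can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the paper's actual network: a hand-crafted proxy (pixels whose values are close to the airlight are treated as dense fog) stands in for the learned uncertainty map, and the refinement step simply moves the current output toward a reference image, weighted by that map. The names `clean`, `foggy`, `airlight`, `estimate_uncertainty`, and `refine` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
airlight = 0.95  # assumed scalar airlight for this toy example

def estimate_uncertainty(output):
    """Toy proxy for the learned uncertainty map: pixels close to the
    airlight value are likely dense fog, so they get high uncertainty."""
    return np.clip(1.0 - np.abs(output - airlight), 0.0, 1.0)

def refine(output, reference, uncertainty, step=0.5):
    """One feedback iteration: the uncertainty map acts as attention,
    so the update is largest in high-uncertainty (denser-fog) regions."""
    return output + step * uncertainty * (reference - output)

# Synthetic single-channel scene with non-uniform (left-to-right) fog,
# composited with the standard haze model I = J*t + A*(1 - t).
clean = rng.uniform(0.2, 0.8, size=(8, 8))
fog_density = np.linspace(0.1, 0.9, 8)[None, :] * np.ones((8, 8))
foggy = clean * (1 - fog_density) + airlight * fog_density

output = foggy.copy()
for _ in range(5):  # iterative uncertainty-guided refinement
    u = estimate_uncertainty(output)
    # In the paper a network predicts the update; here we move toward `clean`.
    output = refine(output, clean, u)

print(float(np.abs(foggy - clean).mean()), float(np.abs(output - clean).mean()))
```

Because the update factor `step * uncertainty` stays below 1, each iteration shrinks the per-pixel error, and it shrinks fastest exactly where the proxy map flags dense fog, mimicking how the attention-like uncertainty map concentrates refinement on the hardest regions.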
Related papers
- Inhomogeneous illumination image enhancement under extremely low visibility condition [3.534798835599242]
Imaging through dense fog presents unique challenges, with essential visual information crucial for applications like object detection and recognition obscured, thereby hindering conventional image processing methods.
In this paper, we introduce a novel method that adaptively filters background illumination based on Structural Differential and Integral Filtering (F) to enhance only vital signal information.
Our findings demonstrate that our proposed method significantly enhances signal clarity under extremely low visibility conditions and outperforms existing techniques, offering substantial improvements for deep fog imaging applications.
arXiv Detail & Related papers (2024-04-26T16:09:42Z) - Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and
Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) with infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used for the adjustment of light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain the fused image to normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z) - CFDNet: A Generalizable Foggy Stereo Matching Network with Contrastive
Feature Distillation [11.655465312241699]
We introduce a framework based on contrastive feature distillation (CFD).
This strategy combines feature distillation from merged clean-fog features with contrastive learning, ensuring balanced dependence on fog depth hints and clean matching features.
arXiv Detail & Related papers (2024-02-28T09:12:01Z) - Decomposition-based and Interference Perception for Infrared and Visible
Image Fusion in Complex Scenes [4.919706769234434]
We propose a decomposition-based and interference perception image fusion method.
We classify the pixels of the visible image according to the degree of scattering of light transmission, and on that basis we separate the detail and energy information of the image.
This refined decomposition helps the proposed model identify more interfering pixels in complex scenes.
arXiv Detail & Related papers (2024-02-03T09:27:33Z) - DHFormer: A Vision Transformer-Based Attention Module for Image Dehazing [0.0]
Images acquired in hazy conditions suffer from haze-induced degradations.
Prior-based and learning-based approaches have been proposed to mitigate the effect of haze and generate haze-free images.
In this paper, a method that uses residual learning and vision transformers in an attention module is proposed.
arXiv Detail & Related papers (2023-12-15T17:05:32Z) - SCANet: Self-Paced Semi-Curricular Attention Network for Non-Homogeneous
Image Dehazing [56.900964135228435]
Existing homogeneous dehazing methods struggle to handle the non-uniform distribution of haze in a robust manner.
We propose a novel self-paced semi-curricular attention network, called SCANet, for non-homogeneous image dehazing.
Our approach consists of an attention generator network and a scene reconstruction network.
arXiv Detail & Related papers (2023-04-17T17:05:29Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Learning to restore images degraded by atmospheric turbulence using
uncertainty [93.72048616001064]
Atmospheric turbulence can significantly degrade the quality of images acquired by long-range imaging systems.
We propose a deep learning-based approach for restoring a single image degraded by atmospheric turbulence.
arXiv Detail & Related papers (2022-07-07T17:24:52Z) - Both Style and Fog Matter: Cumulative Domain Adaptation for Semantic
Foggy Scene Understanding [63.99301797430936]
We propose a new pipeline to cumulatively adapt style, fog, and the dual factor (style and fog).
Specifically, we devise a unified framework that first disentangles the style factor and the fog factor separately, and then the dual factor, from images in different domains.
Our method achieves the state-of-the-art performance on three benchmarks and shows generalization ability in rainy and snowy scenes.
arXiv Detail & Related papers (2021-12-01T13:21:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.