SCANet: Self-Paced Semi-Curricular Attention Network for Non-Homogeneous
Image Dehazing
- URL: http://arxiv.org/abs/2304.08444v1
- Date: Mon, 17 Apr 2023 17:05:29 GMT
- Title: SCANet: Self-Paced Semi-Curricular Attention Network for Non-Homogeneous
Image Dehazing
- Authors: Yu Guo, Yuan Gao, Ryan Wen Liu, Yuxu Lu, Jingxiang Qu, Shengfeng He,
Wenqi Ren
- Abstract summary: Existing homogeneous dehazing methods struggle to handle the non-uniform distribution of haze in a robust manner.
We propose a novel self-paced semi-curricular attention network, called SCANet, for non-homogeneous image dehazing.
Our approach consists of an attention generator network and a scene reconstruction network.
- Score: 56.900964135228435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The presence of non-homogeneous haze can cause scene blurring, color
distortion, low contrast, and other degradations that obscure texture details.
Existing homogeneous dehazing methods struggle to handle the non-uniform
distribution of haze in a robust manner. The crucial challenge of
non-homogeneous dehazing is to effectively extract the non-uniform distribution
features and reconstruct the details of hazy areas with high quality. In this
paper, we propose a novel self-paced semi-curricular attention network, called
SCANet, for non-homogeneous image dehazing that focuses on enhancing
haze-occluded regions. Our approach consists of an attention generator network
and a scene reconstruction network. We use the luminance differences of images
to restrict the attention map and introduce a self-paced semi-curricular
learning strategy to reduce learning ambiguity in the early stages of training.
Extensive quantitative and qualitative experiments demonstrate that our SCANet
outperforms many state-of-the-art methods. The code is publicly available at
https://github.com/gy65896/SCANet.
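The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch illustration of the luminance-difference constraint, not the authors' implementation: an attention generator (the module name, layer sizes, and L1 supervision below are assumptions) predicts a haze-attention map that is pulled toward the normalized luminance difference between the hazy input and its clean counterpart.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def luminance(img):
    # BT.601 luma from an RGB tensor of shape (B, 3, H, W)
    r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

def luminance_attention_target(hazy, clean):
    # Haze-occluded regions deviate most in luminance from the clean scene;
    # the normalized absolute difference acts as a pseudo attention label.
    diff = torch.abs(luminance(hazy) - luminance(clean))
    return diff / (diff.amax(dim=(2, 3), keepdim=True) + 1e-8)

class AttentionGenerator(nn.Module):
    # Hypothetical stand-in for the attention generator network: it predicts
    # a single-channel map highlighting haze-occluded regions.
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# One training step of the attention branch (sketch): the predicted map is
# restricted by the luminance-difference target, as the abstract describes.
hazy, clean = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
generator = AttentionGenerator()
attention = generator(hazy)
loss_attn = F.l1_loss(attention, luminance_attention_target(hazy, clean))
```

The self-paced semi-curricular strategy would, in the same spirit, reweight per-sample losses during early training, but its exact schedule is specific to the paper and is not reproduced here.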
Related papers
- DRACO-DehazeNet: An Efficient Image Dehazing Network Combining Detail Recovery and a Novel Contrastive Learning Paradigm [3.649619954898362]
Detail Recovery And Contrastive DehazeNet is a detailed image recovery network that tailors enhancements to specific dehazed scene contexts.
A major innovation is its ability to train effectively with limited data, achieved through a novel quadruplet loss-based contrastive dehazing paradigm.
arXiv Detail & Related papers (2024-10-18T16:48:31Z)
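The quadruplet loss is only named here, so the snippet below is a rough guess at what a quadruplet-style contrastive dehazing objective could look like; the flattened-image feature space, margin value, and choice of negatives are all assumptions rather than the paper's definition.

```python
import torch
import torch.nn.functional as F

def quadruplet_contrastive_loss(dehazed, clean, hazy, extra_neg, margin=0.2):
    # Hypothetical quadruplet formulation: pull the dehazed anchor toward
    # the clean positive while pushing it away from two negatives (the hazy
    # input and a second degraded sample) in a flattened feature space.
    a = F.normalize(dehazed.flatten(1), dim=1)
    p = F.normalize(clean.flatten(1), dim=1)
    n1 = F.normalize(hazy.flatten(1), dim=1)
    n2 = F.normalize(extra_neg.flatten(1), dim=1)
    d_ap = (a - p).pow(2).sum(dim=1)
    d_an1 = (a - n1).pow(2).sum(dim=1)
    d_an2 = (a - n2).pow(2).sum(dim=1)
    return (F.relu(d_ap - d_an1 + margin) + F.relu(d_ap - d_an2 + margin)).mean()

out, gt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
haze_in, other_neg = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss = quadruplet_contrastive_loss(out, gt, haze_in, other_neg)
```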
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, helping to enhance images with natural colors.
We also propose a novel perceptive loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer texture details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
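The mean-teacher part of such a framework follows a standard recipe that can be sketched directly; the decay value and the toy model below are placeholders, not Semi-LLIE's actual architecture.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Mean-teacher rule: teacher weights track an exponential moving average
    # of the student, giving stable pseudo-targets for the unpaired images.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1 - decay)

student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))  # toy enhancer
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is never updated by the optimizer

# After each optimizer step on the supervised + consistency losses:
ema_update(teacher, student)
```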
- Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z)
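As a loose illustration of refining learnable queries against image features, the hypothetical module below uses stock cross-attention layers; the name, dimensions, and use of nn.TransformerDecoderLayer are assumptions, not the CFLD design.

```python
import torch
import torch.nn as nn

class PerceptionRefinedDecoder(nn.Module):
    # Hypothetical sketch: a fixed set of learnable queries is progressively
    # refined by attending to image features, yielding coarse-grained prompt
    # tokens that summarize the person image.
    def __init__(self, num_queries=16, dim=256, num_layers=3):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, image_feats):  # image_feats: (B, N, dim)
        q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        for layer in self.layers:
            q = layer(q, image_feats)  # queries cross-attend to the image
        return q  # (B, num_queries, dim) coarse-grained prompt

feats = torch.rand(2, 196, 256)  # e.g. a flattened 14x14 feature map
prompt = PerceptionRefinedDecoder()(feats)
```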
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, unsupervised learning on diffusion-generated images remains underexplored.
We introduce customized solutions that fully exploit these free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- Restoring Vision in Hazy Weather with Hierarchical Contrastive Learning [53.85892601302974]
We propose an effective image dehazing method named Hierarchical Contrastive Dehazing (HCD).
HCD consists of a hierarchical dehazing network (HDN) and a novel hierarchical contrastive loss (HCL).
arXiv Detail & Related papers (2022-12-22T03:57:06Z)
- Towards Homogeneous Modality Learning and Multi-Granularity Information Exploration for Visible-Infrared Person Re-Identification [16.22986967958162]
Visible-infrared person re-identification (VI-ReID) is a challenging and essential task, which aims to retrieve a set of person images over visible and infrared camera views.
Previous methods attempt to apply generative adversarial networks (GANs) to generate modality-consistent data.
In this work, we address the cross-modality matching problem with Aligned Grayscale Modality (AGM), a unified dark-line spectrum that reformulates visible-infrared dual-mode learning as a gray-gray single-mode learning problem.
arXiv Detail & Related papers (2022-04-11T03:03:19Z)
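The core AGM idea of collapsing both modalities into one gray spectrum can be conveyed in a few lines; the BT.601 luma weights below are an illustrative choice, and the paper's actual alignment module is more involved.

```python
import torch

def to_aligned_grayscale(visible_rgb, infrared):
    # Sketch of the AGM idea: project the visible RGB image into a single
    # gray channel so that both modalities live in one dark-line spectrum;
    # the infrared input is treated as already single-channel. Matching then
    # becomes a gray-to-gray, single-modality problem.
    r, g, b = visible_rgb[:, 0:1], visible_rgb[:, 1:2], visible_rgb[:, 2:3]
    visible_gray = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma weights
    return visible_gray, infrared  # both shaped (B, 1, H, W)

vis, ir = torch.rand(4, 3, 288, 144), torch.rand(4, 1, 288, 144)
vis_gray, ir_gray = to_aligned_grayscale(vis, ir)
```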
- PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition [17.008724191799313]
Intrinsic image decomposition is the process of recovering the image formation components (reflectance and shading) from an image.
In this paper, an end-to-end edge-driven hybrid CNN approach is proposed for intrinsic image decomposition.
arXiv Detail & Related papers (2022-03-30T20:46:15Z)
- You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network [63.2086502120071]
We study how to make deep learning achieve image dehazing without training on ground-truth clean images (unsupervised) or an image collection (untrained).
An unsupervised neural network avoids the labor-intensive collection of hazy-clean image pairs, and an untrained model is a "real" single image dehazing approach.
Motivated by the layer disentanglement idea, we propose a novel method, called You Only Look Yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing.
arXiv Detail & Related papers (2020-06-30T14:05:47Z)
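The layer disentanglement rests on the classical atmospheric scattering model I = J*t + A*(1-t), which is standard; the self-supervised recomposition check below is a sketch, with tensor shapes and the L1 reconstruction loss as illustrative choices.

```python
import torch
import torch.nn.functional as F

def recompose_hazy(J, t, A):
    # Koschmieder scattering model: I = J * t + A * (1 - t), with scene
    # radiance J, transmission map t, and global airlight A.
    return J * t + A * (1 - t)

# YOLY-style self-supervision (sketch): disentangle one hazy image into
# three layers and require that they recompose the observed input, so no
# clean ground truth and no external training collection are needed.
I = torch.rand(1, 3, 64, 64)                        # observed hazy image
J = torch.rand(1, 3, 64, 64, requires_grad=True)    # predicted clean scene
t = torch.rand(1, 1, 64, 64, requires_grad=True)    # predicted transmission
A = torch.rand(1, 3, 1, 1, requires_grad=True)      # predicted airlight
recon_loss = F.l1_loss(recompose_hazy(J, t, A), I)
recon_loss.backward()  # gradients flow to all three disentangled layers
```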
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.