Contrastive Learning for Compact Single Image Dehazing
- URL: http://arxiv.org/abs/2104.09367v1
- Date: Mon, 19 Apr 2021 14:56:21 GMT
- Title: Contrastive Learning for Compact Single Image Dehazing
- Authors: Haiyan Wu, Yanyun Qu, Shaohui Lin, Jian Zhou, Ruizhi Qiao, Zhizhong
Zhang, Yuan Xie, Lizhuang Ma
- Abstract summary: We propose a novel contrastive regularization (CR) built upon contrastive learning to exploit the information of both hazy images and clear images as negative and positive samples, respectively.
CR ensures that the restored image is pulled closer to the clear image and pushed far away from the hazy image in the representation space.
Considering the trade-off between performance and memory storage, we develop a compact dehazing network based on an autoencoder-like framework.
- Score: 41.83007400559068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Single image dehazing is a challenging ill-posed problem due to severe information degeneration. However, existing deep learning based dehazing methods adopt only clear images as positive samples to guide the training of the dehazing network, while negative information goes unexploited. Moreover, most of them focus on strengthening the dehazing network by increasing its depth and width, leading to significant computation and memory requirements. In this paper, we propose a novel contrastive regularization (CR) built upon contrastive learning to exploit the information of hazy images and clear images as negative and positive samples, respectively. CR ensures that the restored image is pulled closer to the clear image and pushed far away from the hazy image in the representation space. Furthermore, considering the trade-off between performance and memory storage, we develop a compact dehazing network based on an autoencoder-like (AE) framework. It involves an adaptive mixup operation and a dynamic feature enhancement module, which benefit the network by preserving information flow adaptively and by expanding the receptive field to improve its transformation capability, respectively. We term our dehazing network with autoencoder and contrastive regularization AECR-Net. Extensive experiments on synthetic and real-world datasets demonstrate that our AECR-Net surpasses the state-of-the-art approaches. The code is released at https://github.com/GlassyWu/AECR-Net.
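As a concrete illustration of the CR term described in the abstract, below is a minimal PyTorch-style sketch, assuming a frozen, ImageNet-pretrained VGG-19 as the feature extractor and L1 distances in its feature space. The layer indices, layer weights, and the exact form of the pull/push ratio are illustrative assumptions rather than the paper's exact settings; the released code at https://github.com/GlassyWu/AECR-Net is the authoritative reference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights


class ContrastiveRegularization(nn.Module):
    """Pull the restored image toward the clear image (positive) and push it
    away from the hazy input (negative) in a frozen VGG-19 feature space.
    Layer indices and weights below are illustrative assumptions."""

    def __init__(self, layer_ids=(3, 8, 13, 22, 31),
                 weights=(1 / 32, 1 / 16, 1 / 8, 1 / 4, 1.0)):
        super().__init__()
        vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layer_ids = set(layer_ids)
        self.last_id = max(layer_ids)
        self.weights = weights

    def _features(self, x):
        # Inputs are assumed to already be normalized the way VGG expects.
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
            if i == self.last_id:
                break
        return feats

    def forward(self, restored, clear, hazy, eps=1e-7):
        anchor = self._features(restored)
        with torch.no_grad():                      # positives/negatives carry no gradient
            positive = self._features(clear)
            negative = self._features(hazy)
        loss = restored.new_zeros(())
        for w, a, p, n in zip(self.weights, anchor, positive, negative):
            d_ap = F.l1_loss(a, p)                 # distance to the clear image (pull)
            d_an = F.l1_loss(a, n)                 # distance to the hazy image (push)
            loss = loss + w * d_ap / (d_an + eps)
        return loss
```

In training, such a term would typically be added to a pixel-wise reconstruction loss, e.g. `loss = F.l1_loss(restored, clear) + beta * cr(restored, clear, hazy)` for some small weight `beta`; the weight used in the paper is not given in the abstract.

The abstract also mentions an adaptive mixup operation that fuses encoder and decoder features while preserving information flow. A plausible minimal form is a learnable, sigmoid-gated blend; the gate granularity (a single scalar per fusion point) is an assumption here, not the paper's exact design.

```python
class AdaptiveMixup(nn.Module):
    """Learnable gate that blends a shallow (downsampling) feature with a deep
    (upsampling) feature; the scalar sigmoid gate is an assumed simplification."""

    def __init__(self, init=0.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(init))

    def forward(self, shallow_feat, deep_feat):
        gate = torch.sigmoid(self.theta)
        return gate * shallow_feat + (1.0 - gate) * deep_feat
```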
Related papers
- DRACO-DehazeNet: An Efficient Image Dehazing Network Combining Detail Recovery and a Novel Contrastive Learning Paradigm [3.649619954898362]
Detail Recovery And Contrastive DehazeNet is a detailed image recovery network that tailors enhancements to specific dehazed scene contexts.
A major innovation is its ability to train effectively with limited data, achieved through a novel quadruplet loss-based contrastive dehazing paradigm.
arXiv Detail & Related papers (2024-10-18T16:48:31Z) - WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning [17.129068060454255]
Single image dehazing is essential for applications such as autonomous driving and surveillance.
We propose an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform.
Our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods.
arXiv Detail & Related papers (2024-10-07T05:36:11Z) - AoSRNet: All-in-One Scene Recovery Networks via Multi-knowledge
Integration [17.070755601209136]
We propose an all-in-one scene recovery network via multi-knowledge integration (termed AoSRNet).
It combines gamma correction (GC) and optimized linear stretching (OLS) to create the detail enhancement module (DEM) and color restoration module (CRM).
Comprehensive experimental results demonstrate the effectiveness and stability of AoSRNet compared to other state-of-the-art methods.
arXiv Detail & Related papers (2024-02-06T06:12:03Z) - HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - RBSR: Efficient and Flexible Recurrent Network for Burst
Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z) - Rethinking Performance Gains in Image Dehazing Networks [25.371802581339576]
We make minimal modifications to the popular U-Net to obtain a compact dehazing network, gUNet.
Specifically, we swap out the convolutional blocks in U-Net for residual blocks with a gating mechanism (a minimal sketch of one such gated block appears after this list).
With a significantly reduced overhead, gUNet is superior to state-of-the-art methods on multiple image dehazing datasets.
arXiv Detail & Related papers (2022-09-23T07:14:48Z) - Cross-receptive Focused Inference Network for Lightweight Image
Super-Resolution [64.25751738088015]
Transformer-based methods have shown impressive performance in single image super-resolution (SISR) tasks.
However, Transformers' need to incorporate contextual information in order to extract features dynamically is neglected.
We propose a lightweight Cross-receptive Focused Inference Network (CFIN) that consists of a cascade of CT Blocks mixing CNN and Transformer components.
arXiv Detail & Related papers (2022-07-06T16:32:29Z) - Single Image Dehazing with An Independent Detail-Recovery Network [117.86146907611054]
We propose a single image dehazing method with an independent Detail Recovery Network (DRN).
The DRN aims to recover the dehazed image details through local and global branches.
Our method outperforms the state-of-the-art dehazing methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-09-22T02:49:43Z) - Image deblurring based on lightweight multi-information fusion network [6.848061582669787]
We propose a lightweight multi-information fusion network (LMFN) for image deblurring.
In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion.
Then, a distillation network is used in the decoding stage, which allows the network to benefit the most from residual learning.
Our network achieves state-of-the-art image deblurring results with a smaller number of parameters and outperforms existing methods in model complexity.
arXiv Detail & Related papers (2021-01-14T00:37:37Z)