ClassLIE: Structure- and Illumination-Adaptive Classification for
Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2312.13265v1
- Date: Wed, 20 Dec 2023 18:43:20 GMT
- Title: ClassLIE: Structure- and Illumination-Adaptive Classification for
Low-Light Image Enhancement
- Authors: Zixiang Wei, Yiting Wang, Lichao Sun, Athanasios V. Vasilakos, Lin
Wang
- Abstract summary: This paper proposes a novel framework, called ClassLIE, that combines the potential of CNNs and transformers.
It classifies and adaptively learns the structural and illumination information from the low-light images in a holistic and regional manner.
Experiments on five benchmark datasets consistently show our ClassLIE achieves new state-of-the-art performance.
- Score: 17.51201873607536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light images often suffer from limited visibility and multiple types of
degradation, rendering low-light image enhancement (LIE) a non-trivial task.
Recent endeavors have enhanced low-light images using convolutional neural
networks (CNNs). However, they are inefficient at learning the structural
information and the diverse illumination levels in local regions of an image.
Consequently, the enhanced results are affected by
unexpected artifacts, such as unbalanced exposure, blur, and color bias. To
this end, this paper proposes a novel framework, called ClassLIE, that combines
the potential of CNNs and transformers. It classifies and adaptively learns the
structural and illumination information from the low-light images in a holistic
and regional manner, thus showing better enhancement performance. Our framework
first employs a structure and illumination classification (SIC) module to learn
the degradation information adaptively. In SIC, we decompose an input image
into an illumination map and a reflectance map. A class prediction block is
then designed to classify the degradation information by computing structural
similarity scores on the reflectance map and the mean squared error on the
illumination map. As such, each input image can be divided into patches with
three enhancement difficulty levels. Then, a feature learning and fusion (FLF)
module is proposed to adaptively learn the feature information with CNNs for
different enhancement difficulty levels while learning the long-range
dependencies for the patches in a holistic manner. Experiments on five
benchmark datasets consistently show our ClassLIE achieves new state-of-the-art
performance, with 25.74 PSNR and 0.92 SSIM on the LOL dataset.
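A rough, hypothetical sketch of the SIC scoring described above (not the authors' code): a crude single-scale Retinex split stands in for the paper's learned decomposition, each patch is scored with SSIM on the reflectance and MSE on the illumination against a paired normal-light reference, and the patch size and difficulty thresholds are invented for illustration.

# Hypothetical SIC-style patch classification; assumes float RGB images in
# [0, 1] with shape (H, W, 3) and a paired normal-light reference image.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def decompose(img):
    """Crude Retinex split: illumination = blurred max channel, R = I / L."""
    illum = gaussian_filter(img.max(axis=2), sigma=15)
    refl = img / (illum[..., None] + 1e-6)
    return illum, refl

def classify_patches(low, ref, patch=64):
    """Bucket each patch into one of three enhancement-difficulty levels."""
    illum_l, refl_l = decompose(low)
    illum_r, refl_r = decompose(ref)
    levels = {}
    for y in range(0, low.shape[0] - patch + 1, patch):
        for x in range(0, low.shape[1] - patch + 1, patch):
            sl = (slice(y, y + patch), slice(x, x + patch))
            a, b = refl_l[sl].mean(axis=2), refl_r[sl].mean(axis=2)
            struct = ssim(a, b, data_range=max(b.max() - b.min(), 1e-6))
            mse = np.mean((illum_l[sl] - illum_r[sl]) ** 2)
            score = (1.0 - struct) + mse        # higher = harder to enhance
            levels[(y, x)] = 0 if score < 0.2 else (1 if score < 0.5 else 2)
    return levels

In the paper both the decomposition and the class prediction block are learned, so this sketch is only meant to make the SSIM/MSE scoring and the three-level split concrete.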
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, helping to enhance images with natural colors.
We also propose a novel perceptual loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details.
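The mean-teacher scheme Semi-LLIE builds on is a well-known semi-supervised recipe; below is a minimal, generic sketch of the teacher update it relies on (not the authors' code; the decay value and stand-in network are illustrative).

import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """teacher <- decay * teacher + (1 - decay) * student, parameter-wise."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

# The teacher starts as a frozen copy of the student enhancement network;
# its predictions on unpaired low-light images supervise the student, and it
# is refreshed after every optimizer step on the student.
student = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in for the enhancer
teacher = copy.deepcopy(student).requires_grad_(False)
ema_update(teacher, student)                    # call once per training step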
arXiv Detail & Related papers (2024-09-25T04:05:32Z) - RSEND: Retinex-based Squeeze and Excitation Network with Dark Region Detection for Efficient Low Light Image Enhancement [1.7356500114422735]
We propose RSEND, a more accurate, concise, one-stage Retinex-theory-based framework.
RSEND first divides the low-light image into an illumination map and a reflectance map, then captures the important details in the illumination map and performs light enhancement.
Our Efficient Retinex model significantly outperforms other CNN-based models, achieving PSNR improvements ranging from 0.44 dB to 4.2 dB across different datasets.
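For context, the squeeze-and-excitation block that RSEND's name refers to is a standard channel-attention layer; a generic PyTorch form is sketched below (not the paper's exact layer; the reduction ratio is illustrative).

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by their global statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(                     # excite: channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # emphasise useful channels

feats = torch.randn(2, 64, 32, 32)                   # e.g. illumination features
out = SEBlock(64)(feats)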
arXiv Detail & Related papers (2024-06-14T01:36:52Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
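CodeEnhance's title points to a learned codebook; the generic vector-quantization lookup usually behind such "quantized priors" is sketched below (not the paper's actual codebook, training objective, or sizes).

import torch

def quantize(features, codebook):
    """Replace each feature vector with its nearest codebook entry.
    features: (N, D) tensor; codebook: (K, D) learned prior vectors."""
    dists = torch.cdist(features, codebook)     # (N, K) pairwise distances
    idx = dists.argmin(dim=1)                   # nearest codeword per feature
    return codebook[idx], idx

feats = torch.randn(16, 256)                    # e.g. flattened encoder tokens
codebook = torch.randn(512, 256)                # K = 512 prior entries
quantized, indices = quantize(feats, codebook)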
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Low-Light Image Enhancement with Illumination-Aware Gamma Correction and
Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose using a Taylor series to approximate gamma correction.
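The approximation mentioned here is easy to check numerically; a small sketch follows (the expansion order and gamma value are illustrative, not the paper's settings).

import math
import numpy as np

def gamma_taylor(x, gamma, order=6):
    """Approximate x**gamma = exp(gamma * ln x) by truncating the Taylor
    series of exp, avoiding the direct exponentiation."""
    z = gamma * np.log(x)
    return sum(z**k / math.factorial(k) for k in range(order + 1))

x = np.linspace(0.05, 1.0, 5)        # normalised pixel intensities
print(x ** 0.45)                     # exact gamma correction
print(gamma_taylor(x, 0.45))         # truncated-series approximation

The truncation error grows with gamma * |ln x|, i.e. for very dark pixels, so the usable order depends on the intensity range being corrected.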
arXiv Detail & Related papers (2023-08-16T08:46:51Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - Retinexformer: One-stage Retinex-based Transformer for Low-light Image
Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z) - LR-Net: A Block-based Convolutional Neural Network for Low-Resolution
Image Classification [0.0]
We develop a novel image classification architecture, composed of blocks designed to learn both low-level and global features from noisy, low-resolution images.
Our design of the blocks was heavily influenced by Residual Connections and Inception modules in order to increase performance and reduce parameter sizes.
We have performed in-depth tests that demonstrate the presented architecture is faster and more accurate than existing cutting-edge convolutional neural networks.
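As an illustration of the block design the summary describes, Inception-style parallel branches wrapped in a residual connection, here is a hypothetical PyTorch block (branch widths and kernel sizes are invented, not taken from the paper).

import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    """Parallel multi-scale convolutions fused by a 1x1 conv, plus a skip."""
    def __init__(self, channels):
        super().__init__()
        self.b1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.b3 = nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(channels, channels // 4, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        branches = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return self.act(x + self.fuse(branches))    # residual connection

x = torch.randn(1, 64, 32, 32)                      # low-resolution feature map
y = InceptionResBlock(64)(x)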
arXiv Detail & Related papers (2022-07-19T20:01:11Z) - PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image
Decomposition [17.008724191799313]
Intrinsic image decomposition is the process of recovering the image formation components (reflectance and shading) from an image.
In this paper, an end-to-end edge-driven hybrid CNN approach is proposed for intrinsic image decomposition.
arXiv Detail & Related papers (2022-03-30T20:46:15Z) - Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)