Rethinking Theoretical Illumination for Efficient Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2409.05274v4
- Date: Sat, 09 Aug 2025 08:45:18 GMT
- Title: Rethinking Theoretical Illumination for Efficient Low-Light Image Enhancement
- Authors: Shyang-En Weng, Cheng-Yen Hsiao, Li-Wei Lu, Yu-Shen Huang, Tzu-Han Chen, Shaou-Gang Miaou, Ricky Christanto
- Abstract summary: This article introduces an extended version of the Channel-Prior and Gamma-Estimation Network (CPGA-Net), termed CPGA-Net+. We introduce both an ultra-lightweight and a stronger version, following the same design principles. Our proposed methods have been validated as effective compared to recent lightweight approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enhancing low-light images remains a critical challenge in computer vision, as does designing lightweight models for edge devices that can handle the computational demands of deep learning. This article introduces an extended version of the Channel-Prior and Gamma-Estimation Network (CPGA-Net), termed CPGA-Net+, incorporating the theoretically-based Attentions for illumination in local and global processing. Additionally, we assess our approach through a theoretical analysis of the block design by introducing both an ultra-lightweight and a stronger version, following the same design principles. The lightweight version significantly reduces computational costs by over two-thirds by utilizing the local branch as an auxiliary component. Meanwhile, the stronger version achieves an impressive balance by maximizing local and global processing capabilities. Our proposed methods have been validated as effective compared to recent lightweight approaches, offering superior performance and scalable solutions with limited computational resources.
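The abstract's gamma-estimation idea rests on standard gamma correction for brightening dark regions. The following is a minimal illustrative sketch only: CPGA-Net+ *estimates* gamma from image content with a network, whereas the fixed value and function name here are assumptions of ours, not the paper's method.

```python
import numpy as np

def gamma_enhance(image, gamma=0.45):
    """Brighten a low-light image with a per-pixel gamma curve.

    `image` is a float array in [0, 1]; gamma < 1 lifts dark regions
    while leaving bright pixels comparatively unchanged. This fixed
    gamma is illustrative; CPGA-Net(+) estimates it from the image.
    """
    image = np.clip(image, 0.0, 1.0)
    return image ** gamma

# A dark pixel (0.1) is lifted far more than a bright one (0.9):
dark, bright = gamma_enhance(np.array([0.1, 0.9]))
```

Note how the curve compresses toward the top of the range: the dark pixel roughly triples in value while the bright one barely moves, which is why gamma curves with exponent below 1 are a common backbone for low-light enhancement.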
Related papers
- PuriLight: A Lightweight Shuffle and Purification Framework for Monocular Depth Estimation [15.413017422345545]
PuriLight is a framework for self-supervised monocular depth estimation. It addresses the dual challenges of computational efficiency and detail preservation. PuriLight achieves state-of-the-art performance with minimal training parameters.
arXiv Detail & Related papers (2026-02-11T17:35:21Z) - Low-Light Enhancement via Encoder-Decoder Network with Illumination Guidance [0.0]
This paper introduces a novel deep learning framework for low-light image enhancement, named the Encoder-Decoder Network with Illumination Guidance (EDNIG). EDNIG integrates an illumination map, derived from the Bright Channel Prior (BCP), as a guidance input. It is optimized within a Generative Adversarial Network (GAN) framework using a composite loss function that combines adversarial loss, pixel-wise mean squared error (MSE), and perceptual loss.
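The Bright Channel Prior mentioned above is a simple, well-defined operation: take the maximum over color channels at each pixel, then a local maximum filter, yielding a rough illumination map. Here is a generic BCP sketch; EDNIG's exact pre-processing (patch size, normalization) may differ, and the function name is ours.

```python
import numpy as np

def bright_channel(image, patch=7):
    """Bright Channel Prior: per-pixel max over RGB, then a local max
    over patch x patch neighbourhoods (a rough illumination map).

    `image`: H x W x 3 float array in [0, 1].
    """
    chan_max = image.max(axis=2)          # max over color channels
    h, w = chan_max.shape
    pad = patch // 2
    padded = np.pad(chan_max, pad, mode="edge")
    out = np.empty_like(chan_max)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out

# Example: a mostly dark image with one bright green pixel
img = np.zeros((16, 16, 3))
img[8, 8, 1] = 1.0
bcp = bright_channel(img)
```

The bright pixel spreads to its whole 7x7 neighbourhood in the map, while distant regions stay dark, which is what makes the BCP usable as spatial illumination guidance.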
arXiv Detail & Related papers (2025-07-04T09:35:00Z) - Latent Wavelet Diffusion For Ultra-High-Resolution Image Synthesis [56.311477476580926]
We present Latent Wavelet Diffusion (LWD), a lightweight training framework that significantly improves detail and texture fidelity in ultra-high-resolution (2K-4K) image synthesis. LWD introduces a novel, frequency-aware masking strategy derived from wavelet energy maps, which dynamically focuses the training process on detail-rich regions of the latent space.
arXiv Detail & Related papers (2025-05-31T07:28:32Z) - Entropy-Driven Genetic Optimization for Deep-Feature-Guided Low-Light Image Enhancement [1.0428401220897083]
We propose a novel, unsupervised, fuzzy-inspired image enhancement framework guided by the NSGA-II algorithm. We use a GPU-accelerated NSGA-II implementation that balances multiple objectives, namely increasing image entropy, improving perceptual similarity, and maintaining appropriate brightness. Our model achieves excellent performance, with average BRISQUE and NIQE scores of 19.82 and 3.652, respectively, across all unpaired datasets.
arXiv Detail & Related papers (2025-05-16T13:40:56Z) - Striving for Faster and Better: A One-Layer Architecture with Auto Re-parameterization for Low-Light Image Enhancement [50.93686436282772]
We aim to delve into the limits of image enhancers both from visual quality and computational efficiency.
By rethinking the task demands, we build an explicit connection: visual quality and computational efficiency correspond to model learning and structure design, respectively.
Ultimately, this achieves efficient low-light image enhancement using only a single convolutional layer, while maintaining excellent visual quality.
arXiv Detail & Related papers (2025-02-27T08:20:03Z) - CLIP-Optimized Multimodal Image Enhancement via ISP-CNN Fusion for Coal Mine IoVT under Uneven Illumination [40.70282870053005]
Low illumination and uneven brightness in underground environments significantly degrade image quality.
We propose a multimodal image enhancement method tailored for coal mine IoVT, utilizing an ISP-CNN fusion architecture optimized for uneven illumination.
arXiv Detail & Related papers (2025-02-26T05:09:40Z) - Hardware-Efficient Photonic Tensor Core: Accelerating Deep Neural Networks with Structured Compression [15.665630650382226]
We introduce a block-circulant photonic tensor core for a structure-compressed optical neural network (StrC-ONN) architecture. This work explores a new pathway toward practical and scalable ONNs, highlighting a promising route to address future computational efficiency challenges.
arXiv Detail & Related papers (2025-02-01T17:03:45Z) - PhotoGAN: Generative Adversarial Neural Network Acceleration with Silicon Photonics [2.9699290794642366]
PhotoGAN is the first silicon-photonic accelerator designed to handle the specialized operations of GAN models.
PhotoGAN achieves at least 4.4x higher GOPS and 2.18x lower energy-per-bit (EPB) compared to state-of-the-art accelerators.
arXiv Detail & Related papers (2025-01-23T16:53:31Z) - A Lightweight GAN-Based Image Fusion Algorithm for Visible and Infrared Images [4.473596922028091]
This paper presents a lightweight image fusion algorithm specifically designed for merging visible light and infrared images.
The proposed method enhances the generator in a Generative Adversarial Network (GAN) by integrating the Convolutional Block Attention Module.
Experiments using the M3FD dataset demonstrate that the proposed algorithm outperforms similar image fusion methods in terms of fusion quality.
arXiv Detail & Related papers (2024-09-07T18:04:39Z) - A Lightweight Low-Light Image Enhancement Network via Channel Prior and Gamma Correction [0.0]
Low-light image enhancement (LLIE) refers to image enhancement technology tailored to handle low-light scenes.
We introduce CPGA-Net, an innovative LLIE network that combines dark/bright channel priors and gamma correction via deep learning.
arXiv Detail & Related papers (2024-02-28T08:18:20Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Low-Resolution Self-Attention for Semantic Segmentation [93.30597515880079]
We introduce the Low-Resolution Self-Attention (LRSA) mechanism to capture global context at a significantly reduced computational cost. Our approach involves computing self-attention in a fixed low-resolution space regardless of the input image's resolution. We demonstrate the effectiveness of our LRSA approach by building the LRFormer, a vision transformer with an encoder-decoder structure.
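The abstract's key point is that attention cost depends on the attended grid, not the input resolution. This toy sketch pools features to a fixed coarse grid, attends there, and upsamples back. It is a single-head, un-learned simplification (identity Q/K/V projections); LRFormer uses learned projections and multiple heads, and the function name is ours.

```python
import numpy as np

def lrsa(feat, pool=4):
    """Low-Resolution Self-Attention sketch: attend on a coarse grid
    obtained by average pooling, then upsample back.

    `feat`: H x W x C float array, with H and W divisible by `pool`.
    """
    h, w, c = feat.shape
    # Average-pool to a coarse grid (pool x pool downsampling).
    coarse = feat.reshape(h // pool, pool, w // pool, pool, c).mean(axis=(1, 3))
    tokens = coarse.reshape(-1, c)                      # (hw / pool^2, c)
    scores = tokens @ tokens.T / np.sqrt(c)             # attention logits
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)             # row-wise softmax
    out = (attn @ tokens).reshape(h // pool, w // pool, c)
    # Nearest-neighbour upsample back to the input resolution.
    return out.repeat(pool, axis=0).repeat(pool, axis=1)

y = lrsa(np.random.default_rng(0).normal(size=(16, 16, 8)))
```

With `pool=4`, the attention matrix here is 16x16 regardless of whether the input is 16x16 or 1024x1024 (after pooling to the same grid), which is the source of the claimed cost reduction.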
arXiv Detail & Related papers (2023-10-08T06:10:09Z) - HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z) - Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because exponential operation introduces high computational complexity, we propose to use Taylor Series to approximate gamma correction.
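The Taylor trick above follows from writing gamma correction as an exponential, I**gamma = exp(gamma * ln I), and truncating its series expansion. The sketch below is a generic truncation around zero; the paper's exact expansion point and order may differ, and the function name is ours.

```python
import numpy as np

def gamma_taylor(image, gamma, order=4):
    """Approximate image**gamma via a truncated Taylor series of
    exp(gamma * ln(image)), avoiding the exponential operation.

    `image`: float array with values in (0, 1].
    """
    x = gamma * np.log(image)
    approx = np.ones_like(x)
    term = np.ones_like(x)
    for k in range(1, order + 1):
        term = term * x / k        # accumulates x**k / k!
        approx = approx + term
    return approx

exact = 0.5 ** 0.6
approx = gamma_taylor(np.array([0.5]), 0.6, order=4)[0]
```

Even at order 4 the approximation agrees with the exact power to about three decimal places for mid-range intensities, though accuracy degrades as pixel values approach zero (where ln I grows large in magnitude).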
arXiv Detail & Related papers (2023-08-16T08:46:51Z) - Latent Graph Attention for Enhanced Spatial Context [17.80084080253724]
Latent Graph Attention (LGA) is a computationally inexpensive (linear in the number of nodes) and stable, modular framework for incorporating global context into existing architectures.
LGA propagates information spatially using a network of locally connected graphs.
We show that incorporating LGA improves the performance on three challenging applications, namely transparent object segmentation, image restoration for dehazing and optical flow estimation.
arXiv Detail & Related papers (2023-07-09T10:56:44Z) - Generative Adversarial Super-Resolution at the Edge with Knowledge
Distillation [1.3764085113103222]
Single-Image Super-Resolution can support robotic tasks in environments where a reliable visual stream is required.
We propose an efficient Generative Adversarial Network model for real-time Super-Resolution, called EdgeSRGAN.
arXiv Detail & Related papers (2022-09-07T10:58:41Z) - Cycle-Interactive Generative Adversarial Network for Robust Unsupervised
Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feed-forwards the features of low-light images from the generator of enhancement GAN into the generator of degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z) - Low-light Image Enhancement by Retinex Based Algorithm Unrolling and
Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z) - Learning Deep Context-Sensitive Decomposition for Low-Light Image
Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but they disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z) - Image-specific Convolutional Kernel Modulation for Single Image
Super-resolution [85.09413241502209]
To address this issue, we propose a novel image-specific convolutional kernel modulation (IKM) method.
We exploit the global contextual information of image or feature to generate an attention weight for adaptively modulating the convolutional kernels.
Experiments on single image super-resolution show that the proposed methods achieve superior performances over state-of-the-art methods.
arXiv Detail & Related papers (2021-11-16T11:05:10Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior
Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - Improving Aerial Instance Segmentation in the Dark with Self-Supervised
Low Light Enhancement [6.500738558466833]
Low light conditions in aerial images adversely affect the performance of vision based applications.
We propose a new method that is capable of enhancing the low light image in a self-supervised fashion.
We also propose the generation of a new low light aerial dataset using GANs.
arXiv Detail & Related papers (2021-02-10T12:24:40Z) - Optimization-Inspired Learning with Architecture Augmentations and
Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.