MixNet: Towards Effective and Efficient UHD Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2401.10666v1
- Date: Fri, 19 Jan 2024 12:40:54 GMT
- Title: MixNet: Towards Effective and Efficient UHD Low-Light Image Enhancement
- Authors: Chen Wu and Zhuoran Zheng and Xiuyi Jia and Wenqi Ren
- Abstract summary: We propose a novel low-light image enhancement (LLIE) method called MixNet, which is designed explicitly for UHD images.
To capture the long-range dependency of features without introducing excessive computational complexity, we present the Global Feature Modulation Layer (GFML).
In addition, we design the Local Feature Modulation Layer (LFML) and the Feed-forward Layer (FFL) to capture local features and transform them into a compact representation.
- Score: 45.801789547053026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the continuous advancement of imaging devices, the prevalence of
Ultra-High-Definition (UHD) images is rising. Although many image restoration
methods have achieved promising results, they are not directly applicable to
UHD images on devices with limited computational resources due to the
inherently high computational complexity of UHD images. In this paper, we focus
on the task of low-light image enhancement (LLIE) and propose a novel LLIE
method called MixNet, which is designed explicitly for UHD images. To capture
the long-range dependency of features without introducing excessive
computational complexity, we present the Global Feature Modulation Layer
(GFML). GFML associates features from different views by permuting the feature
maps, enabling efficient modeling of long-range dependency. In addition, we
design the Local Feature Modulation Layer (LFML) and the Feed-forward Layer
(FFL) to capture local features and transform them into a compact
representation. This way, our MixNet achieves effective LLIE with few model
parameters and low computational complexity. We conducted extensive experiments
on both synthetic and real-world datasets, and the comprehensive results
demonstrate that our proposed method surpasses the performance of current
state-of-the-art methods. The code will be available at
\url{https://github.com/zzr-idam/MixNet}.
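The permutation idea behind GFML can be illustrated with a toy sketch: transpose the feature map so that each axis (channel, height, width) in turn becomes the trailing one, then mix globally along that axis with a matrix product. This is an illustrative NumPy sketch under our own simplifying assumptions (the function and weight names are hypothetical), not the authors' implementation, which the repository above provides:

```python
import numpy as np

def mix_last_axis(x, w):
    # Linear mixing along the trailing axis: every output position is a
    # weighted sum of all positions on that axis, so information travels
    # across the full axis in one step.
    return x @ w

def gfml_sketch(x, w_c, w_h, w_w):
    # x: (C, H, W) feature map. Permute so each axis in turn is trailing,
    # mix globally along it, then permute back. Mixing one axis at a time
    # is far cheaper than full self-attention over all H*W positions.
    x = np.transpose(mix_last_axis(np.transpose(x, (1, 2, 0)), w_c), (2, 0, 1))  # channel view
    x = np.transpose(mix_last_axis(np.transpose(x, (0, 2, 1)), w_h), (0, 2, 1))  # height view
    x = mix_last_axis(x, w_w)                                                    # width view
    return x
```

With identity mixing matrices the sketch is a no-op, which makes the permutation bookkeeping easy to check; a trained layer would learn `w_c`, `w_h`, `w_w` (and, presumably, add normalization and nonlinearities).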
Related papers
- Towards Ultra-High-Definition Image Deraining: A Benchmark and An Efficient Method [42.331058889312466]
This paper contributes the first large-scale UHD image deraining dataset, 4K-Rain13k, that contains 13,000 image pairs at 4K resolution.
We develop an effective and efficient vision-based architecture (UDR-Mixer) to better solve this task.
arXiv Detail & Related papers (2024-05-27T11:45:08Z)
- Latent Modulated Function for Computational Optimal Continuous Image Representation [20.678662838709542]
We propose a novel Latent Modulated Function (LMF) algorithm for continuous image representation.
Experiments demonstrate that converting existing INR-based methods to LMF reduces computational cost by up to 99.9%, accelerates inference by up to 57 times, and saves up to 76% of parameters.
arXiv Detail & Related papers (2024-04-25T09:30:38Z)
- EPNet: An Efficient Pyramid Network for Enhanced Single-Image Super-Resolution with Reduced Computational Requirements [12.439807086123983]
Single-image super-resolution (SISR) has seen significant advancements through the integration of deep learning.
This paper introduces a new Efficient Pyramid Network (EPNet) that harmoniously merges an Edge Split Pyramid Module (ESPM) with a Panoramic Feature Extraction Module (PFEM) to overcome the limitations of existing methods.
arXiv Detail & Related papers (2023-12-20T19:56:53Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution [5.704360536038803]
Single image super-resolution (SISR) has experienced significant advancements, primarily driven by deep convolutional networks.
Traditional networks are limited to upscaling images to a fixed scale, which has motivated the use of implicit neural functions for generating arbitrarily scaled images.
We introduce a novel and efficient framework, the Mixture of Experts Implicit Super-Resolution (MoEISR), which enables super-resolution at arbitrary scales.
arXiv Detail & Related papers (2023-11-20T05:34:36Z)
- Spatially-Adaptive Feature Modulation for Efficient Image Super-Resolution [90.16462805389943]
We develop a spatially-adaptive feature modulation (SAFM) mechanism upon a vision transformer (ViT)-like block.
The proposed method is $3\times$ smaller than state-of-the-art efficient SR methods.
arXiv Detail & Related papers (2023-02-27T14:19:31Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- ShuffleMixer: An Efficient ConvNet for Image Super-Resolution [88.86376017828773]
We propose ShuffleMixer, a lightweight image super-resolution network that explores large-kernel convolution and channel split-shuffle operations.
Specifically, we develop a large depth-wise convolution and two projection layers based on channel splitting and shuffling as the basic component to mix features efficiently.
Experimental results demonstrate that the proposed ShuffleMixer is about 6x smaller than the state-of-the-art methods in terms of model parameters and FLOPs.
arXiv Detail & Related papers (2022-05-30T15:26:52Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success in image super-resolution (SR).
Most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
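Several of the efficient SR papers above (ShuffleMixer, HPUN) rely on cheap tensor rearrangements rather than extra convolutions. As a reference point, here is a minimal NumPy sketch of the standard pixel-unshuffle rearrangement that HPUN's downsampling module builds on; it is a generic illustration, not either paper's exact module:

```python
import numpy as np

def pixel_unshuffle(x, r):
    # Rearrange a (C, H, W) feature map into (C*r*r, H//r, W//r):
    # each r x r spatial block becomes r*r extra channels, so subsequent
    # convolutions run on a 1/r-resolution grid, which is much cheaper
    # for high-resolution (e.g. UHD) inputs. No information is lost.
    c, h, w = x.shape
    assert h % r == 0 and w % r == 0, "spatial dims must be divisible by r"
    x = x.reshape(c, h // r, r, w // r, r)
    x = x.transpose(0, 2, 4, 1, 3)          # (C, r, r, H//r, W//r)
    return x.reshape(c * r * r, h // r, w // r)
```

The inverse rearrangement (pixel shuffle) reshapes channels back into space, which is why these operations pair naturally in lightweight encoder/decoder designs.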
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.