Towards Lightest Low-Light Image Enhancement Architecture for Mobile Devices
- URL: http://arxiv.org/abs/2507.04277v1
- Date: Sun, 06 Jul 2025 07:36:47 GMT
- Title: Towards Lightest Low-Light Image Enhancement Architecture for Mobile Devices
- Authors: Guangrui Bai, Hailong Yan, Wenhai Liu, Yahui Deng, Erbao Dong
- Abstract summary: Real-time low-light image enhancement on mobile and embedded devices requires models that balance visual quality and computational efficiency. We propose LiteIE, an ultra-lightweight unsupervised enhancement framework that eliminates dependence on large-scale supervision. LiteIE runs at 30 FPS for 4K images with just 58 parameters, enabling real-time deployment on edge devices.
- Score: 3.7651572719063178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time low-light image enhancement on mobile and embedded devices requires models that balance visual quality and computational efficiency. Existing deep learning methods often rely on large networks and labeled datasets, limiting their deployment on resource-constrained platforms. In this paper, we propose LiteIE, an ultra-lightweight unsupervised enhancement framework that eliminates dependence on large-scale supervision and generalizes well across diverse conditions. We design a backbone-agnostic feature extractor with only two convolutional layers to produce compact enhancement feature tensors. In addition, we develop a parameter-free Iterative Restoration Module, which reuses the extracted features to progressively recover fine details lost in earlier enhancement steps, without introducing any additional learnable parameters. We further propose an unsupervised training objective that integrates exposure control, edge-aware smoothness, and multi-scale color consistency losses. In experiments on the LOL dataset, LiteIE achieves 19.04 dB PSNR, surpassing the SOTA by 1.4 dB while using only 0.07% of its parameters. On a Snapdragon 8 Gen 3 mobile processor, LiteIE runs at 30 FPS on 4K images with just 58 parameters, enabling real-time deployment on edge devices. These results establish LiteIE as an efficient and practical solution for low-light enhancement on resource-limited platforms.
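The pipeline the abstract describes (a tiny two-convolution feature extractor whose output tensor is reused by a parameter-free iterative restoration step) can be sketched roughly as follows. This is a minimal NumPy illustration only: the curve-style update rule, single-channel layout, kernel sizes, and names (`conv2d`, `enhance`, `feat`) are assumptions for exposition, not the authors' exact LiteIE design.

```python
import numpy as np

np.random.seed(0)

def conv2d(x, w, b):
    # Naive 'same' 3x3 convolution over a single-channel image.
    H, W = x.shape
    pad = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * w) + b
    return out

# Hypothetical two-layer extractor: the learnable parameter count
# stays tiny (two 3x3 kernels plus two biases).
w1, b1 = np.random.randn(3, 3) * 0.1, 0.0
w2, b2 = np.random.randn(3, 3) * 0.1, 0.0

def enhance(img, steps=4):
    # Extract the feature tensor once, then reuse it at every
    # iteration -- the restoration loop itself adds no parameters.
    feat = np.tanh(conv2d(np.maximum(conv2d(img, w1, b1), 0.0), w2, b2))
    out = img
    for _ in range(steps):
        # Curve-style update, in the spirit of Zero-DCE: LE(x) = x + a*x*(1-x)
        out = out + feat * out * (1.0 - out)
    return np.clip(out, 0.0, 1.0)

dark = np.random.rand(8, 8) * 0.2   # toy dark image in [0, 0.2]
result = enhance(dark)
print(result.shape)                  # (8, 8)
print(w1.size + 1 + w2.size + 1)     # 20 parameters in this toy sketch
```

The design point this illustrates is that a fixed, reusable enhancement tensor plus a parameter-free iterative update keeps the learnable footprint at a few dozen weights, which is what makes parameter counts like LiteIE's 58 plausible.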
Related papers
- MobileIE: An Extremely Lightweight and Effective ConvNet for Real-Time Image Enhancement on Mobile Devices [30.034447271429034]
We introduce an extremely lightweight Convolutional Neural Network (CNN) framework with around 4K parameters. We are the first to achieve real-time IE inference at up to 1,100 frames per second (FPS).
arXiv Detail & Related papers (2025-07-02T15:53:44Z) - LightFormer: A lightweight and efficient decoder for remote sensing image segmentation [12.003743832147403]
We introduce LightFormer, a lightweight decoder for time-critical tasks that involve unstructured targets. LightFormer employs a feature-fusion and refinement module built on channel processing and a learnable gating mechanism to aggregate multi-scale, multi-range information efficiently. On the ISPRS Vaihingen benchmark, LightFormer attains 99.9% of GLFFNet's mIoU while requiring only 14.7% of its FLOPs and 15.9% of its parameters.
arXiv Detail & Related papers (2025-04-15T03:25:39Z) - LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization [37.236005953016175]
LightGen is an efficient training paradigm for image generation models. It distills knowledge from state-of-the-art (SOTA) text-to-image models into a compact Masked Autoregressive architecture. Experiments confirm that LightGen achieves image generation quality comparable to SOTA models.
arXiv Detail & Related papers (2025-03-11T16:58:02Z) - Illuminating Darkness: Enhancing Real-world Low-light Scenes with Smartphone Images [47.39277249268179]
This dataset comprises 6,425 unique focus-aligned image pairs captured with smartphone sensors in dynamic settings under challenging lighting conditions (0.1--200 lux). We extracted and refined around 180,000 non-overlapping patches from 6,025 collected scenes for training while reserving 400 pairs for benchmarking. In addition, we collected 2,117 low-light scenes from different sources for extensive real-world aesthetic evaluation.
arXiv Detail & Related papers (2025-03-10T04:01:56Z) - Striving for Faster and Better: A One-Layer Architecture with Auto Re-parameterization for Low-Light Image Enhancement [50.93686436282772]
We aim to delve into the limits of image enhancers in terms of both visual quality and computational efficiency. By rethinking the task demands, we build an explicit connection: visual quality and computational efficiency correspond to model learning and structure design, respectively. Ultimately, this achieves efficient low-light image enhancement using only a single convolutional layer, while maintaining excellent visual quality.
arXiv Detail & Related papers (2025-02-27T08:20:03Z) - BRIGHT-VO: Brightness-Guided Hybrid Transformer for Visual Odometry with Multi-modality Refinement Module [11.898515581215708]
Visual odometry (VO) plays a crucial role in autonomous driving, robotic navigation, and other related tasks. We introduce BrightVO, a novel Transformer-based VO model that performs front-end visual feature extraction. Using pose graph optimization, this module iteratively refines pose estimates to reduce errors and improve both accuracy and robustness.
arXiv Detail & Related papers (2025-01-15T08:50:52Z) - BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE), and a comprehensive evaluation shows that models trained with our dataset outperform those trained with existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z) - Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z) - Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices [90.30316433184414]
We propose a data-model-hardware tri-design framework for high-throughput, low-cost, and high-accuracy MOT on HD video streams.
Compared to the state-of-the-art MOT baseline, our tri-design approach can achieve 12.5x latency reduction, 20.9x effective frame rate improvement, 5.83x lower power, and 9.78x better energy efficiency, without much accuracy drop.
arXiv Detail & Related papers (2022-10-16T16:21:40Z) - Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust image brightening in real-world low-light scenarios.
Considering the computational burden of the cascaded pattern, we construct a self-calibrated module that enforces convergence between the results of each stage.
We make comprehensive explorations of SCI's inherent properties, including operation-insensitive adaptability and model-irrelevant generality.
arXiv Detail & Related papers (2022-04-21T14:40:32Z) - Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but this disregards the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z) - A Retinex based GAN Pipeline to Utilize Paired and Unpaired Datasets for Enhancing Low Light Images [0.0]
This paper presents a novel deep learning pipeline that can learn from both paired and unpaired datasets.
It uses CNNs optimized to minimize a standard loss together with Generative Adversarial Networks (GANs) optimized to minimize an adversarial loss.
arXiv Detail & Related papers (2020-06-27T07:12:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.