Improving Low-Light Image Recognition Performance Based on Image-adaptive Learnable Module
- URL: http://arxiv.org/abs/2401.06438v2
- Date: Wed, 08 Jan 2025 01:11:07 GMT
- Title: Improving Low-Light Image Recognition Performance Based on Image-adaptive Learnable Module
- Authors: Seitaro Ono, Yuka Ogino, Takahiro Toizumi, Atsushi Ito, Masato Tsukada
- Abstract summary: This study addresses the enhancement of recognition model performance in low-light conditions. We propose an image-adaptive learnable module that applies appropriate image processing to input images. Our proposed approach enhances recognition performance under low-light conditions and can be easily integrated as a front-end filter.
- Score: 0.4951599300340954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, significant progress has been made in image recognition technology based on deep neural networks. However, improving recognition performance under low-light conditions remains a significant challenge. This study addresses the enhancement of recognition model performance in low-light conditions. We propose an image-adaptive learnable module that applies appropriate image processing to input images, together with a hyperparameter predictor that forecasts the optimal parameters used in the module. Our proposed approach enhances recognition performance under low-light conditions because it can be easily integrated as a front-end filter, without the need to retrain existing recognition models designed for low-light conditions. Through experiments, our proposed method demonstrates its contribution to enhancing image recognition performance under low-light conditions.
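To make the idea concrete, the following is a minimal sketch, assuming a PyTorch setup, of how such a front-end could look: a small hyperparameter predictor estimates per-image parameters (here, a gamma value and a gain, chosen purely for illustration) that a differentiable processing module applies before a frozen recognition model. The filter choices, network sizes, and class names are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class HyperparameterPredictor(nn.Module):
    """Tiny CNN that predicts per-image processing parameters (illustrative)."""

    def __init__(self, n_params: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_params)

    def forward(self, x):
        p = self.head(self.features(x).flatten(1))
        # Map raw outputs to bounded, positive ranges: gamma in (0, 3), gain in (0, 4).
        gamma = 3.0 * torch.sigmoid(p[:, 0]).view(-1, 1, 1, 1) + 1e-3
        gain = 4.0 * torch.sigmoid(p[:, 1]).view(-1, 1, 1, 1) + 1e-3
        return gamma, gain


class ImageAdaptiveFrontEnd(nn.Module):
    """Differentiable image-processing filter prepended to a frozen recognition model."""

    def __init__(self, recognizer: nn.Module):
        super().__init__()
        self.predictor = HyperparameterPredictor()
        self.recognizer = recognizer
        for p in self.recognizer.parameters():  # downstream model stays frozen
            p.requires_grad_(False)

    def forward(self, x):
        gamma, gain = self.predictor(x)
        enhanced = torch.clamp(gain * x.clamp(min=1e-6) ** gamma, 0.0, 1.0)
        return self.recognizer(enhanced)


# Training loop sketch: only the predictor's parameters receive gradient updates.
model = ImageAdaptiveFrontEnd(models.resnet18(weights="IMAGENET1K_V1"))
optimizer = torch.optim.Adam(model.predictor.parameters(), lr=1e-4)
images = torch.rand(4, 3, 224, 224)              # stand-in low-light batch in [0, 1]
labels = torch.randint(0, 1000, (4,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```

Because the recognition model's weights are frozen, only the lightweight predictor is optimized, which mirrors the stated goal of attaching a filter without retraining the downstream model.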
Related papers
- PromptLNet: Region-Adaptive Aesthetic Enhancement via Prompt Guidance in Low-Light Enhancement Net [28.970689854467764]
We train a low-light image aesthetic evaluation model using text pairs and aesthetic scores from multiple low-light image datasets.
We propose a prompt-driven brightness adjustment module capable of performing fine-grained brightness and aesthetic adjustments for specific instances or regions.
Experimental results show that our method not only outperforms traditional methods in terms of visual quality but also provides greater flexibility and controllability.
arXiv Detail & Related papers (2025-03-11T10:45:08Z) - Recognition-Oriented Low-Light Image Enhancement based on Global and Pixelwise Optimization [0.4951599300340954]
We propose a novel low-light image enhancement method aimed at improving the performance of recognition models.
The proposed method can be applied as a filter to improve low-light recognition performance without requiring retraining of downstream recognition models.
arXiv Detail & Related papers (2025-01-08T01:09:49Z) - Leveraging Content and Context Cues for Low-Light Image Enhancement [25.97198463881292]
Low-light conditions have an adverse impact on machine cognition, limiting the performance of computer vision systems in real life.
We propose to improve the existing zero-reference low-light enhancement by leveraging the CLIP model to capture an image prior and to provide semantic guidance.
We experimentally show that the proposed prior and semantic guidance help to improve the overall image contrast and hue, as well as background-foreground discrimination.
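As a rough, hypothetical illustration of such CLIP-based semantic guidance (not this paper's actual prior or losses), one can compare an enhanced image against a pair of antonym text prompts with the OpenAI CLIP package and turn the resulting similarity into a training signal; the prompts and the loss form below are assumptions.

```python
import torch
import clip  # OpenAI CLIP package: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.eval()

# Hypothetical antonym prompt pair; the real prompts/losses may differ.
prompts = clip.tokenize(["a well-lit photo", "a dark underexposed photo"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(prompts)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)


def clip_guidance_loss(enhanced: torch.Tensor) -> torch.Tensor:
    """Encourage the enhanced image to sit closer to the 'well-lit' prompt.

    `enhanced` is a (B, 3, 224, 224) batch already resized and normalized
    with CLIP's preprocessing statistics (omitted here for brevity).
    """
    image_features = model.encode_image(enhanced)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_features @ text_features.t()  # (B, 2) similarities
    probs = logits.softmax(dim=-1)
    return (1.0 - probs[:, 0]).mean()  # mass not assigned to the "well-lit" prompt
```

Gradients of this loss flow back to the enhanced image, so it can serve as a zero-reference training signal for an enhancement network.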
arXiv Detail & Related papers (2024-12-10T17:32:09Z) - Unveiling Advanced Frequency Disentanglement Paradigm for Low-Light Image Enhancement [61.22119364400268]
We propose a novel low-frequency consistency method, facilitating improved frequency disentanglement optimization.
Noteworthy improvements are showcased across five popular benchmarks, with up to 7.68 dB PSNR gains achieved for six state-of-the-art models.
Our approach maintains efficiency with only 88K extra parameters, setting a new standard in the challenging realm of low-light image enhancement.
arXiv Detail & Related papers (2024-09-03T06:19:03Z) - Unsupervised Image Prior via Prompt Learning and CLIP Semantic Guidance for Low-Light Image Enhancement [25.97198463881292]
We propose to improve the zero-reference low-light enhancement method by leveraging the rich visual-linguistic CLIP prior.
We show that the proposed method leads to consistent improvements in task-based performance across various datasets.
arXiv Detail & Related papers (2024-05-19T08:06:14Z) - Inhomogeneous illumination image enhancement under ex-tremely low visibility condition [3.534798835599242]
Imaging through dense fog presents unique challenges, with essential visual information crucial for applications like object detection and recognition obscured, thereby hindering conventional image processing methods.
We introduce in this paper a novel method that adaptively filters background illumination based on Structural Differential and Integral Filtering (F) to enhance only vital signal information.
Our findings demonstrate that our proposed method significantly enhances signal clarity under extremely low visibility conditions and out-performs existing techniques, offering substantial improvements for deep fog imaging applications.
arXiv Detail & Related papers (2024-04-26T16:09:42Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Seed Optimization with Frozen Generator for Superior Zero-shot Low-light
Enhancement [49.97304897798384]
We embed a pre-trained generator into a Retinex model to produce reflectance maps with enhanced detail and vividness.
We introduce a novel optimization strategy, which backpropagates the gradients to the input seeds rather than the parameters of the low-light enhancement model.
Benefiting from the pre-trained knowledge and seed-optimization strategy, the low-light enhancement model can significantly regularize the realness and fidelity of the enhanced result.
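As a minimal sketch of the seed-optimization strategy described above, the snippet below keeps a stand-in generator frozen and lets gradient descent update only the input seed against a placeholder exposure objective; the generator, loss, and hyperparameters are illustrative assumptions, not the paper's actual components.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained generator (frozen); in practice this would be a
# generator whose weights were learned beforehand on normal-light images.
generator = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
).eval()
for p in generator.parameters():
    p.requires_grad_(False)          # weights stay fixed

# The input seed is the only variable being optimized.
seed = torch.randn(1, 64, 16, 16, requires_grad=True)
optimizer = torch.optim.Adam([seed], lr=0.05)

target_exposure = 0.5                # placeholder objective: push mean brightness up
for step in range(200):
    optimizer.zero_grad()
    output = generator(seed)                       # e.g., a reflectance map
    loss = (output.mean() - target_exposure) ** 2  # stand-in for the real loss
    loss.backward()                  # gradients flow to the seed, not the weights
    optimizer.step()
```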
arXiv Detail & Related papers (2024-02-15T04:06:18Z) - Diffusion in the Dark: A Diffusion Model for Low-Light Text Recognition [78.50328335703914]
Diffusion in the Dark (DiD) is a diffusion model for low-light image reconstruction aimed at text recognition.
We demonstrate that DiD, without any task-specific optimization, can outperform SOTA low-light methods in low-light text recognition on real images.
arXiv Detail & Related papers (2023-03-07T23:52:51Z) - Low-light Image and Video Enhancement via Selective Manipulation of
Chromaticity [1.4680035572775534]
We present a simple yet effective approach for low-light image and video enhancement.
This adaptivity allows us to avoid the costly step of decomposing the low-light image into illumination and reflectance.
Our results on standard low-light image datasets show the efficacy of our algorithm and its qualitative and quantitative superiority over several state-of-the-art techniques.
arXiv Detail & Related papers (2022-03-09T17:01:28Z) - The Loop Game: Quality Assessment and Optimization for Low-Light Image
Enhancement [50.29722732653095]
There is an increasing consensus that the design and optimization of low-light image enhancement methods need to be fully driven by perceptual quality.
We propose a loop enhancement framework that produces a clear picture of how the enhancement of low-light images could be optimized towards better visual quality.
arXiv Detail & Related papers (2022-02-20T06:20:06Z) - Improving Aerial Instance Segmentation in the Dark with Self-Supervised
Low Light Enhancement [6.500738558466833]
Low-light conditions in aerial images adversely affect the performance of vision-based applications.
We propose a new method that is capable of enhancing the low-light image in a self-supervised fashion.
We also propose the generation of a new low-light aerial dataset using GANs.
arXiv Detail & Related papers (2021-02-10T12:24:40Z) - Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z) - Learning an Adaptive Model for Extreme Low-light Raw Image Processing [5.706764509663774]
We propose an adaptive low-light raw image enhancement network to improve image quality.
The proposed method achieves the lowest Noise Level Estimation (NLE) score compared with state-of-the-art low-light algorithms.
The potential application in video processing is briefly discussed.
arXiv Detail & Related papers (2020-04-22T09:01:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.