Gap-closing Matters: Perceptual Quality Evaluation and Optimization of Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2302.11464v5
- Date: Fri, 21 Jun 2024 01:52:16 GMT
- Title: Gap-closing Matters: Perceptual Quality Evaluation and Optimization of Low-Light Image Enhancement
- Authors: Baoliang Chen, Lingyu Zhu, Hanwei Zhu, Wenhan Yang, Linqi Song, Shiqi Wang
- Abstract summary: There is a growing consensus in the research community that the optimization of low-light image enhancement approaches should be guided by the visual quality perceived by end users.
We propose a gap-closing framework for assessing subjective and objective quality systematically.
We validate the effectiveness of our proposed framework through both the accuracy of quality prediction and the perceptual quality of image enhancement.
- Score: 55.8106019031768
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a growing consensus in the research community that the optimization of low-light image enhancement approaches should be guided by the visual quality perceived by end users. Despite the substantial efforts invested in the design of low-light enhancement algorithms, there has been comparatively limited focus on assessing subjective and objective quality systematically. To mitigate this gap and provide a clear path towards optimizing low-light image enhancement for better visual quality, we propose a gap-closing framework. In particular, our gap-closing framework starts with the creation of a large-scale dataset for Subjective QUality Assessment of REconstructed LOw-Light Images (SQUARE-LOL). This database serves as the foundation for studying the quality of enhanced images and conducting a comprehensive subjective user study. Subsequently, we propose an objective quality assessment measure that plays a critical role in bridging the gap between visual quality and enhancement. Finally, we demonstrate that our proposed objective quality measure can be incorporated into the process of optimizing the learning of the enhancement model toward perceptual optimality. We validate the effectiveness of our proposed framework through both the accuracy of quality prediction and the perceptual quality of image enhancement. Our database and codes are publicly available at https://github.com/Baoliang93/IACA_For_Lowlight_IQA.
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- G-Refine: A General Quality Refiner for Text-to-Image Generation [74.16137826891827]
We introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones.
The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module.
Extensive experimentation reveals that AIGIs after G-Refine outperform in 10+ quality metrics across 4 databases.
arXiv Detail & Related papers (2024-04-29T00:54:38Z)
- Blind Image Quality Assessment Using Multi-Stream Architecture with Spatial and Channel Attention [4.983104446206061]
BIQA (Blind Image Quality Assessment) is an important field of study that evaluates images automatically.
Most algorithms predict quality scores without emphasizing the salient regions of interest.
A multi-stream spatial and channel attention-based algorithm is proposed to solve this problem.
arXiv Detail & Related papers (2023-07-19T09:36:08Z)
- Blind Multimodal Quality Assessment: A Brief Survey and A Case Study of Low-light Images [73.27643795557778]
Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals.
Recent developments in this field are dominated by unimodal solutions that are inconsistent with human subjective rating patterns.
We present a unique blind multimodal quality assessment (BMQA) of low-light images from subjective evaluation to objective score.
arXiv Detail & Related papers (2023-03-18T09:04:55Z)
- Dehazed Image Quality Evaluation: From Partial Discrepancy to Blind Perception [35.257798506356814]
Image dehazing aims to restore spatial details from hazy images.
We propose a Reduced-Reference dehazed image quality evaluation approach based on Partial Discrepancy.
We extend it to a No-Reference quality assessment metric with Blind Perception.
arXiv Detail & Related papers (2022-11-22T23:49:14Z)
- The Loop Game: Quality Assessment and Optimization for Low-Light Image Enhancement [50.29722732653095]
There is an increasing consensus that the design and optimization of low-light image enhancement methods need to be fully driven by perceptual quality.
We propose a loop enhancement framework that produces a clear picture of how the enhancement of low-light images could be optimized towards better visual quality.
arXiv Detail & Related papers (2022-02-20T06:20:06Z)
- Deep Multi-Scale Features Learning for Distorted Image Quality Assessment [20.7146855562825]
Existing deep neural networks (DNNs) have shown significant effectiveness for tackling the IQA problem.
We propose to use pyramid feature learning to build a DNN with hierarchical multi-scale features for distorted image quality prediction.
Our proposed network is optimized end-to-end with deep supervision.
arXiv Detail & Related papers (2020-12-01T23:39:01Z)
- Object-QA: Towards High Reliable Object Quality Assessment [71.71188284059203]
In object recognition applications, object images usually appear with different quality levels.
We propose an effective approach named Object-QA to assess highly reliable quality scores for object images.
arXiv Detail & Related papers (2020-05-27T01:46:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences arising from its use.