PromptLNet: Region-Adaptive Aesthetic Enhancement via Prompt Guidance in Low-Light Enhancement Net
- URL: http://arxiv.org/abs/2503.08276v1
- Date: Tue, 11 Mar 2025 10:45:08 GMT
- Title: PromptLNet: Region-Adaptive Aesthetic Enhancement via Prompt Guidance in Low-Light Enhancement Net
- Authors: Jun Yin, Yangfan He, Miao Zhang, Pengyu Zeng, Tianyi Wang, Shuai Lu, Xueqian Wang
- Abstract summary: We train a low-light image aesthetic evaluation model using text pairs and aesthetic scores from multiple low-light image datasets. We propose a prompt-driven brightness adjustment module capable of performing fine-grained brightness and aesthetic adjustments for specific instances or regions. Experimental results show that our method not only outperforms traditional methods in terms of visual quality but also provides greater flexibility and controllability.
- Score: 28.970689854467764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning and improving large language models through human preference feedback has become a mainstream approach, but it has rarely been applied to the field of low-light image enhancement. Existing low-light enhancement evaluations typically rely on objective metrics (such as FID and PSNR), which often result in models that perform well objectively but lack aesthetic quality. Moreover, most low-light enhancement models are primarily designed for global brightening and lack detailed refinement. As a result, the generated images often require additional local adjustments, leaving a gap between research and practical applications. To bridge this gap, we propose the following innovations: 1) We collect human aesthetic evaluation text pairs and aesthetic scores from multiple low-light image datasets (e.g., LOL, LOL2, LOM, DCIM, MEF) to train a low-light image aesthetic evaluation model, supplemented by an optimization algorithm designed to fine-tune the diffusion model. 2) We propose a prompt-driven brightness adjustment module capable of performing fine-grained brightness and aesthetic adjustments for specific instances or regions. 3) We evaluate our method alongside existing state-of-the-art algorithms on mainstream benchmarks. Experimental results show that our method not only outperforms traditional methods in terms of visual quality but also provides greater flexibility and controllability, paving the way for improved aesthetic quality.
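The abstract's second contribution, region-adaptive brightness adjustment guided by an aesthetic score, can be illustrated with a minimal sketch. The paper's actual module is a learned, prompt-driven network; here `toy_aesthetic_score`, the grid of candidate gains, and the mask construction are all hypothetical stand-ins used only to show the idea of scoring candidate adjustments for a selected region and keeping the best one:

```python
import numpy as np

def toy_aesthetic_score(img):
    """Hypothetical stand-in for the learned aesthetic evaluation model:
    rewards mean brightness near 0.55 and penalizes clipped highlights."""
    return -abs(img.mean() - 0.55) - 2.0 * (img > 0.98).mean()

def region_adaptive_brighten(img, mask, gains=np.linspace(0.5, 3.0, 26)):
    """Grid-search a per-region gain that maximizes the aesthetic score,
    leaving pixels outside the mask untouched."""
    best, best_score = img, -np.inf
    for g in gains:
        out = img.copy()
        out[mask] = np.clip(img[mask] * g, 0.0, 1.0)
        score = toy_aesthetic_score(out)
        if score > best_score:
            best, best_score = out, score
    return best

# Usage: brighten only a prompt-selected region of a dark image in [0, 1].
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.2, size=(32, 32))   # globally dark image
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                      # region picked by a prompt
enhanced = region_adaptive_brighten(img, mask)
```

The design point this illustrates is the decoupling in the paper's pipeline: the evaluation model supplies a differentiable (here, merely callable) objective, while the adjustment module only acts inside the selected region, so global exposure is left intact.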
Related papers
- Recognition-Oriented Low-Light Image Enhancement based on Global and Pixelwise Optimization [0.4951599300340954]
We propose a novel low-light image enhancement method aimed at improving the performance of recognition models. The proposed method can be applied as a filter to improve low-light recognition performance without requiring retraining downstream recognition models.
arXiv Detail & Related papers (2025-01-08T01:09:49Z)
- Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms [91.19304518033144]
We aim to align vision models with human aesthetic standards in a retrieval system.
We propose a preference-based reinforcement learning method that fine-tunes vision models to better align them with human aesthetics.
arXiv Detail & Related papers (2024-06-13T17:59:20Z)
- Unsupervised Image Prior via Prompt Learning and CLIP Semantic Guidance for Low-Light Image Enhancement [25.97198463881292]
We propose to improve the zero-reference low-light enhancement method by leveraging the rich visual-linguistic CLIP prior.
We show that the proposed method leads to consistent improvements across various datasets regarding task-based performance.
arXiv Detail & Related papers (2024-05-19T08:06:14Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED. It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains. It successfully alleviates the dependence on pairwise training data via zero-reference learning.
arXiv Detail & Related papers (2024-03-05T11:39:17Z) - ALL-E: Aesthetics-guided Low-light Image Enhancement [45.40896781156727]
We propose a new paradigm, i.e., aesthetics-guided low-light image enhancement (ALL-E).
It introduces aesthetic preferences to LLE and motivates training in a reinforcement learning framework with an aesthetic reward.
Our results on various benchmarks demonstrate the superiority of ALL-E over state-of-the-art methods.
arXiv Detail & Related papers (2023-04-28T03:34:10Z)
- Gap-closing Matters: Perceptual Quality Evaluation and Optimization of Low-Light Image Enhancement [55.8106019031768]
There is a growing consensus in the research community that the optimization of low-light image enhancement approaches should be guided by the visual quality perceived by end users.
We propose a gap-closing framework for assessing subjective and objective quality systematically.
We validate the effectiveness of our proposed framework through both the accuracy of quality prediction and the perceptual quality of image enhancement.
arXiv Detail & Related papers (2023-02-22T15:57:03Z)
- The Loop Game: Quality Assessment and Optimization for Low-Light Image Enhancement [50.29722732653095]
There is an increasing consensus that the design and optimization of low light image enhancement methods need to be fully driven by perceptual quality.
We propose a loop enhancement framework that produces a clear picture of how the enhancement of low-light images could be optimized towards better visual quality.
arXiv Detail & Related papers (2022-02-20T06:20:06Z)
- Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.