ALL-E: Aesthetics-guided Low-light Image Enhancement
- URL: http://arxiv.org/abs/2304.14610v2
- Date: Tue, 2 May 2023 15:31:38 GMT
- Title: ALL-E: Aesthetics-guided Low-light Image Enhancement
- Authors: Ling Li, Dong Liang, Yuanhang Gao, Sheng-Jun Huang, Songcan Chen
- Abstract summary: We propose a new paradigm, i.e., aesthetics-guided low-light image enhancement (ALL-E).
It introduces aesthetic preferences to LLE and motivates training in a reinforcement learning framework with an aesthetic reward.
Our results on various benchmarks demonstrate the superiority of ALL-E over state-of-the-art methods.
- Score: 45.40896781156727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the performance of low-light image enhancement (LLE) is highly
subjective, thus making integrating human preferences into image enhancement a
necessity. Existing methods fail to consider this and present a series of
potentially valid heuristic criteria for training enhancement models. In this
paper, we propose a new paradigm, i.e., aesthetics-guided low-light image
enhancement (ALL-E), which introduces aesthetic preferences to LLE and
motivates training in a reinforcement learning framework with an aesthetic
reward. Each pixel, functioning as an agent, refines itself by recursive
actions, i.e., its corresponding adjustment curve is estimated sequentially.
Extensive experiments show that integrating aesthetic assessment improves both
subjective experience and objective evaluation. Our results on various
benchmarks demonstrate the superiority of ALL-E over state-of-the-art methods.
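The abstract's per-pixel recursive adjustment can be sketched as follows. This is a minimal illustration, not ALL-E's actual model: it assumes the quadratic light-enhancement curve LE(x) = x + αx(1 − x) common in curve-estimation LLE methods, with fixed α maps standing in for the per-step estimates that ALL-E's agents would learn via reinforcement.

```python
import numpy as np

def apply_curve(x, alpha_maps):
    """Recursively apply a quadratic adjustment curve to each pixel.

    x          : image array with values in [0, 1]
    alpha_maps : sequence of per-pixel curve parameters in [-1, 1],
                 one map per recursive step (hypothetical stand-in for
                 the sequentially estimated adjustment curves)
    """
    for a in alpha_maps:
        # LE(x) = x + alpha * x * (1 - x); keeps values in [0, 1]
        # for alpha in [-1, 1], and brightens where alpha > 0.
        x = x + a * x * (1.0 - x)
    return x

# Toy usage: brighten a uniformly dark image over 8 recursive steps.
dark = np.full((4, 4, 3), 0.1)
alphas = [np.full((4, 4, 3), 0.8)] * 8
bright = apply_curve(dark, alphas)
```

Because each step's output feeds the next, repeated application progressively lifts dark regions while the curve's fixed points at 0 and 1 prevent overflow of the valid intensity range.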
Related papers
- Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms [91.19304518033144]
We aim to align vision models with human aesthetic standards in a retrieval system.
We propose a preference-based reinforcement learning method that fine-tunes the vision models to better align the vision models with human aesthetics.
arXiv Detail & Related papers (2024-06-13T17:59:20Z)
- Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models [85.96013373385057]
Fine-tuning text-to-image models with reward functions trained on human feedback data has proven effective for aligning model behavior with human intent.
However, excessive optimization with such reward models, which serve as mere proxy objectives, can compromise the performance of fine-tuned models.
We propose TextNorm, a method that enhances alignment based on a measure of reward model confidence estimated across a set of semantically contrastive text prompts.
arXiv Detail & Related papers (2024-04-02T11:40:38Z)
- VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining [53.470662123170555]
We propose learning image aesthetics from user comments, and exploring vision-language pretraining methods to learn multimodal aesthetic representations.
Specifically, we pretrain an image-text encoder-decoder model with image-comment pairs, using contrastive and generative objectives to learn rich and generic aesthetic semantics without human labels.
Our results show that our pretrained aesthetic vision-language model outperforms prior works on image aesthetic captioning over the AVA-Captions dataset.
arXiv Detail & Related papers (2023-03-24T23:57:28Z)
- Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images [67.14410374622699]
We propose an intelligible unsupervised personalized enhancer (iUP-Enhancer) for low-light images.
The proposed iUP-Enhancer is trained with the guidance of these correlations and the corresponding unsupervised loss functions.
Experiments demonstrate that the proposed algorithm produces competitive qualitative and quantitative results.
arXiv Detail & Related papers (2022-07-15T07:16:10Z)
- UIF: An Objective Quality Assessment for Underwater Image Enhancement [17.145844358253164]
We propose an Underwater Image Fidelity (UIF) metric for objective evaluation of enhanced underwater images.
By exploiting the statistical properties of enhanced underwater images, we extract naturalness-related, sharpness-related, and structure-related features.
Experimental results confirm that the proposed UIF outperforms a variety of underwater and general-purpose image quality metrics.
arXiv Detail & Related papers (2022-05-19T08:43:47Z)
- The Loop Game: Quality Assessment and Optimization for Low-Light Image Enhancement [50.29722732653095]
There is an increasing consensus that the design and optimization of low light image enhancement methods need to be fully driven by perceptual quality.
We propose a loop enhancement framework that produces a clear picture of how the enhancement of low-light images could be optimized towards better visual quality.
arXiv Detail & Related papers (2022-02-20T06:20:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.