Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized
Enhancer for Low-Light Images
- URL: http://arxiv.org/abs/2207.07317v1
- Date: Fri, 15 Jul 2022 07:16:10 GMT
- Title: Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized
Enhancer for Low-Light Images
- Authors: Naishan Zheng, Jie Huang, Qi Zhu, Man Zhou, Feng Zhao, Zheng-Jun Zha
- Abstract summary: We propose an intelligible unsupervised personalized enhancer (iUP-Enhancer) for low-light images.
The proposed iUP-Enhancer is trained with the guidance of these correlations and the corresponding unsupervised loss functions.
Experiments demonstrate that the proposed algorithm produces competitive qualitative and quantitative results.
- Score: 67.14410374622699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement is an inherently subjective process whose targets
vary with the user's aesthetic. Motivated by this, several personalized
enhancement methods have been investigated. However, the enhancement process
based on user preferences in these techniques is invisible, i.e., a "black
box". In this work, we propose an intelligible unsupervised personalized
enhancer (iUP-Enhancer) for low-light images, which establishes correlations
between the low-light image and unpaired reference images with regard to three
user-friendly attributes (brightness, chromaticity, and noise). The proposed
iUP-Enhancer is trained with the guidance of these correlations and the
corresponding unsupervised loss functions. Rather than a "black box" process,
our iUP-Enhancer presents an intelligible enhancement process with the above
attributes. Extensive experiments demonstrate that the proposed algorithm
produces competitive qualitative and quantitative results while maintaining
excellent flexibility and scalability. This can be validated by personalization
with single/multiple references, cross-attribute references, or by merely
adjusting parameters.
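The abstract describes the attribute-level guidance only in prose. As a rough illustration (not the paper's actual formulation), the Python sketch below computes global brightness and chromaticity statistics from an unpaired reference and penalizes the enhanced output's deviation from them; all function names and the choice of statistics are hypothetical.
```python
# A minimal sketch, assuming attribute targets are taken as global
# statistics of an unpaired reference image. Names are hypothetical;
# the paper's actual correlation and loss definitions may differ.
import torch

def brightness(img: torch.Tensor) -> torch.Tensor:
    """Mean luma of an RGB image in [0, 1], shape (B, 3, H, W)."""
    r, g, b = img[:, 0], img[:, 1], img[:, 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).mean(dim=(1, 2))

def chromaticity(img: torch.Tensor) -> torch.Tensor:
    """Per-channel ratios r/(r+g+b), averaged over pixels."""
    s = img.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return (img / s).mean(dim=(2, 3))            # (B, 3)

def attribute_loss(enhanced, reference):
    """Unsupervised loss pulling the enhanced image's global
    brightness/chromaticity toward the unpaired reference's."""
    l_bright = (brightness(enhanced) - brightness(reference)).abs().mean()
    l_chroma = (chromaticity(enhanced) - chromaticity(reference)).abs().mean()
    return l_bright + l_chroma

low = torch.rand(1, 3, 256, 256)    # stand-in low-light input
ref = torch.rand(1, 3, 256, 256)    # unpaired reference chosen by the user
print(attribute_loss(low, ref))
```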
Related papers
- Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors [38.96909959677438]
Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments.
Recent advanced LIE techniques rely on deep neural networks, which require large numbers of low-/normal-light image pairs, many network parameters, and substantial computational resources.
We devise a novel unsupervised LIE framework based on diffusion priors and lookup tables to achieve efficient low-light image recovery.
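For a flavor of the lookup-table side of this approach, here is a minimal sketch of a learnable per-channel 1D LUT applied with linear interpolation; the bin count and interpolation scheme are illustrative assumptions, not the paper's design.
```python
# A minimal sketch of a learnable per-channel 1D lookup table, the kind
# of lightweight operator LUT-based LIE methods optimize.
import torch

class LUT1D(torch.nn.Module):
    def __init__(self, bins: int = 33):
        super().__init__()
        # Initialize each channel's LUT to the identity curve.
        self.table = torch.nn.Parameter(
            torch.linspace(0, 1, bins).repeat(3, 1))  # (3, bins)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        """img: (B, 3, H, W) in [0, 1]; linear interpolation into the LUT."""
        bins = self.table.shape[1]
        x = img.clamp(0, 1) * (bins - 1)
        lo = x.floor().long().clamp(max=bins - 2)
        frac = x - lo.float()
        out = torch.empty_like(img)
        for c in range(3):  # per-channel lookup with linear interpolation
            t = self.table[c]
            out[:, c] = t[lo[:, c]] * (1 - frac[:, c]) + t[lo[:, c] + 1] * frac[:, c]
        return out

lut = LUT1D()
print(lut(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```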
arXiv Detail & Related papers (2024-09-27T16:37:27Z)
- Unsupervised Image Prior via Prompt Learning and CLIP Semantic Guidance for Low-Light Image Enhancement [25.97198463881292]
We propose to improve the zero-reference low-light enhancement method by leveraging the rich visual-linguistic CLIP prior.
We show that the proposed method leads to consistent improvements across various datasets regarding task-based performance.
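The paper learns its prompts; the sketch below instead uses fixed hand-written prompts to show the general shape of CLIP semantic guidance. It assumes OpenAI's `clip` package, and the prompt wording and loss form are assumptions.
```python
# A hedged sketch of CLIP guidance for zero-reference enhancement: score
# how well an enhanced image matches a "well-lit" text prompt versus a
# "dark" one, and penalize the dark-prompt probability. Assumes OpenAI's
# clip package (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
prompts = clip.tokenize(["a well-lit, natural photo",
                         "a dark, underexposed photo"]).to(device)

def clip_guidance_loss(enhanced: torch.Tensor) -> torch.Tensor:
    """enhanced: (B, 3, 224, 224), normalized as CLIP expects."""
    img_f = model.encode_image(enhanced)
    txt_f = model.encode_text(prompts)
    img_f = img_f / img_f.norm(dim=-1, keepdim=True)
    txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    sims = img_f @ txt_f.T                     # (B, 2)
    # Minimizing the "dark" probability favors the "well-lit" prompt.
    return torch.softmax(sims, dim=-1)[:, 1].mean()

# Example (requires the CLIP weights to have downloaded):
# x = torch.rand(1, 3, 224, 224).to(device)
# print(clip_guidance_loss(x))
```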
arXiv Detail & Related papers (2024-05-19T08:06:14Z)
- Debiasing Multimodal Large Language Models [61.6896704217147]
Large Vision-Language Models (LVLMs) have become indispensable tools in computer vision and natural language processing.
Our investigation reveals a noteworthy bias in the generated content, where the output is primarily influenced by the prior of the underlying Large Language Model (LLM) rather than by the input image.
To rectify these biases and redirect the model's focus toward vision information, we introduce two simple, training-free strategies.
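The summary does not name the two strategies; a generic training-free technique in the same spirit is contrastive decoding, sketched below with a dummy stand-in for the model call (`lvlm_logits` is hypothetical).
```python
# Training-free debiasing by contrastive decoding: damp the LLM's
# text-only prior by contrasting logits computed with and without the
# image. The model function here is a placeholder for a real LVLM call.
import torch

def lvlm_logits(tokens, image=None):
    torch.manual_seed(0 if image is None else 1)  # dummy model for the demo
    return torch.randn(32000)                     # vocab-sized logits

def debiased_next_token(tokens, image, alpha: float = 1.0):
    with_img = lvlm_logits(tokens, image)     # conditioned on the image
    text_only = lvlm_logits(tokens, None)     # the language-prior logits
    adjusted = (1 + alpha) * with_img - alpha * text_only
    return int(adjusted.argmax())

print(debiased_next_token([1, 2, 3], image="pixels"))
```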
arXiv Detail & Related papers (2024-03-08T12:35:07Z)
- Enlighten-Your-Voice: When Multimodal Meets Zero-shot Low-light Image Enhancement [25.073590934451055]
"Enlighten-Your-Voice" is a multimodal enhancement framework that innovatively enriches user interaction through voice and textual commands.
Our model is equipped with a Dual Collaborative Attention Module (DCAM) that attends separately to content and color discrepancies.
"Enlighten-Your-Voice" showcases remarkable generalization in unsupervised zero-shot scenarios.
arXiv Detail & Related papers (2023-12-15T06:57:05Z)
- Empowering Low-Light Image Enhancer through Customized Learnable Priors [40.83461757842304]
We propose a paradigm for low-light image enhancement that explores the potential of customized learnable priors.
Motivated by the powerful feature representation capability of Masked Autoencoder (MAE), we customize MAE-based illumination and noise priors.
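As a hedged illustration of the general pattern (a frozen pretrained encoder used as a prior regularizer), consider the sketch below; the encoder is a toy stand-in, and the paper's actual MAE-based illumination and noise priors are not specified in this summary.
```python
# Using a frozen encoder as a feature prior: penalize the distance
# between its features for the enhanced output and a target statistic.
import torch
import torch.nn as nn

mae_encoder = nn.Sequential(  # stand-in for a pretrained, frozen MAE encoder
    nn.Conv2d(3, 16, 4, stride=4), nn.GELU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> (B, 16) feature
for p in mae_encoder.parameters():
    p.requires_grad_(False)

def prior_loss(enhanced: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
    return (mae_encoder(enhanced) - target_feat).pow(2).mean()

x = torch.rand(2, 3, 64, 64)
print(prior_loss(x, torch.zeros(2, 16)))
```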
arXiv Detail & Related papers (2023-09-05T05:20:11Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
However, these approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
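The paper's estimator is its own design; for a concrete example of fast noise estimation, the sketch below implements the classic Laplacian-based estimator of Immerkaer (1996).
```python
# Fast noise-level estimation (Immerkaer, 1996): convolve with a
# Laplacian difference kernel and robustly average the absolute response.
import numpy as np
from scipy.signal import convolve2d

def estimate_noise_sigma(gray: np.ndarray) -> float:
    """gray: 2-D image in [0, 1]; returns an estimated Gaussian sigma."""
    k = np.array([[1, -2, 1],
                  [-2, 4, -2],
                  [1, -2, 1]], dtype=float)
    resp = convolve2d(gray, k, mode="valid")
    h, w = resp.shape
    return float(np.sqrt(np.pi / 2) * np.abs(resp).sum() / (6.0 * h * w))

img = np.clip(0.5 + 0.05 * np.random.randn(128, 128), 0, 1)
print(estimate_noise_sigma(img))   # ~0.05 for this synthetic example
```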
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- ALL-E: Aesthetics-guided Low-light Image Enhancement [45.40896781156727]
We propose a new paradigm, aesthetics-guided low-light image enhancement (ALL-E).
It introduces aesthetic preferences to LLE and motivates training in a reinforcement learning framework with an aesthetic reward.
Our results on various benchmarks demonstrate the superiority of ALL-E over state-of-the-art methods.
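A minimal sketch of "training with an aesthetic reward" via REINFORCE is shown below; the gamma-curve action space and the dummy scorer are placeholders, not ALL-E's design.
```python
# REINFORCE with an aesthetic reward: a policy samples a gamma-correction
# parameter, a (dummy) scorer rewards the result, and the policy updates.
import torch

log_gamma_mu = torch.zeros(1, requires_grad=True)   # policy parameter
opt = torch.optim.Adam([log_gamma_mu], lr=0.05)

def aesthetic_reward(img: torch.Tensor) -> float:
    return -abs(float(img.mean()) - 0.5)  # dummy: prefer mid-tone exposure

low = torch.rand(3, 64, 64) * 0.2                   # synthetic dark image
for _ in range(100):
    dist = torch.distributions.Normal(log_gamma_mu, 0.1)
    log_gamma = dist.sample()
    enhanced = low.pow(torch.exp(log_gamma))        # sampled gamma curve
    reward = aesthetic_reward(enhanced)
    loss = -dist.log_prob(log_gamma) * reward       # REINFORCE estimator
    opt.zero_grad(); loss.backward(); opt.step()

print(float(torch.exp(log_gamma_mu)))  # a gamma below 1 brightens
```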
arXiv Detail & Related papers (2023-04-28T03:34:10Z)
- Gap-closing Matters: Perceptual Quality Evaluation and Optimization of Low-Light Image Enhancement [55.8106019031768]
There is a growing consensus in the research community that the optimization of low-light image enhancement approaches should be guided by the visual quality perceived by end users.
We propose a gap-closing framework for assessing subjective and objective quality systematically.
We validate the effectiveness of our proposed framework through both the accuracy of quality prediction and the perceptual quality of image enhancement.
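The gap-closing idea, as summarized, amounts to optimizing the enhancer directly against a differentiable quality predictor; a toy sketch with placeholder networks:
```python
# Plug a frozen, differentiable quality predictor in as the training
# objective so the enhancer maximizes predicted perceptual quality.
import torch
import torch.nn as nn

enhancer = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())
quality_net = nn.Sequential(                 # frozen quality predictor
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
for p in quality_net.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
low = torch.rand(4, 3, 64, 64)
loss = -quality_net(enhancer(low)).mean()    # maximize predicted quality
opt.zero_grad(); loss.backward(); opt.step()
```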
arXiv Detail & Related papers (2023-02-22T15:57:03Z)
- Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening of images in real-world low-light scenarios.
Considering the computational burden of the cascaded pattern, we construct a self-calibrated module that encourages the results of each stage to converge.
We make comprehensive explorations to SCI's inherent properties including operation-insensitive adaptability and model-irrelevant generality.
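A rough sketch of the cascade-with-shared-weights idea as described follows; layer sizes and the calibration form are illustrative assumptions.
```python
# Weight-sharing illumination estimation applied in a cascade, with a
# calibration step feeding each stage's result back as the next input.
import torch
import torch.nn as nn

est = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())  # shared
calibrate = nn.Conv2d(3, 3, 3, padding=1)

def sci_forward(x: torch.Tensor, stages: int = 3) -> torch.Tensor:
    inp = x
    for _ in range(stages):                   # cascade with shared weights
        illum = est(inp).clamp(min=1e-3)      # estimated illumination map
        enhanced = (x / illum).clamp(0, 1)    # Retinex-style brightening
        inp = x + calibrate(enhanced)         # self-calibration of the input
    return enhanced

print(sci_forward(torch.rand(1, 3, 64, 64)).shape)
```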
arXiv Detail & Related papers (2022-04-21T14:40:32Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We propose a two-stage GAN-based framework that enhances real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
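A schematic sketch of a decoupled two-stage pipeline with per-stage discriminators, using toy stand-in networks for the paper's models:
```python
# Stage 1 enhances illumination, stage 2 suppresses noise; each stage
# gets its own discriminator for unpaired adversarial training.
import torch
import torch.nn as nn

g_light = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())
g_denoise = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())

def make_disc():
    return nn.Sequential(
        nn.Conv2d(3, 8, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(8, 1, 4, stride=2, padding=1),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

d_light, d_denoise = make_disc(), make_disc()
bce = nn.BCEWithLogitsLoss()

low = torch.rand(2, 3, 64, 64)          # unpaired low-light batch
bright = g_light(low)                   # stage 1: illumination enhancement
clean = g_denoise(bright)               # stage 2: noise reduction
# Generator loss: fool each stage's discriminator into predicting "real".
g_loss = bce(d_light(bright), torch.ones(2, 1)) + \
         bce(d_denoise(clean), torch.ones(2, 1))
print(float(g_loss))
```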
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.