Diffusion Models for Low-Light Image Enhancement: A Multi-Perspective Taxonomy and Performance Analysis
- URL: http://arxiv.org/abs/2510.05976v1
- Date: Tue, 07 Oct 2025 14:30:36 GMT
- Title: Diffusion Models for Low-Light Image Enhancement: A Multi-Perspective Taxonomy and Performance Analysis
- Authors: Eashan Adhikarla, Yixin Liu, Brian D. Davison
- Abstract summary: Low-light image enhancement (LLIE) is vital for safety-critical applications such as surveillance, autonomous navigation, and medical imaging. Diffusion models have emerged as a promising generative paradigm for LLIE due to their capacity to model complex image distributions via iterative denoising. This survey aims to guide the next generation of diffusion-based LLIE research by highlighting trends and surfacing open research questions.
- Score: 8.323736085126386
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-light image enhancement (LLIE) is vital for safety-critical applications such as surveillance, autonomous navigation, and medical imaging, where visibility degradation can impair downstream task performance. Recently, diffusion models have emerged as a promising generative paradigm for LLIE due to their capacity to model complex image distributions via iterative denoising. This survey provides an up-to-date critical analysis of diffusion models for LLIE, distinctively featuring an in-depth comparative performance evaluation against Generative Adversarial Network (GAN)- and Transformer-based state-of-the-art methods, a thorough examination of practical deployment challenges, and a forward-looking perspective on the role of emerging paradigms such as foundation models. We propose a multi-perspective taxonomy encompassing six categories (Intrinsic Decomposition, Spectral & Latent, Accelerated, Guided, Multimodal, and Autonomous) that maps enhancement methods across physical priors, conditioning schemes, and computational efficiency. Our taxonomy is grounded in a hybrid view of both the model mechanism and the conditioning signals. We evaluate qualitative failure modes, benchmark inconsistencies, and trade-offs between interpretability, generalization, and inference efficiency. We also discuss real-world deployment constraints (e.g., memory, energy use) and ethical considerations. This survey aims to guide the next generation of diffusion-based LLIE research by highlighting trends and surfacing open research questions, including novel conditioning, real-time adaptation, and the potential of foundation models.
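The "iterative denoising" mechanism the abstract refers to can be illustrated with a minimal sketch of a conditioned DDPM-style sampling loop. This is not the method of any surveyed paper: the noise schedule values are illustrative, and `toy_denoiser` is a hypothetical stand-in for a learned, conditioned noise predictor (in practice a U-Net taking the low-light image as a conditioning signal).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear noise schedule over T steps (assumed values).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t, cond):
    """Hypothetical stand-in for a learned noise predictor eps_theta(x_t, t, c).

    A real LLIE model would be a conditioned U-Net; here the "predicted noise"
    is simply the scaled gap between x_t and the conditioning image, just to
    make the loop runnable and pull samples toward the condition.
    """
    return (x_t - cond) / np.sqrt(1.0 - alpha_bars[t])

def conditional_ddpm_sample(cond):
    """Iterative denoising: start from Gaussian noise and apply the ancestral
    DDPM update x_{t-1} <- (x_t - beta_t/sqrt(1-abar_t) * eps) / sqrt(alpha_t),
    conditioned on the low-light input at every step."""
    x = rng.standard_normal(cond.shape)
    for t in reversed(range(T)):
        eps = toy_denoiser(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # add fresh noise at every step except the last
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

dark_patch = np.full((8, 8), 0.1)  # toy "low-light" conditioning image
enhanced = conditional_ddpm_sample(dark_patch)
print(enhanced.shape)
```

The conditioning scheme here (feeding the degraded image to the denoiser at every step) is the simplest of the signal-injection strategies the taxonomy above organizes; the surveyed methods differ mainly in how that signal is constructed and where it enters the network.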
Related papers
- Bias Detection and Rotation-Robustness Mitigation in Vision-Language Models and Generative Image Models [0.0]
Vision-Language Models (VLMs) and generative image models have achieved remarkable performance across multimodal tasks. This work investigates bias propagation and robustness in state-of-the-art vision-language and generative models. We propose rotation-robust mitigation strategies that combine data augmentation, representation alignment, and model-level regularization.
arXiv Detail & Related papers (2026-01-09T00:36:11Z) - Foundation Models and Transformers for Anomaly Detection: A Survey [2.3264194695971656]
The survey categorizes VAD methods into reconstruction-based, feature-based, and zero/few-shot approaches. Transformers and foundation models enable more robust, interpretable, and scalable anomaly detection solutions.
arXiv Detail & Related papers (2025-07-21T12:01:04Z) - Anomaly Detection and Generation with Diffusion Models: A Survey [51.61574868316922]
Anomaly detection (AD) plays a pivotal role across diverse domains, including cybersecurity, finance, healthcare, and industrial manufacturing. Recent advancements in deep learning, specifically diffusion models (DMs), have sparked significant interest. This survey aims to guide researchers and practitioners in leveraging DMs for innovative AD solutions across diverse applications.
arXiv Detail & Related papers (2025-06-11T03:29:18Z) - Consistent World Models via Foresight Diffusion [56.45012929930605]
We argue that a key bottleneck in learning consistent diffusion-based world models lies in suboptimal predictive ability. We propose Foresight Diffusion (ForeDiff), a diffusion-based world modeling framework that enhances consistency by decoupling condition understanding from target denoising.
arXiv Detail & Related papers (2025-05-22T10:01:59Z) - Methods and Trends in Detecting AI-Generated Images: A Comprehensive Review [0.17188280334580194]
Generative Adversarial Networks (GANs), Diffusion Models, and Variational Autoencoders (VAEs) have enabled the synthesis of high-quality multimedia data. These advancements have also raised significant concerns regarding adversarial attacks, unethical usage, and societal harm. This survey provides a comprehensive review of state-of-the-art techniques for detecting and classifying synthetic images generated by advanced generative AI models.
arXiv Detail & Related papers (2025-02-21T03:16:18Z) - Diffusion Models in Low-Level Vision: A Survey [82.77962165415153]
Diffusion model-based solutions have emerged as widely acclaimed for their ability to produce samples of superior quality and diversity. We present three generic diffusion modeling frameworks and explore their correlations with other deep generative models. We summarize extended diffusion models applied in other tasks, including medical, remote sensing, and video scenarios.
arXiv Detail & Related papers (2024-06-17T01:49:27Z) - Bridging Generative and Discriminative Models for Unified Visual Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil potential characteristics of Vermouth, such as varying granularity of perception concealed in latent variables at distinct time steps and various U-net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z) - Diffusion Models for Image Restoration and Enhancement: A Comprehensive Survey [73.86861112002593]
We present a comprehensive review of recent diffusion model-based methods for image restoration. We classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR. We propose five potential and challenging directions for future research on diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z) - IRGen: Generative Modeling for Image Retrieval [82.62022344988993]
In this paper, we present a novel methodology, reframing image retrieval as a variant of generative modeling.
We develop our model, dubbed IRGen, to address the technical challenge of converting an image into a concise sequence of semantic units.
Our model achieves state-of-the-art performance on three widely-used image retrieval benchmarks and two million-scale datasets.
arXiv Detail & Related papers (2023-03-17T17:07:36Z)