Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and
Transformer-Based Method
- URL: http://arxiv.org/abs/2212.11548v1
- Date: Thu, 22 Dec 2022 09:05:07 GMT
- Title: Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and
Transformer-Based Method
- Authors: Tao Wang, Kaihao Zhang, Tianrun Shen, Wenhan Luo, Bjorn Stenger, Tong
Lu
- Abstract summary: We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
- Score: 51.30748775681917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the quality of optical sensors improves, there is a need for processing
large-scale images. In particular, the ability of devices to capture ultra-high
definition (UHD) images and video places new demands on the image processing
pipeline. In this paper, we consider the task of low-light image enhancement
(LLIE) and introduce a large-scale database consisting of images at 4K and 8K
resolution. We conduct systematic benchmarking studies and provide a comparison
of current LLIE algorithms. As a second contribution, we introduce LLFormer, a
transformer-based low-light enhancement method. The core components of LLFormer
are the axis-based multi-head self-attention and the cross-layer attention fusion
block, which reduce the complexity of self-attention to linear in the number of
pixels. Extensive experiments
on the new dataset and existing public datasets show that LLFormer outperforms
state-of-the-art methods. We also show that employing existing LLIE methods
trained on our benchmark as a pre-processing step significantly improves the
performance of downstream tasks, e.g., face detection in low-light conditions.
The source code and pre-trained models are available at
https://github.com/TaoWangzj/LLFormer.
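The axis-based attention named above can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: attention is applied along the height axis and then along the width axis, so the cost grows as O(HW(H+W)) rather than O((HW)^2) for full 2-D self-attention. A single head is used and normalization is omitted for brevity; all names are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axis_attention(seq, wq, wk, wv):
    # seq: (n, d) feature sequence taken along ONE spatial axis (single head)
    q, k, v = seq @ wq, seq @ wk, seq @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return attn @ v

def axis_based_block(x, wq, wk, wv):
    # x: (H, W, d) feature map. Attending along each axis separately costs
    # O(H*W*(H+W)) instead of O((H*W)**2) for full 2-D self-attention.
    H, W, _ = x.shape
    # height axis: each of the W columns is a length-H sequence
    x = x + np.stack([axis_attention(x[:, j], wq, wk, wv) for j in range(W)], axis=1)
    # width axis: each of the H rows is a length-W sequence
    x = x + np.stack([axis_attention(x[i], wq, wk, wv) for i in range(H)], axis=0)
    return x

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((16, 16, d))
wq, wk, wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
y = axis_based_block(x, wq, wk, wv)
print(y.shape)  # (16, 16, 8)
```

For a 4K frame the sequence lengths along each axis stay in the low thousands, which is what makes this factorization practical at UHD resolution.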
Related papers
- GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval [80.96706764868898]
We present a new Low-light Image Enhancement (LLIE) network via Generative LAtent feature based codebook REtrieval (GLARE)
We develop a generative Invertible Latent Normalizing Flow (I-LNF) module to align the low-light (LL) feature distribution to normal-light (NL) latent representations, guaranteeing correct code retrieval from the codebook.
Experiments confirm the superior performance of GLARE on various benchmark datasets and real-world data.
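The codebook retrieval that GLARE builds on can be illustrated with a standard vector-quantization lookup; this sketch omits the I-LNF alignment step, and all names are hypothetical: each feature is replaced by its nearest entry in a codebook of normal-light prototypes.

```python
import numpy as np

def codebook_retrieve(features, codebook):
    # features: (n, d) image features; codebook: (K, d) learned prototypes.
    # Standard vector-quantization lookup: each feature is replaced by its
    # nearest codebook entry under squared Euclidean distance.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(1)
codebook = rng.standard_normal((32, 4))  # 32 normal-light prototypes
# features near known codes should retrieve exactly those codes
feats = codebook[[3, 17, 5]] + 0.01 * rng.standard_normal((3, 4))
quantized, idx = codebook_retrieve(feats, codebook)
print(idx.tolist())  # [3, 17, 5]
```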
arXiv Detail & Related papers (2024-07-17T09:40:15Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Leveraging Representations from Intermediate Encoder-blocks for Synthetic Image Detection [13.840950434728533]
State-of-the-art Synthetic Image Detection (SID) research has led to strong evidence on the advantages of feature extraction from foundation models.
We leverage the image representations extracted by intermediate Transformer blocks of CLIP's image-encoder via a lightweight network.
Our method is compared against the state-of-the-art by evaluating it on 20 test datasets and exhibits an average +10.6% absolute performance improvement.
arXiv Detail & Related papers (2024-02-29T12:18:43Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Improving Pixel-based MIM by Reducing Wasted Modeling Capability [77.99468514275185]
We propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction.
To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures.
Our method yields significant performance gains, such as 1.2% on fine-tuning, 2.8% on linear probing, and 2.6% on semantic segmentation.
arXiv Detail & Related papers (2023-08-01T03:44:56Z)
- Super-Resolution of License Plate Images Using Attention Modules and Sub-Pixel Convolution Layers [3.8831062015253055]
We introduce a Single-Image Super-Resolution (SISR) approach to enhance the detection of structural and textural features in surveillance images.
Our approach incorporates sub-pixel convolution layers and a loss function that uses an Optical Character Recognition (OCR) model for feature extraction.
Our results show that our approach for reconstructing these low-resolution synthesized images outperforms existing ones in both quantitative and qualitative measures.
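The sub-pixel convolution layers mentioned above rely on a pixel-shuffle rearrangement; a minimal NumPy sketch of that step follows (the preceding convolution is omitted, and the names are illustrative).

```python
import numpy as np

def pixel_shuffle(x, r):
    # Rearrangement at the heart of sub-pixel convolution: a (C*r**2, H, W)
    # tensor becomes (C, H*r, W*r), trading channels for spatial resolution.
    c_r2, H, W = x.shape
    C = c_r2 // (r * r)
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)

x = np.arange(2 * 4 * 3 * 3, dtype=float).reshape(8, 3, 3)  # C=2, r=2
y = pixel_shuffle(x, 2)
print(y.shape)  # (2, 6, 6)
```

Each output pixel at position (h*r+a, w*r+b) comes from input channel c*r*r + a*r + b at position (h, w), matching the usual depth-to-space convention.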
arXiv Detail & Related papers (2023-05-27T00:17:19Z)
- Combining Attention Module and Pixel Shuffle for License Plate Super-Resolution [3.8831062015253055]
This work focuses on license plate (LP) reconstruction in low-resolution and low-quality images.
We present a Single-Image Super-Resolution (SISR) approach that extends the attention/transformer module concept.
In our experiments, the proposed method outperformed the baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-10-30T13:05:07Z)
- Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening of images in real-world low-light scenarios.
To address the computational burden of the cascaded pattern, we construct a self-calibrated module that enforces convergence between the results of each stage.
We make comprehensive explorations to SCI's inherent properties including operation-insensitive adaptability and model-irrelevant generality.
arXiv Detail & Related papers (2022-04-21T14:40:32Z)
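The illumination-based brightening that SCI centers on can be illustrated with a toy single-stage Retinex-style division; the learned illumination network and the self-calibrated cascade are omitted, and all names here are illustrative.

```python
import numpy as np

def estimate_illumination(img, eps=0.05):
    # Toy illumination estimate: per-pixel channel maximum, floored at eps.
    # SCI learns this map with a small network; this stand-in is illustrative.
    return np.clip(img.max(axis=-1, keepdims=True), eps, 1.0)

def retinex_enhance(img, eps=0.05):
    # Retinex-style brightening: divide by the estimated illumination, so
    # darker regions receive a proportionally stronger gain.
    return np.clip(img / estimate_illumination(img, eps), 0.0, 1.0)

dark = np.linspace(0.02, 0.4, 12).reshape(2, 2, 3)  # a dim RGB test patch
out = retinex_enhance(dark)
print(round(float(out.max()), 3))  # 1.0
```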
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.