Low Light Image Enhancement via Global and Local Context Modeling
- URL: http://arxiv.org/abs/2101.00850v1
- Date: Mon, 4 Jan 2021 09:40:54 GMT
- Title: Low Light Image Enhancement via Global and Local Context Modeling
- Authors: Aditya Arora, Muhammad Haris, Syed Waqas Zamir, Munawar Hayat, Fahad
Shahbaz Khan, Ling Shao, Ming-Hsuan Yang
- Abstract summary: We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
- Score: 164.85287246243956
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Images captured under low-light conditions exhibit poor visibility and lack
contrast and color vividness. Compared to conventional approaches, deep
convolutional neural networks (CNNs) perform well in enhancing images. However,
relying solely on spatially confined, fixed primitives to model dependencies,
existing data-driven deep models do not exploit contexts at various spatial
scales for low-light image enhancement. These contexts can be crucial
for several image enhancement tasks, e.g., local and global
contrast, brightness, and color corrections, which require cues from both local
and global spatial extents. To this end, we introduce a context-aware deep
network for low-light image enhancement. First, it features a global context
module that models spatial correlations to find complementary cues over the full
spatial domain. Second, it introduces a dense residual block that captures
local context with a relatively large receptive field. We evaluate the proposed
approach on three challenging datasets: MIT-Adobe FiveK, LoL, and SID. On
all these datasets, our method performs favorably against the state of the art
in terms of standard image fidelity metrics. In particular, compared to the
best performing method on the MIT-Adobe FiveK dataset, our algorithm improves
PSNR from 23.04 dB to 24.45 dB.
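To make the two components concrete, here is a minimal PyTorch sketch of the two ideas: a global context module that pools attention-weighted features over the full spatial domain (written here in the GCNet-style formulation, which the paper's exact design may differ from), and a dense residual block whose densely connected convolutions enlarge the local receptive field. All layer names and sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GlobalContextModule(nn.Module):
    """Pools attention-weighted features over the full spatial domain and
    injects the resulting global descriptor back into every position.
    (GCNet-style formulation; the paper's exact design may differ.)"""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # per-position logits
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Softmax over all H*W positions yields a global attention map.
        w_attn = self.attn(x).view(b, 1, h * w).softmax(dim=-1)
        context = torch.bmm(x.view(b, c, h * w), w_attn.transpose(1, 2))
        context = self.transform(context.view(b, c, 1, 1))
        return x + context  # broadcast the global cue to every position

class DenseResidualBlock(nn.Module):
    """Captures local context with an enlarged receptive field: each conv
    sees the concatenation of all previous features (sizes illustrative)."""
    def __init__(self, channels, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        ])
        self.fuse = nn.Conv2d(channels + layers * growth, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection
```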
Related papers
- United Domain Cognition Network for Salient Object Detection in Optical Remote Sensing Images [21.76732661032257]
We propose a novel United Domain Cognition Network (UDCNet) to jointly explore global-local information in the frequency and spatial domains.
Experimental results demonstrate the superiority of the proposed UDCNet over 24 state-of-the-art models.
arXiv Detail & Related papers (2024-11-11T04:12:27Z)
- Coherent and Multi-modality Image Inpainting via Latent Space Optimization [61.99406669027195]
PILOT (inPainting vIa Latent OpTimization) is an optimization approach grounded on novel semantic centralization and background preservation losses.
Our method searches latent spaces capable of generating inpainted regions that exhibit high fidelity to user-provided prompts while maintaining coherence with the background.
arXiv Detail & Related papers (2024-07-10T19:58:04Z)
- Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach [7.974102031202597]
We propose a real-world (indoor and outdoor) dataset comprising over 30K pairs of images and events under both low and normal illumination conditions.
Based on the dataset, we propose a novel event-guided low-light image enhancement (LIE) approach, called EvLight, towards robust performance in real-world low-light scenes.
arXiv Detail & Related papers (2024-04-01T00:18:17Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually produce large dark areas that carry little information.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacity of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction, as sketched below.
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
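The Taylor-series trick can be sketched in a few lines: since x^gamma = exp(gamma * ln x), a truncated expansion of exp(u) = sum_k u^k / k! avoids the costly exponential. This is a minimal sketch of the idea, not the paper's actual formulation; the expansion order and the stabilization constant are assumptions.

```python
import torch

def gamma_taylor(x, gamma, terms=6):
    """Approximate x ** gamma = exp(gamma * ln x) with a truncated Taylor
    expansion of exp(u), avoiding the exponential op (order is illustrative).
    x: pixel intensities in (0, 1]; gamma may be a scalar or per-pixel tensor."""
    u = gamma * torch.log(x.clamp(min=1e-6))  # clamp keeps log finite
    out = torch.ones_like(u)
    term = torch.ones_like(u)
    for k in range(1, terms + 1):
        term = term * u / k  # builds u^k / k! incrementally
        out = out + term
    return out

# Quick sanity check against the exact power (hypothetical values):
x = torch.rand(4, 3, 8, 8).clamp(min=0.05)
print(torch.allclose(gamma_taylor(x, 0.45, terms=10), x ** 0.45, atol=1e-3))
```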
- RSINet: Inpainting Remotely Sensed Images Using Triple GAN Framework [13.613245876782367]
We propose a novel inpainting method that individually focuses on each aspect of an image, such as edges, colour, and texture.
Each individual GAN also incorporates an attention mechanism that explicitly extracts spectral and spatial features.
We evaluate our model, along with previous state-of-the-art models, on two well-known remote sensing datasets, Open Cities AI and Earth on Canvas.
arXiv Detail & Related papers (2022-02-12T05:19:37Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset (see the sketch below).
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
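As a rough illustration of a neural light field, the sketch below maps a two-plane ray parameterization directly to integrated radiance with a single MLP evaluation, instead of integrating many point samples along the ray. The Fourier-feature embedding merely stands in for the paper's learned ray-space embedding, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class TinyLightField(nn.Module):
    """Maps a ray straight to integrated radiance (RGB) in one forward pass.
    Rays use a two-plane (u, v, s, t) parameterization; the Fourier embedding
    is a stand-in for the paper's learned ray-space embedding."""
    def __init__(self, freqs=8, hidden=256):
        super().__init__()
        self.register_buffer("bands", 2.0 ** torch.arange(freqs) * torch.pi)
        in_dim = 4 * freqs * 2  # sin & cos per band per ray coordinate
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rays_uvst):                # (N, 4)
        x = rays_uvst[..., None] * self.bands    # (N, 4, freqs)
        emb = torch.cat([x.sin(), x.cos()], dim=-1).flatten(1)
        return self.mlp(emb)

rays = torch.rand(1024, 4)    # hypothetical batch of rays
rgb = TinyLightField()(rays)  # (1024, 3): one network evaluation per ray
```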
- HCNet: Hierarchical Context Network for Semantic Segmentation [6.4047628200011815]
We propose a hierarchical context network to model homogeneous pixels with strong correlations and heterogeneous pixels with weak correlations.
Our approach achieves a mean IoU of 82.8% on Cityscapes and an overall accuracy of 91.4% on the ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-10-10T09:51:17Z)
- A U-Net Based Discriminator for Generative Adversarial Networks [86.67102929147592]
We propose an alternative U-Net based discriminator architecture for generative adversarial networks (GANs).
The proposed architecture provides detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images.
The novel discriminator improves over the state of the art in terms of standard distribution and image quality metrics (a minimal sketch follows below).
arXiv Detail & Related papers (2020-02-28T11:16:54Z)
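A minimal sketch of the U-Net discriminator idea: one head scores the whole image from the encoder bottleneck while a decoder head scores every pixel, so the generator receives both global and per-pixel feedback. Layer counts and widths here are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class UNetDiscriminator(nn.Module):
    """Two-headed discriminator: an image-level real/fake score from the
    encoder bottleneck plus a per-pixel real/fake map from the decoder.
    (Depth and widths are illustrative assumptions.)"""
    def __init__(self, ch=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.global_head = nn.Linear(ch * 2, 1)  # image-level decision
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.pixel_head = nn.ConvTranspose2d(ch * 2, 1, 4, 2, 1)  # per-pixel map

    def forward(self, img):
        e1 = self.enc1(img)                       # (B, ch,  H/2, W/2)
        e2 = self.enc2(e1)                        # (B, 2ch, H/4, W/4)
        g = self.global_head(e2.mean(dim=(2, 3)))         # (B, 1) global score
        d1 = self.dec1(e2)                        # (B, ch,  H/2, W/2)
        p = self.pixel_head(torch.cat([d1, e1], dim=1))   # (B, 1, H, W)
        return g, p
```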
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields (sketched below).
To better train this efficient generator, in addition to the frequently used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
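The dense dilated design can be sketched as follows: each 3x3 convolution sees the concatenation of all previous features, and growing dilation rates expand the effective receptive field without extra depth. This is an illustrative sketch, not the paper's exact block; the growth rate and dilation schedule are assumptions.

```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Dense combination of dilated 3x3 convolutions: dense connections reuse
    all earlier features while increasing dilation rates enlarge the
    receptive field (rates and growth are illustrative)."""
    def __init__(self, channels, growth=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(channels + i * growth, growth, 3, padding=d, dilation=d)
            for i, d in enumerate(dilations)
        ])
        self.fuse = nn.Conv2d(channels + len(dilations) * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return self.fuse(torch.cat(feats, dim=1))
```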
This list is automatically generated from the titles and abstracts of the papers on this site.