Double Domain Guided Real-Time Low-Light Image Enhancement for
Ultra-High-Definition Transportation Surveillance
- URL: http://arxiv.org/abs/2309.08382v1
- Date: Fri, 15 Sep 2023 13:16:24 GMT
- Title: Double Domain Guided Real-Time Low-Light Image Enhancement for
Ultra-High-Definition Transportation Surveillance
- Authors: Jingxiang Qu, Ryan Wen Liu, Yuan Gao, Yu Guo, Fenghua Zhu, Fei-yue
Wang
- Abstract summary: This paper proposes a real-time low-light image enhancement network (DDNet) for ultra-high-definition (UHD) transportation surveillance.
In particular, the enhancement processing is divided into two subtasks (i.e., color enhancement and gradient enhancement) via the proposed coarse enhancement module and LoG-based gradient enhancement module.
Our DDNet provides superior enhancement quality and efficiency compared with the state-of-the-art methods.
- Score: 26.223557583420725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time transportation surveillance is an essential part of the intelligent
transportation system (ITS). However, images captured under low-light
conditions often suffer from poor visibility and various types of degradation,
such as noise interference and vague edge features. With the development of
imaging devices, the resolution of visual surveillance data continues to
increase, e.g., to 2K and 4K, which imposes stricter requirements on the
efficiency of image processing. To satisfy the requirements on both enhancement
quality and computational speed, this paper proposes a double domain guided
real-time low-light image enhancement network (DDNet) for ultra-high-definition
(UHD) transportation surveillance. Specifically, we design an encoder-decoder
structure as the main architecture of the learning network. In particular, the
enhancement processing is divided into two subtasks (i.e., color enhancement
and gradient enhancement) via the proposed coarse enhancement module (CEM) and
LoG-based gradient enhancement module (GEM), which are embedded in the
encoder-decoder structure. This enables the network to enhance the color and
edge features simultaneously. Through decomposition and reconstruction in both
the color and gradient domains, our DDNet can restore the detailed feature
information concealed by darkness with better visual quality and
efficiency. The evaluation experiments on standard and transportation-related
datasets demonstrate that our DDNet provides superior enhancement quality and
efficiency compared with the state-of-the-art methods. In addition, the object
detection and scene segmentation experiments demonstrate the practical benefits
for higher-level image analysis in low-light environments in ITS.
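As a rough illustration of the dual-domain idea described above, the following is a minimal, hypothetical PyTorch sketch: an encoder-decoder with a coarse color branch and a Laplacian-of-Gaussian (LoG) gradient branch whose outputs are fused before decoding. All module names, layer widths, and the fusion strategy are assumptions made for this sketch; it is not the authors' DDNet implementation.

```python
# Hypothetical sketch of a dual-domain (color + LoG gradient) encoder-decoder,
# loosely following the DDNet abstract. Layer sizes and fusion are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def log_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a Laplacian-of-Gaussian kernel used to extract edge/gradient maps."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx ** 2 + yy ** 2
    log = (r2 - 2 * sigma ** 2) / (sigma ** 4) * torch.exp(-r2 / (2 * sigma ** 2))
    return (log - log.mean()).view(1, 1, size, size)  # zero-mean LoG response


class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class DualDomainNetSketch(nn.Module):
    """Encoder-decoder with a coarse color branch and a LoG gradient branch."""

    def __init__(self, base=16):
        super().__init__()
        self.register_buffer("log_k", log_kernel().repeat(3, 1, 1, 1))
        self.encoder = ConvBlock(3, base)
        self.down = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.color_branch = ConvBlock(base * 2, base * 2)      # "CEM"-like branch
        self.grad_branch = ConvBlock(base * 2 + 3, base * 2)   # "GEM"-like branch
        self.up = nn.ConvTranspose2d(base * 4, base, 2, stride=2)
        self.decoder = nn.Sequential(ConvBlock(base * 2, base),
                                     nn.Conv2d(base, 3, 3, padding=1))

    def forward(self, x):
        # Depthwise LoG response of the low-light input (per RGB channel).
        grad = F.conv2d(x, self.log_k, padding=2, groups=3)
        e = self.encoder(x)
        d = F.relu(self.down(e))
        color = self.color_branch(d)
        grad_small = F.interpolate(grad, size=d.shape[-2:], mode="bilinear",
                                   align_corners=False)
        edges = self.grad_branch(torch.cat([d, grad_small], dim=1))
        u = self.up(torch.cat([color, edges], dim=1))
        out = self.decoder(torch.cat([u, e], dim=1))  # skip connection to encoder
        return torch.sigmoid(out)  # enhanced image in [0, 1]


if __name__ == "__main__":
    net = DualDomainNetSketch()
    y = net(torch.rand(1, 3, 256, 256))
    print(y.shape)  # torch.Size([1, 3, 256, 256])
```

Here the LoG filter is applied depthwise to the RGB input so the gradient branch receives explicit edge responses, a simple stand-in for the paper's CEM/GEM decomposition, which is presumably embedded at multiple scales of the actual network.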
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- TransY-Net: Learning Fully Transformer Networks for Change Detection of Remote Sensing Images [64.63004710817239]
We propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD.
It improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner.
Our proposed method achieves a new state-of-the-art performance on four optical and two SAR image CD benchmarks.
arXiv Detail & Related papers (2023-10-22T07:42:19Z)
- CDAN: Convolutional dense attention-guided network for low-light image enhancement [2.2530496464901106]
Low-light images pose challenges of diminished clarity, muted colors, and reduced details.
This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images.
CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections (a minimal illustrative sketch of these components appears after this list).
arXiv Detail & Related papers (2023-08-24T16:22:05Z)
- Transmission and Color-guided Network for Underwater Image Enhancement [8.894719412298397]
We propose an Adaptive Transmission and Dynamic Color guided network (named ATDCnet) for underwater image enhancement.
To exploit the knowledge of physics, we design an Adaptive Transmission-directed Module (ATM) to better guide the network.
To deal with the color deviation problem, we design a Dynamic Color-guided Module (DCM) to post-process the enhanced image color.
arXiv Detail & Related papers (2023-08-09T11:43:54Z)
- DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for Video-Empowered Intelligent Transportation [79.18450119567315]
Adverse weather conditions pose severe challenges for video-based transportation surveillance.
We propose a dual attention and dual frequency-guided dehazing network (termed DADFNet) for real-time visibility enhancement.
arXiv Detail & Related papers (2023-04-19T11:55:30Z)
- Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z)
- GDIP: Gated Differentiable Image Processing for Object-Detection in Adverse Conditions [15.327704761260131]
We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture.
Our proposed GDIP block learns to enhance images directly through the downstream object detection loss.
We demonstrate significant improvement in detection performance over several state-of-the-art methods.
arXiv Detail & Related papers (2022-09-29T16:43:13Z)
- Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination Conditions via Fourier Adversarial Networks [35.532434169432776]
We propose a lightweight two-stage image enhancement algorithm sequentially balancing illumination and noise removal.
We also propose a Fourier spectrum-based adversarial framework (AFNet) for consistent image enhancement under varying illumination conditions.
Based on quantitative and qualitative evaluations, we also examine the practicality and effects of image enhancement techniques on the performance of common perception tasks.
arXiv Detail & Related papers (2022-04-04T18:48:51Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework simultaneously estimates the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
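For the CDAN entry above, the components it names (an autoencoder backbone, dense convolutional blocks, channel attention, and a skip connection) can be illustrated with the following generic, hypothetical sketch; layer sizes and structure are assumptions for illustration only and do not reproduce CDAN's released code.

```python
# Generic illustrative sketch (not CDAN's implementation): an autoencoder-style
# enhancer with a dense block, a channel-attention gate, and a skip connection.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Two conv layers whose outputs are concatenated with the input (dense connectivity)."""
    def __init__(self, c, growth=16):
        super().__init__()
        self.c1 = nn.Conv2d(c, growth, 3, padding=1)
        self.c2 = nn.Conv2d(c + growth, growth, 3, padding=1)
        self.out_channels = c + 2 * growth

    def forward(self, x):
        y1 = torch.relu(self.c1(x))
        y2 = torch.relu(self.c2(torch.cat([x, y1], dim=1)))
        return torch.cat([x, y1, y2], dim=1)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: reweights channels by global statistics."""
    def __init__(self, c, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(inplace=True),
                                nn.Linear(c // r, c), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # (B, C) channel descriptors
        return x * w.unsqueeze(-1).unsqueeze(-1)  # broadcast gate over H, W


class AttentionAutoencoderSketch(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, base, 3, padding=1), nn.ReLU(inplace=True))
        self.dense = DenseBlock(base)
        self.att = ChannelAttention(self.dense.out_channels)
        # Skip connection: decoder sees attended features concatenated with encoder features.
        self.dec = nn.Conv2d(self.dense.out_channels + base, 3, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)
        f = self.att(self.dense(e))
        return torch.sigmoid(self.dec(torch.cat([f, e], dim=1)))


if __name__ == "__main__":
    print(AttentionAutoencoderSketch()(torch.rand(1, 3, 128, 128)).shape)
```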