Bridge the Vision Gap from Field to Command: A Deep Learning Network
Enhancing Illumination and Details
- URL: http://arxiv.org/abs/2101.08039v1
- Date: Wed, 20 Jan 2021 09:39:57 GMT
- Title: Bridge the Vision Gap from Field to Command: A Deep Learning Network
Enhancing Illumination and Details
- Authors: Zhuqing Jiang, Chang Liu, Ya'nan Wang, Kai Li, Aidong Men, Haiying
Wang, Haiyong Luo
- Abstract summary: We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: Light Enhancement (LE), Detail Refinement (DR), and Feature Fusing (FF) modules.
- Score: 17.25188250076639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the goal of tuning up the brightness, low-light image enhancement enjoys
numerous applications, such as surveillance, remote sensing and computational
photography. Images captured under low-light conditions often suffer from poor
visibility and blur. Solely brightening the dark regions inevitably amplifies the blur and may thus lead to detail loss. In this paper, we propose a simple yet effective two-stream framework named NEID that tunes up the brightness and enhances the details simultaneously without introducing much computational cost. Precisely, the proposed method consists of three parts: a Light Enhancement (LE) module, a Detail Refinement (DR) module, and a Feature Fusing (FF) module, which aggregates composite features oriented to multiple tasks based on a channel attention mechanism. Extensive experiments conducted on several benchmark
datasets demonstrate the efficacy of our method and its superiority over
state-of-the-art methods.
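The paper is summarized here from its abstract only, so the following is a minimal, hypothetical PyTorch sketch of a channel-attention feature-fusing block of the kind the FF module describes; the class name, tensor shapes, and reduction ratio are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Hypothetical sketch: fuse light-enhancement (LE) and detail-refinement
    (DR) feature streams with squeeze-and-excitation-style channel attention."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global context per channel
        self.excite = nn.Sequential(                     # excite: per-channel weights in (0, 1)
            nn.Conv2d(2 * channels, 2 * channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)  # fuse back to C channels

    def forward(self, feat_le: torch.Tensor, feat_dr: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_le, feat_dr], dim=1)         # (B, 2C, H, W)
        w = self.excite(self.pool(x))                    # (B, 2C, 1, 1) channel weights
        return self.project(x * w)                       # re-weighted, fused features

# usage: fuse = ChannelAttentionFusion(64); out = fuse(le_feats, dr_feats)
```

In a two-stream design of this kind, such a block lets both the brightness and detail objectives influence which channels dominate the fused representation.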
Related papers
- Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving [45.97279394690308]
LightDiff is a framework designed to enhance the low-light image quality for autonomous driving applications.
It incorporates a novel multi-condition adapter that adaptively controls the input weights from different modalities, including depth maps, RGB images, and text captions (a sketch of such adaptive weighting follows this entry).
It can significantly improve the performance of several state-of-the-art 3D detectors in night-time conditions while achieving high visual quality scores.
arXiv Detail & Related papers (2024-04-07T04:10:06Z)
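As a rough illustration of the adaptive modality weighting described above, here is a hypothetical sketch; it is not LightDiff's actual adapter, and the embedding shapes and softmax gating are assumptions.

```python
import torch
import torch.nn as nn

class MultiConditionGate(nn.Module):
    """Hypothetical sketch: learn one weight per condition modality
    (e.g. depth, RGB, text caption embeddings) and mix them."""
    def __init__(self, dim: int, num_conditions: int = 3):
        super().__init__()
        # predict one scalar weight per modality from the concatenated embeddings
        self.scorer = nn.Linear(num_conditions * dim, num_conditions)

    def forward(self, conds: list[torch.Tensor]) -> torch.Tensor:
        # conds: list of (B, dim) condition embeddings, one per modality
        pooled = torch.cat(conds, dim=-1)                    # (B, num_conditions * dim)
        weights = torch.softmax(self.scorer(pooled), dim=-1) # (B, num_conditions)
        stacked = torch.stack(conds, dim=1)                  # (B, num_conditions, dim)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)  # weighted mix, (B, dim)
```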
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily leads to over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance (a toy sketch of such a frequency split follows this entry).
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
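The latent low-/high-frequency decomposition named above can be illustrated with a simple Gaussian blur split; this toy sketch conveys the general idea only and is not LDM-ISP's taming modules.

```python
import torch
import torch.nn.functional as F

def split_frequencies(img: torch.Tensor, kernel_size: int = 21, sigma: float = 5.0):
    """Toy frequency split: low frequencies via separable Gaussian blur,
    high frequencies as the residual detail layer."""
    half = kernel_size // 2
    x = torch.arange(-half, half + 1, dtype=img.dtype, device=img.device)
    g = torch.exp(-(x ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    c = img.shape[1]
    kh = g.view(1, 1, 1, -1).repeat(c, 1, 1, 1)   # horizontal pass, one kernel per channel
    kv = g.view(1, 1, -1, 1).repeat(c, 1, 1, 1)   # vertical pass
    low = F.conv2d(img, kh, padding=(0, half), groups=c)
    low = F.conv2d(low, kv, padding=(half, 0), groups=c)
    return low, img - low                          # (low-frequency, high-frequency) pair
```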
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- CDAN: Convolutional dense attention-guided network for low-light image enhancement [2.2530496464901106]
Low-light images pose challenges of diminished clarity, muted colors, and reduced details.
This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images.
CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections.
arXiv Detail & Related papers (2023-08-24T16:22:05Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction (a sketch of the approximation follows this entry).
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
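To make the Taylor-series idea above concrete: since x^γ = exp(γ ln x), truncating the Taylor expansion of exp avoids the costly power operation. This sketch shows the general technique under that assumption; the paper's exact formulation may differ.

```python
import torch

def gamma_taylor(x: torch.Tensor, gamma: torch.Tensor, terms: int = 6) -> torch.Tensor:
    """Approximate x ** gamma via exp(gamma * ln x) ~= sum_k (gamma * ln x)**k / k!."""
    z = gamma * torch.log(x.clamp_min(1e-6))  # clamp avoids log(0)
    out = torch.ones_like(x)                  # k = 0 term
    term = torch.ones_like(x)
    for k in range(1, terms):
        term = term * z / k                   # builds z**k / k! incrementally
        out = out + term
    return out

x = torch.rand(1, 3, 8, 8)
approx = gamma_taylor(x, torch.tensor(0.45))
exact = x.clamp_min(1e-6) ** 0.45
print((approx - exact).abs().max())           # error shrinks as `terms` grows
```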
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous works mainly focus on low-light images captured in the visible spectrum using pixel-wise losses.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- Learnable Differencing Center for Nighttime Depth Perception [39.455428679154934]
We propose a simple yet effective framework called LDCNet.
Our key idea is to use Recurrent Inter-Convolution Differencing (RICD) and Illumination-Affinitive Intra-Convolution Differencing (IAICD) to enhance the nighttime color images.
On both nighttime depth completion and depth estimation tasks, extensive experiments demonstrate the effectiveness of our LDCNet.
arXiv Detail & Related papers (2023-06-26T09:21:13Z)
- Perceptual Multi-Exposure Fusion [0.5076419064097732]
This paper presents a perceptual multi-exposure fusion method that ensures fine shadow/highlight details but with lower complexity than detail-enhanced methods.
We build a large-scale multi-exposure benchmark dataset suitable for static scenes, which contains 167 image sequences.
Experiments on the constructed dataset demonstrate that the proposed method outperforms eight existing state-of-the-art approaches both visually and in MEF-SSIM value.
arXiv Detail & Related papers (2022-10-18T05:34:58Z)
- Low-Light Video Enhancement with Synthetic Event Guidance [188.7256236851872]
We use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos (a toy sketch of event synthesis follows this entry).
Our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets.
arXiv Detail & Related papers (2022-08-23T14:58:29Z)
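As a toy illustration of deriving event-like signals from consecutive frames (one assumed reading of "synthetic events"; the paper's generation pipeline is not described in this summary):

```python
import torch

def synthetic_events(frames: torch.Tensor, threshold: float = 0.1) -> torch.Tensor:
    """Toy event synthesis: an event fires where log-intensity changes
    between consecutive frames exceed a threshold, with +/-1 polarity."""
    log_i = torch.log(frames.clamp_min(1e-6))   # (T, C, H, W) log intensity
    diff = log_i[1:] - log_i[:-1]               # temporal differences
    return torch.sign(diff) * (diff.abs() > threshold)  # +1 / -1 / 0 event maps
```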
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)