DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep
Inconsistency Prior
- URL: http://arxiv.org/abs/2303.06834v2
- Date: Wed, 19 Apr 2023 06:25:06 GMT
- Title: DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep
Inconsistency Prior
- Authors: Shuangping Jin, Bingbing Yu, Minhao Jing, Yi Zhou, Jiajun Liang, Renhe Ji
- Abstract summary: High-intensity noise in low-light images amplifies the effect of structure inconsistency between RGB-NIR images, which causes existing algorithms to fail.
We propose a new RGB-NIR fusion algorithm called Dark Vision Net (DVN) with two technical novelties: Deep Structure and Deep Inconsistency Prior (DIP).
Based on the deep structures from both the RGB and NIR domains, we introduce the DIP to leverage the structure inconsistency to guide the RGB-NIR fusion.
- Score: 6.162654963520402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: RGB-NIR fusion is a promising method for low-light imaging. However,
high-intensity noise in low-light images amplifies the effect of structure
inconsistency between RGB-NIR images, which causes existing algorithms to fail. To
handle this, we propose a new RGB-NIR fusion algorithm called Dark Vision Net
(DVN) with two technical novelties: Deep Structure and Deep Inconsistency Prior
(DIP). The Deep Structure extracts clear structure details in the deep multiscale
feature space rather than the raw input space, which is more robust to noisy
inputs. Based on the deep structures from both the RGB and NIR domains, we
introduce the DIP to leverage the structure inconsistency to guide the RGB-NIR
fusion. Benefiting from this, the proposed DVN obtains high-quality low-light
images without visual artifacts. We also propose a new dataset called the Dark
Vision Dataset (DVD), consisting of aligned RGB-NIR image pairs, as the first
public RGB-NIR fusion benchmark. Quantitative and qualitative results on the
proposed benchmark show that DVN significantly outperforms comparison
algorithms in PSNR and SSIM, especially under extremely low-light conditions.
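The abstract describes the Deep Inconsistency Prior only at a high level. As a rough illustration (not the authors' implementation), the sketch below shows how a structure-inconsistency map derived from RGB and NIR feature maps could gate the NIR contribution during fusion; the gradient-based structure extractor, the exponential soft gate, and the residual-style fusion are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def deep_structure(feat: torch.Tensor) -> torch.Tensor:
    """Crude stand-in for a 'deep structure' map: gradient magnitude of a
    feature tensor, averaged over channels; (B, C, H, W) -> (B, 1, H, W)."""
    gx = feat[..., :, 1:] - feat[..., :, :-1]      # horizontal differences
    gy = feat[..., 1:, :] - feat[..., :-1, :]      # vertical differences
    gx = F.pad(gx, (0, 1, 0, 0))                   # restore original width
    gy = F.pad(gy, (0, 0, 0, 1))                   # restore original height
    return (gx.abs() + gy.abs()).mean(dim=1, keepdim=True)

def dip_guided_fusion(rgb_feat: torch.Tensor,
                      nir_feat: torch.Tensor,
                      tau: float = 0.1) -> torch.Tensor:
    """Hypothetical DIP-style gating: where RGB and NIR structures disagree,
    down-weight the NIR features before fusing them with the RGB features."""
    s_rgb = deep_structure(rgb_feat)
    s_nir = deep_structure(nir_feat)
    inconsistency = (s_rgb - s_nir).abs()           # structure disagreement map
    consistency = torch.exp(-inconsistency / tau)   # ~1 where structures agree
    return rgb_feat + consistency * nir_feat        # simple residual-style fusion

# Toy usage with random single-scale features of matching shape.
rgb_feat = torch.randn(1, 64, 128, 128)
nir_feat = torch.randn(1, 64, 128, 128)
fused = dip_guided_fusion(rgb_feat, nir_feat)
print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

In the paper, the inconsistency prior is computed from multiscale deep structures inside the full DVN network; the fixed gradient filter and scalar threshold here only make the gating idea concrete.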
Related papers
- NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset [53.79524776100983] (2024-04-12)
Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows the potential to address this issue.
Existing works still struggle to take advantage of NIR information effectively for real-world image denoising.
We propose an efficient Selective Fusion Module (SFM), which can be plugged into advanced denoising networks.
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166] (2024-02-08)
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate instability during enhancement but also adapts to low-light images in different illumination ranges thanks to its trainable parameters.
- AGG-Net: Attention Guided Gated-convolutional Network for Depth Image Completion [1.8820731605557168] (2023-09-04)
We propose a new model for depth image completion based on the Attention Guided Gated-convolutional Network (AGG-Net).
In the encoding stage, an Attention Guided Gated-Convolution (AG-GConv) module is proposed to fuse depth and color features at different scales.
In the decoding stage, an Attention Guided Skip Connection (AG-SC) module is presented to avoid introducing too many depth-irrelevant features into the reconstruction.
- Attentive Multimodal Fusion for Optical and Scene Flow [24.08052492109655] (2023-07-28)
Existing methods typically rely solely on RGB images or fuse the modalities at later stages.
We propose a novel deep neural network approach named FusionRAFT, which enables early-stage information fusion between sensor modalities.
Our approach exhibits improved robustness in the presence of noise and low-lighting conditions that affect the RGB images.
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427] (2023-07-09)
Previous works mainly focus on low-light images captured in the visible spectrum using pixel-wise losses.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
- Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665] (2023-06-01)
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
- Visibility Constrained Wide-band Illumination Spectrum Design for Seeing-in-the-Dark [38.11468156313255] (2023-03-21)
Seeing-in-the-dark is one of the most important and challenging computer vision tasks.
In this paper, we try to robustify NIR2RGB translation by designing the optimal spectrum of auxiliary illumination in the wide-band VIS-NIR range.
- Near-Infrared Depth-Independent Image Dehazing using Haar Wavelets [13.561695463316031] (2022-03-26)
We propose a fusion algorithm for haze removal that combines color information from an RGB image and edge information extracted from its corresponding NIR image using Haar wavelets.
The proposed algorithm is based on the key observation that NIR edge features are more prominent in the hazy regions of the image than the RGB edge features in those same regions.
- Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images [89.81919625224103] (2022-01-01)
Training deep models for RGB-D salient object detection (SOD) often requires a large number of labeled RGB-D images.
We present a Dual-Semi RGB-D Salient Object Detection Network (DS-Net) to leverage unlabeled RGB images for boosting RGB-D saliency detection.
- Data-Level Recombination and Lightweight Fusion Scheme for RGB-D Salient Object Detection [73.31632581915201] (2020-08-07)
We propose a novel data-level recombination strategy to fuse RGB with D (depth) before deep feature extraction.
A newly designed lightweight triple-stream network is applied to these reformulated data to achieve optimal channel-wise complementary fusion between RGB and D.