Visibility Constrained Wide-band Illumination Spectrum Design for
Seeing-in-the-Dark
- URL: http://arxiv.org/abs/2303.11642v1
- Date: Tue, 21 Mar 2023 07:27:37 GMT
- Title: Visibility Constrained Wide-band Illumination Spectrum Design for
Seeing-in-the-Dark
- Authors: Muyao Niu, Zhuoxiao Li, Zhihang Zhong, Yinqiang Zheng
- Abstract summary: Seeing-in-the-dark is one of the most important and challenging computer vision tasks.
In this paper, we try to robustify NIR2RGB translation by designing the optimal spectrum of auxiliary illumination in the wide-band VIS-NIR range.
- Score: 38.11468156313255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Seeing-in-the-dark is one of the most important and challenging
computer vision tasks due to its wide applications and the extreme complexity
of in-the-wild scenarios. Existing approaches fall mainly into two threads: 1)
RGB-dependent methods restore information using degraded RGB inputs only
(e.g., low-light enhancement); 2) RGB-independent methods translate images
captured under auxiliary near-infrared (NIR) illuminants into the RGB domain
(e.g., NIR2RGB translation). The latter is very attractive since it works in
complete darkness and the illuminants are visually friendly to the naked eye,
but it tends to be unstable due to its intrinsic ambiguities. In this paper,
we try to robustify NIR2RGB translation by designing the optimal spectrum of
auxiliary illumination in the wide-band VIS-NIR range, while keeping visual
friendliness. Our core idea is to quantify the visibility constraint implied
by the human visual system and incorporate it into the design pipeline. By
modeling the image formation process in the VIS-NIR range, the optimal
multiplexing of a wide range of LEDs is designed automatically in a fully
differentiable manner, within the feasible region defined by the visibility
constraint. We also collect a substantially expanded VIS-NIR hyperspectral
image dataset for experiments, using a customized 50-band filter wheel.
Experimental results show that the task can be improved significantly by using
the optimized wide-band illumination rather than NIR illumination only. Code
available: https://github.com/MyNiuuu/VCSD.
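To make the optimization pattern concrete, below is a minimal, hypothetical
sketch of differentiable illumination-spectrum design under a visibility
budget. The LED spectra, the photopic luminosity curve V(lambda), the task
loss, and all hyperparameters are placeholders invented for illustration (the
paper would use measured spectra and the NIR2RGB network's loss), so this is a
sketch of the idea, not the authors' implementation.

import torch

B = 50                                   # spectral bands spanning VIS-NIR (e.g., 400-1000 nm)
K = 8                                    # number of candidate LEDs
torch.manual_seed(0)

led_spectra = torch.rand(K, B)           # placeholder LED emission spectra (rows = LEDs)
v_lambda = torch.zeros(B)                # placeholder photopic luminosity curve V(lambda):
v_lambda[:25] = torch.hann_window(25)    # the eye responds in VIS bands, not in NIR bands

logits = torch.zeros(K, requires_grad=True)      # unconstrained LED mixing parameters
optimizer = torch.optim.Adam([logits], lr=0.05)
visibility_budget = 0.05                 # upper bound on eye-perceived brightness

for step in range(500):
    weights = torch.softmax(logits, dim=0)       # nonnegative weights summing to one
    spectrum = weights @ led_spectra             # mixed illumination spectrum, shape (B,)
    task_loss = -spectrum[25:].sum()             # stand-in for the NIR2RGB network's loss
    visibility = (spectrum * v_lambda).sum()     # proxy for perceived brightness
    penalty = torch.relu(visibility - visibility_budget) ** 2   # soft visibility constraint
    loss = task_loss + 100.0 * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("optimized LED weights:", torch.softmax(logits, dim=0).detach())

A soft penalty is used here for brevity; projecting the spectrum back onto
the feasible region after each step would enforce the constraint exactly.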
Related papers
- NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset [53.79524776100983]
Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows the potential to address the shortcomings of RGB-only denoising.
Existing works still struggle to exploit NIR information effectively for real-world image denoising.
We propose an efficient Selective Fusion Module (SFM) that can be plugged into advanced denoising networks; a toy fusion sketch follows this entry.
arXiv Detail & Related papers (2024-04-12T14:54:26Z)
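A minimal, hypothetical sketch of selective NIR-RGB feature fusion in the
spirit of the SFM above: a learned per-pixel gate decides where NIR features
should override RGB features. The module name, shapes, and layer layout are
assumptions for illustration, not the paper's design.

import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    """Predicts a per-pixel gate from both modalities, then mixes them."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),                              # gate values in [0, 1]
        )

    def forward(self, rgb_feat, nir_feat):
        g = self.gate(torch.cat([rgb_feat, nir_feat], dim=1))
        return g * nir_feat + (1 - g) * rgb_feat       # selectively inject NIR detail

fuse = SelectiveFusion(64)
out = fuse(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))
print(out.shape)    # torch.Size([1, 64, 32, 32])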
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore detail and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters; a toy trainable color transform follows this entry.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
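Below is a toy "trainable color space" in the spirit of the HVI idea above: a
learnable intensity axis plus a learnable chroma mixing, so brightness and
color can be handled separately during enhancement. The class and its
4-channel output are invented for illustration; the actual HVI transform is
defined differently in the paper.

import torch
import torch.nn as nn

class TrainableColorSpace(nn.Module):
    def __init__(self):
        super().__init__()
        self.mix = nn.Parameter(torch.eye(3))      # learnable chroma mixing matrix
        self.k = nn.Parameter(torch.ones(1))       # learnable intensity scaling

    def forward(self, rgb):                        # rgb: (N, 3, H, W)
        intensity = rgb.max(dim=1, keepdim=True).values        # brightness axis
        chroma = torch.einsum('ij,njhw->nihw', self.mix, rgb - intensity)
        return torch.cat([self.k * intensity, chroma], dim=1)  # 4-channel code

csp = TrainableColorSpace()
code = csp(torch.rand(2, 3, 64, 64))
print(code.shape)   # torch.Size([2, 4, 64, 64])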
- Hypergraph-Guided Disentangled Spectrum Transformer Networks for Near-Infrared Facial Expression Recognition [31.783671943393344]
We make the first attempt at deep NIR facial expression recognition and propose a novel method called the Near-infrared Facial Expression Transformer (NFER-Former).
NFER-Former disentangles expression information from spectrum information in the input image, so that expression features can be extracted without interference from spectrum variation.
We have also constructed a large NIR-VIS facial expression dataset covering 360 subjects to better validate the effectiveness of NFER-Former.
arXiv Detail & Related papers (2023-12-10T15:15:50Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach under diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- AGG-Net: Attention Guided Gated-convolutional Network for Depth Image Completion [1.8820731605557168]
We propose a new model for depth image completion based on the Attention Guided Gated-convolutional Network (AGG-Net).
In the encoding stage, an Attention Guided Gated-Convolution (AG-GConv) module is proposed to fuse depth and color features at different scales; a toy gated-fusion sketch follows this entry.
In the decoding stage, an Attention Guided Skip Connection (AG-SC) module is presented to avoid introducing too many depth-irrelevant features into the reconstruction.
arXiv Detail & Related papers (2023-09-04T14:16:08Z)
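A toy gated-convolution fusion in the spirit of AG-GConv above: color
features modulate, through a learned sigmoid gate, which depth features pass
through. The layer layout is an assumption for illustration, not the paper's
exact module.

import torch
import torch.nn as nn

class GatedDepthConv(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.feat = nn.Conv2d(ch, ch, 3, padding=1)        # candidate depth features
        self.gate = nn.Conv2d(2 * ch, ch, 3, padding=1)    # gate sees depth + color

    def forward(self, depth_feat, color_feat):
        g = torch.sigmoid(self.gate(torch.cat([depth_feat, color_feat], dim=1)))
        return self.feat(depth_feat) * g                   # color-guided gating

layer = GatedDepthConv(32)
y = layer(torch.rand(1, 32, 48, 48), torch.rand(1, 32, 48, 48))
print(y.shape)   # torch.Size([1, 32, 48, 48])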
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous works mainly focus on low-light images captured in the visible spectrum, using pixel-wise losses.
We propose a novel approach to increase the visibility of images captured in low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep Inconsistency Prior [6.162654963520402]
High-intensity noise in low-light images amplifies the structural inconsistency between RGB and NIR images, which causes existing algorithms to fail.
We propose a new RGB-NIR fusion algorithm called Dark Vision Net (DVN) with two technical novelties: Deep Structure and a Deep Inconsistency Prior (DIP).
Based on the deep structures from both the RGB and NIR domains, we introduce the DIP to leverage structural inconsistency to guide RGB-NIR fusion; a rough consistency-weighted fusion sketch follows this entry.
arXiv Detail & Related papers (2023-03-13T03:31:29Z)
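A rough sketch of consistency-weighted RGB-NIR fusion loosely following the
DIP idea above: where the two modalities' structure maps disagree, the NIR
contribution is down-weighted. The paper derives its structures from deep
features; this toy uses simple image gradients instead.

import torch
import torch.nn.functional as F

def grad_mag(x):
    # finite-difference gradient magnitude as a cheap "structure" map
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    dx = F.pad(dx, (0, 1, 0, 0))           # pad back to the input width
    dy = F.pad(dy, (0, 0, 0, 1))           # pad back to the input height
    return (dx ** 2 + dy ** 2).sqrt()

def fuse(rgb_gray, nir):
    s_rgb, s_nir = grad_mag(rgb_gray), grad_mag(nir)
    consistency = 1.0 - (s_rgb - s_nir).abs().clamp(0, 1)     # low where structures disagree
    return consistency * nir + (1 - consistency) * rgb_gray   # trust NIR only where consistent

rgb_gray = torch.rand(1, 1, 64, 64)        # luminance of the noisy RGB frame
nir = torch.rand(1, 1, 64, 64)             # aligned NIR frame
print(fuse(rgb_gray, nir).shape)           # torch.Size([1, 1, 64, 64])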
- Unsupervised Visible-light Images Guided Cross-Spectrum Depth Estimation from Dual-Modality Cameras [33.77748026254935]
Cross-spectrum depth estimation aims to provide a depth map under all illumination conditions from a pair of dual-spectrum images.
In this paper, we propose an unsupervised visible-light-image-guided cross-spectrum (i.e., thermal and visible-light, TIR-VIS for short) depth estimation framework.
Our method achieves better performance than the existing methods it is compared against.
arXiv Detail & Related papers (2022-04-30T12:58:35Z)
- An Integrated Enhancement Solution for 24-hour Colorful Imaging [51.782600936647235]
Current industry practice for 24-hour outdoor imaging is to use a silicon camera supplemented with near-infrared (NIR) illumination.
This results in color images with poor contrast in the daytime and an absence of chrominance at nighttime.
We propose a novel and integrated enhancement solution that produces clear color images, whether in abundant daytime sunlight or extremely low-light nighttime conditions.
arXiv Detail & Related papers (2020-05-10T05:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.