Visible and NIR Image Fusion Algorithm Based on Information
Complementarity
- URL: http://arxiv.org/abs/2309.10522v1
- Date: Tue, 19 Sep 2023 11:07:24 GMT
- Title: Visible and NIR Image Fusion Algorithm Based on Information
Complementarity
- Authors: Zhuo Li, Bo Li
- Abstract summary: Current visible and NIR fusion algorithms neither exploit the spectrum properties of the two bands well nor achieve information complementarity.
This paper designs a complementary fusion model at the level of physical signals.
The proposed algorithm exploits both the spectrum properties and the information complementarity, and avoids unnatural color while maintaining naturalness.
- Score: 10.681833882330508
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Visible and near-infrared (NIR) band sensors provide images that capture
complementary spectral radiation from a scene, and the fusion of visible and
NIR images aims to exploit their spectrum properties to enhance image
quality. However, current visible and NIR fusion algorithms neither exploit
these spectrum properties well nor achieve information complementarity, which
results in color distortion and artifacts. Therefore, this paper designs a
complementary fusion model at the level of physical signals. First, to
distinguish noise from useful information, we apply two stages of filtering,
a weight-guided filter and a guided filter, to obtain the texture and edge
layers, respectively. Second, to generate the initial visible-NIR
complementarity weight map, the difference maps of the visible and NIR images
are filtered by an extended-DoG filter. After that, the significant region of
NIR night-time compensation guides the initial complementarity weight map
through the arctan(I) function. Finally, the fused images are generated from
the complementarity weight maps of the visible and NIR images, respectively.
The experimental results demonstrate that the proposed algorithm not only
exploits the spectrum properties and the information complementarity well,
but also avoids unnatural color while maintaining naturalness, outperforming
the state-of-the-art.
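The weight-map construction and fusion steps described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: plain Gaussian smoothing and a simple difference-of-Gaussians stand in for the weight-guided and extended-DoG filters, and the sigmoid/arctan weight formulas, parameters, and function names are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(img, sigma_small=1.0, sigma_large=2.0):
    """Difference-of-Gaussians band-pass response (stand-in for extended-DoG)."""
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

def complementary_fusion(vis, nir):
    """Simplified complementarity-weighted fusion of grayscale VIS/NIR images
    in [0, 1]."""
    # Initial complementarity weight from the filtered VIS-NIR difference map
    diff = dog(nir - vis)
    weight = 1.0 / (1.0 + np.exp(-diff))          # squash to (0, 1)
    # arctan-shaped night-time compensation: raise the NIR weight where NIR is
    # significantly brighter than the visible band
    guide = (2.0 / np.pi) * np.arctan(4.0 * np.clip(nir - vis, 0.0, None))
    weight = np.clip(weight + guide * (1.0 - weight), 0.0, 1.0)
    # Complementary weighted combination of the two bands
    return (1.0 - weight) * vis + weight * nir

rng = np.random.default_rng(0)
vis = rng.random((64, 64))
nir = rng.random((64, 64))
fused = complementary_fusion(vis, nir)
print(fused.shape)
```

Because the final weight lies in [0, 1], each fused pixel is a convex combination of the corresponding visible and NIR pixels, which is what keeps the result within the dynamic range of the inputs.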
Related papers
- NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset [53.79524776100983]
Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows the potential to address this issue.
Existing works still struggle with taking advantage of NIR information effectively for real-world image denoising.
We propose an efficient Selective Fusion Module (SFM), which can be plug-and-played into the advanced denoising networks.
arXiv Detail & Related papers (2024-04-12T14:54:26Z) - Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and
Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) with infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used for the adjustment of light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain the fused image to normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z) - Hypergraph-Guided Disentangled Spectrum Transformer Networks for
Near-Infrared Facial Expression Recognition [31.783671943393344]
We make the first attempt at deep NIR facial expression recognition and propose a novel method called the near-infrared facial expression transformer (NFER-Former).
NFER-Former disentangles the expression information and spectrum information from the input image, so that the expression features can be extracted without the interference of spectrum variation.
We have constructed a large NIR-VIS Facial Expression dataset that includes 360 subjects to better validate the efficiency of NFER-Former.
arXiv Detail & Related papers (2023-12-10T15:15:50Z) - A Multi-scale Information Integration Framework for Infrared and Visible
Image Fusion [50.84746752058516]
Infrared and visible image fusion aims at generating a fused image containing intensity and detail information of source images.
Existing methods mostly adopt a simple weight in the loss function to decide the information retention of each modality.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z) - Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous arts mainly focus on the low-light images captured in the visible spectrum using pixel-wise loss.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z) - Visible and infrared self-supervised fusion trained on a single example [1.1188842018827656]
Multispectral imaging is an important task in image processing and computer vision.
The problem of fusing visible (RGB) and near-infrared (NIR) images has become particularly timely.
The proposed approach fuses these two channels by training a convolutional neural network with self-supervised learning (SSL) on a single example.
Experiments demonstrate that the proposed approach achieves similar or better qualitative and quantitative multispectral fusion results.
arXiv Detail & Related papers (2023-07-09T05:25:46Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Visible and Near Infrared Image Fusion Based on Texture Information [4.718295968108302]
A novel visible and near-infrared fusion method based on texture information is proposed to enhance unstructured environmental images.
It addresses the problems of artifacts, information loss, and noise in traditional visible and near-infrared image fusion methods.
The experimental results demonstrate that the proposed algorithm can preserve the spectral characteristics and the unique information of visible and near-infrared images.
arXiv Detail & Related papers (2022-07-22T09:02:17Z) - Near-Infrared Depth-Independent Image Dehazing using Haar Wavelets [13.561695463316031]
We propose a fusion algorithm for haze removal that combines color information from an RGB image and edge information extracted from its corresponding NIR image using Haar wavelets.
The proposed algorithm is based on the key observation that NIR edge features are more prominent in the hazy regions of the image than the RGB edge features in those same regions.
arXiv Detail & Related papers (2022-03-26T14:07:31Z) - Generation of the NIR spectral Band for Satellite Images with
Convolutional Neural Networks [0.0]
Deep neural networks allow generating artificial spectral information, such as for the image colorization problem.
We study the generative adversarial network (GAN) approach in the task of the NIR band generation using just RGB channels of high-resolution satellite imagery.
arXiv Detail & Related papers (2021-06-13T15:14:57Z) - Cross-Spectral Periocular Recognition with Conditional Adversarial
Networks [59.17685450892182]
We propose Conditional Generative Adversarial Networks trained to convert periocular images between the visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER=1%, and GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU database.
arXiv Detail & Related papers (2020-08-26T15:02:04Z)
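The Haar-wavelet dehazing entry above (NIR edges are more prominent in hazy regions, so RGB supplies the low-frequency color content while NIR supplies the high-frequency edges) can be sketched as a toy example. Everything here is an assumption for illustration: a hand-rolled one-level 2-D Haar transform, luminance-only inputs, and a simple max-magnitude rule for transferring NIR detail coefficients; none of the names or rules come from the paper itself.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform of an even-sized array: LL, (LH, HL, HH)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return (a + b + c + d) / 2.0, ((a + b - c - d) / 2.0,
                                   (a - b + c - d) / 2.0,
                                   (a - b - c + d) / 2.0)

def ihaar2d(ll, bands):
    """Exact inverse of haar2d."""
    lh, hl, hh = bands
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x

def dehaze_fuse(rgb_lum, nir):
    """Keep the RGB low-frequency (approximation) content and adopt NIR
    high-frequency coefficients wherever their magnitude dominates."""
    ll, bands_v = haar2d(rgb_lum)
    _, bands_n = haar2d(nir)
    fused = tuple(np.where(np.abs(bn) > np.abs(bv), bn, bv)
                  for bv, bn in zip(bands_v, bands_n))
    return ihaar2d(ll, fused)

rng = np.random.default_rng(1)
rgb_lum = rng.random((32, 32))
nir = rng.random((32, 32))
out = dehaze_fuse(rgb_lum, nir)
print(out.shape)
```

When the two inputs are identical the detail transfer is a no-op and the round trip reconstructs the input exactly, which is a quick sanity check that the hand-rolled transform pair is consistent.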
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.