Visible and Near Infrared Image Fusion Based on Texture Information
- URL: http://arxiv.org/abs/2207.10953v1
- Date: Fri, 22 Jul 2022 09:02:17 GMT
- Title: Visible and Near Infrared Image Fusion Based on Texture Information
- Authors: Guanyu Zhang, Beichen Sun, Yuehan Qi, Yang Liu
- Abstract summary: A novel visible and near-infrared fusion method based on texture information is proposed to enhance unstructured environmental images.
It addresses the problems of artifacts, information loss, and noise in traditional visible and near-infrared image fusion methods.
The experimental results demonstrate that the proposed algorithm can preserve the spectral characteristics and the unique information of visible and near-infrared images.
- Score: 4.718295968108302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-sensor fusion is widely used in the environment perception system of
the autonomous vehicle. It solves the interference caused by environmental
changes and makes the whole driving system safer and more reliable. In this
paper, a novel visible and near-infrared fusion method based on texture
information is proposed to enhance unstructured environmental images. It
addresses the problems of artifacts, information loss, and noise in traditional
visible and near-infrared image fusion methods. Firstly, the structure
information of the visible image (RGB) and the near-infrared image (NIR) after
texture removal is obtained by relative total variation (RTV) calculation and
used as the base layer of the fused image; secondly, a Bayesian classification
model is established to calculate the noise weight, and the noise information
in the visible image is adaptively filtered by a joint bilateral filter;
finally, the fused image is acquired by color space conversion. The
experimental results demonstrate that the proposed algorithm preserves the
spectral characteristics and the unique information of the visible and
near-infrared images without artifacts or color distortion, and is robust
while preserving unique texture.
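The three-stage pipeline described in the abstract (texture-removed base layers, noise weighting, detail fusion) can be sketched roughly as follows. This is a schematic stand-in, not the authors' implementation: a box filter replaces RTV texture removal, and a simple detail-magnitude ratio replaces the Bayesian noise weight and joint bilateral filtering.

```python
import numpy as np

def box_smooth(img, k=5):
    # Crude stand-in for the paper's relative total variation (RTV)
    # texture removal: a box filter that keeps large-scale structure.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def fuse(vis_luma, nir, k=5):
    # Base layers: structure remaining after texture removal
    # (the paper obtains these via RTV).
    base_v = box_smooth(vis_luma, k)
    base_n = box_smooth(nir, k)
    # Detail layers: what texture removal discarded.
    det_v = vis_luma - base_v
    det_n = nir - base_n
    # Hypothetical noise weight: trust NIR detail more where it is
    # locally stronger (a crude proxy for the paper's Bayesian noise
    # classification and joint bilateral filtering).
    w = np.abs(det_n) / (np.abs(det_v) + np.abs(det_n) + 1e-8)
    fused = 0.5 * (base_v + base_n) + w * det_n + (1.0 - w) * det_v
    return np.clip(fused, 0.0, 1.0)
```

In the paper the fused luminance would then be converted back to a color image via color space conversion; that step is omitted here.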
Related papers
- NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset [53.79524776100983]
Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows the potential to address this issue.
Existing works still struggle with taking advantage of NIR information effectively for real-world image denoising.
We propose an efficient Selective Fusion Module (SFM), which can be plug-and-played into the advanced denoising networks.
arXiv Detail & Related papers (2024-04-12T14:54:26Z)
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) with infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used for the adjustment of light distribution.
Second, we utilize detection features extracted by a pre-trained backbone that guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain fusion image with normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z)
- Decomposition-based and Interference Perception for Infrared and Visible Image Fusion in Complex Scenes [4.919706769234434]
We propose a decomposition-based and interference perception image fusion method.
We classify the pixels of visible image from the degree of scattering of light transmission, based on which we then separate the detail and energy information of the image.
This refined decomposition facilitates the proposed model in identifying more interfering pixels that are in complex scenes.
arXiv Detail & Related papers (2024-02-03T09:27:33Z)
- A Multi-scale Information Integration Framework for Infrared and Visible Image Fusion [50.84746752058516]
Infrared and visible image fusion aims at generating a fused image containing intensity and detail information of source images.
Existing methods mostly adopt a simple weight in the loss function to decide the information retention of each modality.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z)
- Visible and NIR Image Fusion Algorithm Based on Information Complementarity [10.681833882330508]
Current visible and NIR fusion algorithms cannot take advantage of spectrum properties and also lack information complementarity.
This paper designs a complementary fusion model from the level of physical signals.
The proposed algorithm not only takes good advantage of the spectrum properties and the information complementarity, but also avoids unnatural color while maintaining naturalness.
arXiv Detail & Related papers (2023-09-19T11:07:24Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration [59.02821429555375]
We present a robust cross-modality generation-registration paradigm for unsupervised misaligned infrared and visible image fusion.
To better fuse the registered infrared images and visible images, we present a feature Interaction Fusion Module (IFM).
arXiv Detail & Related papers (2022-05-24T07:51:57Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space, either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- When Image Decomposition Meets Deep Learning: A Novel Infrared and Visible Image Fusion Method [27.507158159317417]
Infrared and visible image fusion is a hot topic in image processing and image enhancement.
We propose a novel dual-stream auto-encoder based fusion network.
arXiv Detail & Related papers (2020-09-02T19:32:28Z)
- Bayesian Fusion for Infrared and Visible Images [26.64101343489016]
In this paper, a novel Bayesian fusion model is established for infrared and visible images.
We aim at making the fused image satisfy the human visual system.
Compared with the previous methods, the novel model can generate better fused images with high-light targets and rich texture details.
arXiv Detail & Related papers (2020-05-12T14:57:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.