Multispectral image fusion by super pixel statistics
- URL: http://arxiv.org/abs/2112.11329v1
- Date: Tue, 21 Dec 2021 16:19:10 GMT
- Title: Multispectral image fusion by super pixel statistics
- Authors: Nati Ofir
- Abstract summary: I address the task of visible color RGB to Near-Infrared (NIR) fusion.
The RGB image captures the color of the scene while the NIR captures details and sees beyond haze and clouds.
The proposed method is designed to produce a fusion that combines the advantages of both spectra.
- Score: 1.4685355149711299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multispectral image fusion is a fundamental problem of remote sensing and
image processing. This problem is addressed by both classic and deep learning
approaches. This paper focuses on the classic solutions and introduces a novel
approach to this family. The proposed method carries out multispectral image
fusion based on the content of the fused images. It relies on an analysis of the
level of information in segmented superpixels of the fused inputs.
Specifically, I address the task of visible color RGB to Near-Infrared (NIR)
fusion. The RGB image captures the color of the scene while the NIR captures
details and sees beyond haze and clouds. Since each channel senses different
information about the scene, their fusion is challenging and interesting. The
proposed method is designed to produce a fusion that combines the advantages of
both spectra. The experiments in this manuscript show that the proposed method
is visually informative compared with other classic fusion methods, and that it
runs fast on embedded devices without the need for heavy computational resources.
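As a rough illustration of the superpixel-statistics idea, the sketch below segments the RGB frame into superpixels and blends the NIR signal into the luminance channel according to a per-superpixel detail statistic. It is a minimal sketch, not the paper's exact procedure: the SLIC segmentation backend, the standard-deviation statistic used as the "level of information", the blending rule, and the helper name fuse_rgb_nir are all assumptions chosen for clarity.

```python
import numpy as np
from skimage.color import rgb2yuv, yuv2rgb
from skimage.segmentation import slic

def fuse_rgb_nir(rgb: np.ndarray, nir: np.ndarray, n_segments: int = 400) -> np.ndarray:
    """Fuse an RGB image (HxWx3) with a co-registered NIR image (HxW), both float in [0, 1]."""
    yuv = rgb2yuv(rgb)          # keep chroma from RGB, blend only the luminance
    lum = yuv[..., 0]

    # Segment the RGB image into superpixels; statistics are pooled per segment.
    labels = slic(rgb, n_segments=n_segments, compactness=10)

    weight = np.zeros_like(lum)
    eps = 1e-6
    for seg in np.unique(labels):
        mask = labels == seg
        # Proxy for the "level of information": intensity spread inside the superpixel.
        rgb_info = lum[mask].std()
        nir_info = nir[mask].std()
        # Give the NIR channel more weight where it carries more local detail.
        weight[mask] = nir_info / (rgb_info + nir_info + eps)

    yuv[..., 0] = weight * nir + (1.0 - weight) * lum
    return np.clip(yuv2rgb(yuv), 0.0, 1.0)
```

Because the statistics are simple per-segment reductions rather than learned features, a scheme of this kind stays lightweight, in line with the paper's emphasis on running on embedded devices without heavy computation.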
Related papers
- Visible and infrared self-supervised fusion trained on a single example [1.1188842018827656]
Multispectral imaging is an important task in image processing and computer vision.
The problem of visible (RGB) to Near-Infrared (NIR) image fusion has become particularly timely.
The proposed approach fuses these two channels by training a Convolutional Neural Network with Self-Supervised Learning (SSL) on a single example.
Experiments demonstrate that the proposed approach achieves similar or better qualitative and quantitative multispectral fusion results.
arXiv Detail & Related papers (2023-07-09T05:25:46Z) - DePF: A Novel Fusion Approach based on Decomposition Pooling for Infrared and Visible Images [7.11574718614606]
A novel fusion network based on decomposition pooling (de-pooling), termed DePF, is proposed.
A de-pooling based encoder is designed to extract multi-scale image and detail features of the source images at the same time.
The experimental results demonstrate that the proposed method exhibits superior fusion performance over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-27T05:47:14Z) - LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm that produces the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z) - Dif-Fusion: Towards High Color Fidelity in Infrared and Visible Image Fusion with Diffusion Models [54.952979335638204]
We propose a novel method with diffusion models, termed Dif-Fusion, to generate the distribution of the multi-channel input data.
Our method is more effective than other state-of-the-art image fusion methods, especially in color fidelity.
arXiv Detail & Related papers (2023-01-19T13:37:19Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space either by iterative optimization or by deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z) - Near-Infrared Depth-Independent Image Dehazing using Haar Wavelets [13.561695463316031]
We propose a fusion algorithm for haze removal that combines color information from an RGB image and edge information extracted from its corresponding NIR image using Haar wavelets.
The proposed algorithm is based on the key observation that NIR edge features are more prominent in the hazy regions of the image than the RGB edge features in those same regions; a rough wavelet-domain sketch in this spirit follows the list below.
arXiv Detail & Related papers (2022-03-26T14:07:31Z) - Image Fusion Transformer [75.71025138448287]
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information.
In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion.
We propose a novel Image Fusion Transformer (IFT) where we develop a transformer-based multi-scale fusion strategy.
arXiv Detail & Related papers (2021-07-19T16:42:49Z) - A Deep Decomposition Network for Image Processing: A Case Study for Visible and Infrared Image Fusion [38.17268441062239]
We propose a new image decomposition method based on a convolutional neural network.
We input an infrared image and a visible light image and decompose each into three high-frequency feature images and a low-frequency feature image.
The two sets of feature images are fused using a specific fusion strategy to obtain fusion feature images.
arXiv Detail & Related papers (2021-02-21T06:34:33Z) - Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z) - NestFuse: An Infrared and Visible Image Fusion Architecture based on Nest Connection and Spatial/Channel Attention Models [12.16870022547833]
We propose a novel method for infrared and visible image fusion.
We develop a nest connection-based network and spatial/channel attention models.
Experiments are performed on publicly available datasets.
arXiv Detail & Related papers (2020-07-01T08:46:23Z)
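To make the wavelet-fusion idea referenced in the Haar-wavelet dehazing entry above concrete, here is a toy sketch that keeps the approximation coefficients from the RGB luminance and takes each detail coefficient from whichever channel responds more strongly, which in hazy regions is typically the NIR image. The single-level decomposition, the max-magnitude selection rule, and the helper name haar_fuse are illustrative assumptions, not the published algorithm.

```python
import numpy as np
import pywt
from skimage.color import rgb2yuv, yuv2rgb

def haar_fuse(rgb: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Fuse an RGB image (HxWx3) with a co-registered NIR image (HxW), both float in [0, 1]."""
    yuv = rgb2yuv(rgb)
    lum = yuv[..., 0]

    # One-level Haar decomposition of both luminance channels.
    cA_rgb, details_rgb = pywt.dwt2(lum, "haar")
    _, details_nir = pywt.dwt2(nir, "haar")

    # Keep the RGB approximation (overall brightness and color structure), but take
    # each detail coefficient from whichever channel has the stronger edge response.
    fused_details = tuple(
        np.where(np.abs(d_nir) > np.abs(d_rgb), d_nir, d_rgb)
        for d_rgb, d_nir in zip(details_rgb, details_nir)
    )
    fused_lum = pywt.idwt2((cA_rgb, fused_details), "haar")

    # idwt2 can pad odd-sized inputs by one pixel; crop back to the original shape.
    yuv[..., 0] = fused_lum[: lum.shape[0], : lum.shape[1]]
    return np.clip(yuv2rgb(yuv), 0.0, 1.0)
```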