Implicit Neural Feature Fusion Function for Multispectral and
Hyperspectral Image Fusion
- URL: http://arxiv.org/abs/2307.07288v2
- Date: Sun, 29 Oct 2023 14:48:41 GMT
- Authors: ShangQi Deng, RuoCheng Wu, Liang-Jian Deng, Ran Ran, Gemine Vivone
- Abstract summary: Multispectral and Hyperspectral Image Fusion (MHIF) is a practical task that aims to fuse a high-resolution multispectral image (HR-MSI) and a low-resolution hyperspectral image (LR-HSI) of the same scene to obtain a high-resolution hyperspectral image (HR-HSI).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multispectral and Hyperspectral Image Fusion (MHIF) is a practical task that
aims to fuse a high-resolution multispectral image (HR-MSI) and a
low-resolution hyperspectral image (LR-HSI) of the same scene to obtain a
high-resolution hyperspectral image (HR-HSI). Benefiting from powerful
inductive bias capability, CNN-based methods have achieved great success in the
MHIF task. However, they offer limited interpretability and require stacking
convolutional structures to enhance performance. Recently, Implicit Neural
Representation (INR) has achieved good performance and interpretability in 2D
tasks due to its ability to locally interpolate samples and utilize multimodal
content such as pixels and coordinates. Although INR-based approaches show
promise, they require extra construction of high-frequency information
(\emph{e.g.,} positional encoding). In this paper, inspired by previous work on
the MHIF task, we observe that HR-MSI can serve as a high-frequency detail
auxiliary input, leading us to propose a novel INR-based hyperspectral fusion
function named Implicit Neural Feature Fusion Function (INF). This carefully
designed structure solves the MHIF task while addressing the deficiencies of
INR-based approaches. Specifically, our INF designs a Dual High-Frequency Fusion (DHFF)
structure that obtains high-frequency information twice from HR-MSI and LR-HSI,
then subtly fuses them with coordinate information. Moreover, the proposed INF
incorporates a parameter-free method named INR with cosine similarity (INR-CS)
that generates local weights from the cosine similarities between feature vectors.
Based on INF, we construct an Implicit Neural Fusion Network (INFN) that
achieves state-of-the-art performance on two public MHIF datasets,
\emph{i.e.,} CAVE and Harvard. The code will soon be made available on GitHub.
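The abstract does not give the INR-CS formula, but its core idea — parameter-free local interpolation weights derived from cosine similarities between feature vectors — can be sketched as follows. The function names, the softmax normalization, and the interpolation step are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def cosine_similarity_weights(query_feat, neighbor_feats, eps=1e-8):
    """Parameter-free local weights from cosine similarity (INR-CS idea).

    query_feat:     (C,)   feature vector at the query coordinate
    neighbor_feats: (K, C) feature vectors of the K nearest latent codes
    Returns a (K,) vector of non-negative weights that sums to 1.
    """
    q = query_feat / (np.linalg.norm(query_feat) + eps)
    n = neighbor_feats / (np.linalg.norm(neighbor_feats, axis=1, keepdims=True) + eps)
    sims = n @ q                 # cosine similarities, each in [-1, 1]
    w = np.exp(sims)             # softmax maps them to positive weights
    return w / w.sum()

def interpolate(neighbor_vals, weights):
    """Blend the neighbors' predictions with the local weights."""
    return (weights[:, None] * neighbor_vals).sum(axis=0)
```

Because the weights come from similarities alone, no learnable interpolation parameters are introduced — neighbors whose features resemble the query feature simply contribute more.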
Related papers
- Single-Layer Learnable Activation for Implicit Neural Representation (SL$^{2}$A-INR) [6.572456394600755]
Implicit Neural Representation (INR), which leverages a neural network to map coordinate inputs to corresponding attributes, has driven significant advances in vision-related domains.
We propose SL$^2$A-INR with a single-layer learnable activation function, improving on traditional ReLU-based INRs.
Our method performs well across diverse tasks, including image representation, 3D shape reconstruction, single image super-resolution, CT reconstruction, and novel view synthesis.
arXiv Detail & Related papers (2024-09-17T02:02:15Z) - CSAKD: Knowledge Distillation with Cross Self-Attention for Hyperspectral and Multispectral Image Fusion [9.3350274016294]
This paper introduces a novel knowledge distillation (KD) framework for HR-MSI/LR-HSI fusion to achieve SR of LR-HSI.
To fully exploit the spatial and spectral feature representations of LR-HSI and HR-MSI, we propose a novel Cross Self-Attention (CSA) fusion module.
Our experimental results demonstrate that the student model achieves comparable or superior LR-HSI SR performance.
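The CSAKD abstract does not detail the Cross Self-Attention (CSA) fusion module; as a reference point only, generic cross-attention between the two modalities' features looks like the sketch below. The token shapes and the choice of LR-HSI features as queries are assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(hsi_feat, msi_feat):
    """Generic cross-attention between two modalities' feature tokens.

    hsi_feat: (N, C) tokens from LR-HSI features (used as queries here)
    msi_feat: (M, C) tokens from HR-MSI features (keys and values)
    Returns (N, C) HSI tokens enriched with MSI information.
    """
    d = hsi_feat.shape[-1]
    attn = softmax(hsi_feat @ msi_feat.T / np.sqrt(d), axis=-1)  # (N, M)
    return attn @ msi_feat                                       # (N, C)
```

A real module would add learned query/key/value projections and residual connections; this sketch only shows how spatial detail from HR-MSI can be routed to spectral tokens.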
arXiv Detail & Related papers (2024-06-28T05:25:57Z) - Fourier-enhanced Implicit Neural Fusion Network for Multispectral and Hyperspectral Image Fusion [12.935592400092712]
Implicit neural representations (INR) have made significant strides in various vision-related domains.
However, INR is prone to losing high-frequency information and lacks global perceptual capabilities.
This paper introduces a Fourier-enhanced Implicit Neural Fusion Network (FeINFN) specifically designed for the MHIF task.
arXiv Detail & Related papers (2024-04-23T16:14:20Z) - Hybrid Convolutional and Attention Network for Hyperspectral Image Denoising [54.110544509099526]
Hyperspectral image (HSI) denoising is critical for the effective analysis and interpretation of hyperspectral data.
We propose a hybrid convolution and attention network (HCANet) to enhance HSI denoising.
Experimental results on mainstream HSI datasets demonstrate the rationality and effectiveness of the proposed HCANet.
arXiv Detail & Related papers (2024-03-15T07:18:43Z) - ADASR: An Adversarial Auto-Augmentation Framework for Hyperspectral and
Multispectral Data Fusion [54.668445421149364]
Deep learning-based hyperspectral image (HSI) super-resolution aims to generate a high-spatial-resolution HSI (HR-HSI) by fusing an HSI and a multispectral image (MSI) with deep neural networks (DNNs).
In this letter, we propose ADASR, a novel adversarial automatic data augmentation framework that automatically optimizes and augments HSI-MSI sample pairs to enrich data diversity for HSI-MSI fusion.
arXiv Detail & Related papers (2023-10-11T07:30:37Z) - Mutual-Guided Dynamic Network for Image Fusion [51.615598671899335]
We propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs.
Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks.
arXiv Detail & Related papers (2023-08-24T03:50:37Z) - CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for
Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z) - Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB), and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with noise.
arXiv Detail & Related papers (2021-03-25T07:10:46Z) - Coupled Convolutional Neural Network with Adaptive Response Function
Learning for Unsupervised Hyperspectral Super-Resolution [28.798775822331045]
Hyperspectral super-resolution refers to fusing HSI and MSI to generate an image with both high spatial and high spectral resolutions.
In this work, an unsupervised deep learning-based fusion method - HyCoNet - that can solve the problems in HSI-MSI fusion without the prior PSF and SRF information is proposed.
arXiv Detail & Related papers (2020-07-28T06:17:02Z) - Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performance on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
IEEB extracts hierarchical low-resolution (LR) features and aggregates the obtained features step-by-step to increase the memory ability of the shallow layers on deep layers for SISR.
RB converts low-frequency features into high-frequency features by fusing global
arXiv Detail & Related papers (2020-07-08T18:03:40Z)