A Triple-Double Convolutional Neural Network for Panchromatic Sharpening
- URL: http://arxiv.org/abs/2112.02237v1
- Date: Sat, 4 Dec 2021 04:22:11 GMT
- Title: A Triple-Double Convolutional Neural Network for Panchromatic Sharpening
- Authors: Tian-Jing Zhang, Liang-Jian Deng, Ting-Zhu Huang, Jocelyn Chanussot,
Gemine Vivone
- Abstract summary: Pansharpening refers to the fusion of a panchromatic image with a high spatial resolution and a multispectral image with a low spatial resolution.
In this paper, we propose a novel deep neural network architecture with a level-domain-based loss function for pansharpening.
- Score: 31.392337484731783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pansharpening refers to the fusion of a panchromatic image with a high
spatial resolution and a multispectral image with a low spatial resolution,
aiming to obtain a high spatial resolution multispectral image. In this paper,
we propose a novel deep neural network architecture with a level-domain-based
loss function for pansharpening by taking into account the following
double-type structures, \emph{i.e.,} double-level, double-branch, and
double-direction, called the triple-double network (TDNet). By using the
structure of TDNet, the spatial details of the panchromatic image can be fully
exploited and progressively injected into the low spatial resolution
multispectral image, thus yielding the high spatial resolution output. The
specific network design is motivated by the physical formula of the traditional
multi-resolution analysis (MRA) methods. Hence, an effective MRA fusion module
is also integrated into the TDNet. In addition, we adopt several ResNet
blocks and multi-scale convolution kernels to deepen and widen the network,
effectively enhancing the feature extraction and the robustness of the
proposed TDNet. Extensive experiments on reduced- and full-resolution datasets acquired
by WorldView-3, QuickBird, and GaoFen-2 sensors demonstrate the superiority of
the proposed TDNet compared with some recent state-of-the-art pansharpening
approaches. An ablation study has also corroborated the effectiveness of the
proposed approach.
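The MRA formula that motivates the network design is the classical detail-injection scheme: the sharpened product equals the upsampled MS image plus per-band gains applied to the spatial details extracted from the PAN image. Below is a minimal sketch of that classical scheme, assuming a simple box low-pass filter in place of a sensor-matched one; the names are illustrative and this is not the authors' code.
```python
import torch
import torch.nn.functional as F

def mra_fusion(ms: torch.Tensor, pan: torch.Tensor, gains: torch.Tensor) -> torch.Tensor:
    """ms: (B, C, h, w) low-res multispectral; pan: (B, 1, H, W) panchromatic;
    gains: (C,) per-band injection gains. Returns the (B, C, H, W) sharpened image."""
    C = ms.shape[1]
    H, W = pan.shape[-2:]
    # Upsample the MS image to the PAN grid.
    ms_up = F.interpolate(ms, size=(H, W), mode="bicubic", align_corners=False)
    # Low-pass the PAN image; a 5x5 box filter stands in for a sensor-matched
    # (MTF) filter here.
    pan_low = F.avg_pool2d(pan, kernel_size=5, stride=1, padding=2)
    # Extract the spatial details and inject them, scaled per band.
    details = pan - pan_low                          # (B, 1, H, W)
    return ms_up + gains.view(1, C, 1, 1) * details  # broadcast over bands
```
TDNet's MRA fusion module can be read as a learned counterpart of this fixed formula, with the low-pass filter and gains replaced by convolutional features.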
Related papers
- Double-Shot 3D Shape Measurement with a Dual-Branch Network [14.749887303860717]
We propose PDCNet, a dual-branch network pairing a Convolutional Neural Network (CNN) branch with a Transformer branch to process different structured light (SL) modalities; a generic version of this pairing is sketched after this entry.
Within PDCNet, a Transformer branch is used to capture global perception in the fringe images, while a CNN branch is designed to collect local details in the speckle images.
We show that our method can reduce fringe order ambiguity while producing high-accuracy results on a self-made dataset.
arXiv Detail & Related papers (2024-07-19T10:49:26Z)
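The dual-branch pairing described above can be sketched generically: a Transformer branch flattens its input into tokens for global context, a CNN branch keeps local spatial detail, and a 1x1 convolution merges them. This is a hedged illustration with assumed single-channel inputs and layer sizes, not the PDCNet architecture itself.
```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # CNN branch: local detail (the entry pairs it with speckle images).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer branch: global perception (paired with fringe images).
        self.embed = nn.Conv2d(1, dim, 3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Conv2d(2 * dim, 1, 1)  # merge the two branches

    def forward(self, fringe: torch.Tensor, speckle: torch.Tensor) -> torch.Tensor:
        b, _, h, w = fringe.shape
        local = self.cnn(speckle)                               # (B, dim, H, W)
        tokens = self.embed(fringe).flatten(2).transpose(1, 2)  # (B, H*W, dim)
        globl = self.transformer(tokens).transpose(1, 2).reshape(b, -1, h, w)
        return self.fuse(torch.cat([local, globl], dim=1))
```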
- DSR-Diff: Depth Map Super-Resolution with Diffusion Model [38.68563026759223]
We present a novel CDSR paradigm that utilizes a diffusion model within the latent space to generate guidance for depth map super-resolution.
Our proposed method has shown superior performance in extensive experiments when compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-11-16T14:18:10Z)
- PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and a joint compensation loss function is designed for this specific architecture so that the three GANs can be trained simultaneously; a sketch of such a joint objective follows this entry.
arXiv Detail & Related papers (2022-07-29T03:09:21Z)
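The joint-training idea can be expressed as a single objective summing an adversarial term and a compensation term per GAN. The exact terms and weights in PC-GANs are not given in this summary, so the following is an assumption-laden sketch of the general construction.
```python
import torch
import torch.nn.functional as F

def joint_compensation_loss(fakes, targets, disc_logits, lam: float = 100.0):
    """fakes/targets: lists of three tensors, one per GAN stage;
    disc_logits: each discriminator's output on its stage's fake."""
    loss = fakes[0].new_zeros(())
    for fake, target, logits in zip(fakes, targets, disc_logits):
        # Non-saturating adversarial term for this stage's generator.
        adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
        # Compensation term pulling the stage output toward its reference.
        comp = F.l1_loss(fake, target)
        loss = loss + adv + lam * comp
    return loss  # one backward pass trains all three generators jointly
```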
- D3C2-Net: Dual-Domain Deep Convolutional Coding Network for Compressive Sensing [9.014593915305069]
Deep unfolding networks (DUNs) have achieved impressive success in compressive sensing (CS).
By unfolding the proposed framework into deep neural networks, we further design a novel Dual-Domain Deep Convolutional Coding Network (D3C2-Net); a generic unfolded phase is sketched after this entry.
Experiments on natural and MR images demonstrate that our D3C2-Net achieves higher performance and better accuracy-complexity trade-offs than other state-of-the-art methods.
arXiv Detail & Related papers (2022-07-27T14:52:32Z)
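Deep unfolding maps one iteration of an optimization solver, here a proximal-gradient step x ← prox(x − ρAᵀ(Ax − y)), onto one network phase and stacks several phases end to end. The sketch below shows a generic single phase with a learned step size and a small CNN as the proximal operator; it is not the D3C2-Net architecture, and the sensing matrix A is a placeholder.
```python
import torch
import torch.nn as nn

class UnfoldedPhase(nn.Module):
    """One phase of a proximal-gradient unfolding for y = Ax."""
    def __init__(self, A: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)                 # (M, N) sensing matrix
        self.rho = nn.Parameter(torch.tensor(0.1))   # learned step size
        self.prox = nn.Sequential(                   # learned proximal mapping
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor, h: int, w: int) -> torch.Tensor:
        r = x @ self.A.t() - y                    # (B, M) measurement residual
        z = x - self.rho * (r @ self.A)           # gradient step on ||Ax - y||^2
        img = z.view(-1, 1, h, w)                 # back to image space
        return (img + self.prox(img)).flatten(1)  # residual prox, re-flatten

# Stacking K such phases (e.g., in an nn.ModuleList) yields the unfolded network.
```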
- Fusformer: A Transformer-based Fusion Approach for Hyperspectral Image Super-resolution [9.022005574190182]
We design a network based on the transformer for fusing the low-resolution hyperspectral images and high-resolution multispectral images.
Considering that the LR-HSIs hold the main spectral structure, the network focuses on spatial detail estimation.
Various experiments and quality indexes show our approach's superiority compared with other state-of-the-art methods.
arXiv Detail & Related papers (2021-09-05T14:00:34Z)
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction [71.83308989022635]
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) the local computation of depth maps with a deep MVS technique, and 2) the fusion of the depth maps and image features to build a single TSDF volume; the classical TSDF update that this stage replicates is sketched after this entry.
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
arXiv Detail & Related papers (2021-08-19T11:33:58Z)
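Classical TSDF fusion keeps, for every voxel, a running weighted average of truncated signed distances to the observed surface, updated once per depth map. A compact sketch of that textbook update follows (projection of voxels into the depth map is assumed to have been done already; this is the classical procedure, not VolumeFusion's learned variant).
```python
import numpy as np

def tsdf_integrate(tsdf, weight, voxel_depth, observed_depth, trunc=0.05):
    """tsdf, weight: flat arrays over voxels; voxel_depth: each voxel's depth
    in the camera frame; observed_depth: depth-map value at each voxel's pixel."""
    sdf = observed_depth - voxel_depth             # signed distance along the ray
    valid = (observed_depth > 0) & (sdf > -trunc)  # skip holes and occluded voxels
    d = np.clip(sdf / trunc, -1.0, 1.0)            # truncate and normalize
    w_new = weight[valid] + 1.0
    tsdf[valid] = (tsdf[valid] * weight[valid] + d[valid]) / w_new
    weight[valid] = w_new                          # running weighted average
    return tsdf, weight
```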
- High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene; a zero-centric residual step is sketched after this entry.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
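One way to read "zero-centric" is that the predicted residual is explicitly mean-removed so it carries only high-frequency spatial detail. The minimal sketch below follows that reading; the normalization axis is an assumption, not necessarily the authors' exact formulation.
```python
import torch

def zero_centric_output(lr_up: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    """lr_up: upsampled LR hyperspectral image, (B, C, H, W);
    residual: raw network prediction of the same shape."""
    # Remove the per-band spatial mean so the residual is zero-centered and
    # carries only high-frequency detail.
    residual = residual - residual.mean(dim=(-2, -1), keepdim=True)
    return lr_up + residual
```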
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing feature extraction into two task-independent streams, the dual-branch model facilitates training; a gated blend of two such streams is sketched after this entry.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
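A standard way to merge two feature streams like these is a learned gate that blends them pixel-wise. The sketch below assumes that common construction with illustrative layer sizes; the paper's exact fusion design may differ.
```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # The gate sees both streams and outputs per-pixel blend weights.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * dim, dim, 3, padding=1), nn.Sigmoid()
        )

    def forward(self, base: torch.Tensor, recovered: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([base, recovered], dim=1))  # values in [0, 1]
        return g * base + (1.0 - g) * recovered             # convex per-pixel blend
```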