Multi-modal land cover mapping of remote sensing images using pyramid
attention and gated fusion networks
- URL: http://arxiv.org/abs/2111.03845v1
- Date: Sat, 6 Nov 2021 10:01:01 GMT
- Title: Multi-modal land cover mapping of remote sensing images using pyramid
attention and gated fusion networks
- Authors: Qinghui Liu, Michael Kampffmeyer, Robert Jenssen and Arnt-Børre Salberg
- Abstract summary: We propose a new multi-modality network for land cover mapping of multi-modal remote sensing data based on a novel pyramid attention fusion (PAF) module and a gated fusion unit (GFU).
The PAF module is designed to efficiently obtain rich fine-grained contextual representations from each modality with a built-in cross-level and cross-view attention fusion mechanism.
The GFU module utilizes a novel gating mechanism for early merging of features, thereby diminishing hidden redundancies and noise.
- Score: 20.66034058363032
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-modality data is becoming readily available in remote sensing (RS) and
can provide complementary information about the Earth's surface. Effective
fusion of multi-modal information is thus important for various applications in
RS, but also very challenging due to large domain differences, noise, and
redundancies. There is a lack of effective and scalable fusion techniques for
bridging multiple modality encoders and fully exploiting complementary
information. To this end, we propose a new multi-modality network (MultiModNet)
for land cover mapping of multi-modal remote sensing data based on a novel
pyramid attention fusion (PAF) module and a gated fusion unit (GFU). The PAF
module is designed to efficiently obtain rich fine-grained contextual
representations from each modality with a built-in cross-level and cross-view
attention fusion mechanism, and the GFU module utilizes a novel gating
mechanism for early merging of features, thereby diminishing hidden
redundancies and noise. This enables supplementary modalities to effectively
extract the most valuable and complementary information for late feature
fusion. Extensive experiments on two representative RS benchmark datasets
demonstrate the effectiveness, robustness, and superiority of the MultiModNet
for multi-modal land cover classification.
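The gating idea behind the GFU can be sketched in a few lines of PyTorch. The snippet below is only an illustrative reading of the abstract, not the authors' implementation: the layer choices, channel widths, and the residual merge are assumptions made for the example.

```python
import torch
import torch.nn as nn

class GatedFusionUnit(nn.Module):
    """Illustrative gated fusion (hypothetical layers, not the paper's code):
    a sigmoid gate derived from both streams decides how much of the
    supplementary modality is merged into the primary feature map."""

    def __init__(self, primary_channels: int, supp_channels: int):
        super().__init__()
        # Project the supplementary features to the primary channel width.
        self.project = nn.Conv2d(supp_channels, primary_channels, kernel_size=1)
        # Predict a per-pixel, per-channel gate from the concatenated streams.
        self.gate = nn.Sequential(
            nn.Conv2d(primary_channels * 2, primary_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, primary: torch.Tensor, supplementary: torch.Tensor) -> torch.Tensor:
        supp = self.project(supplementary)
        g = self.gate(torch.cat([primary, supp], dim=1))
        # The gate suppresses redundant or noisy supplementary responses
        # before they are added to the primary stream.
        return primary + g * supp

if __name__ == "__main__":
    # Toy shapes: an optical stream (64 channels) fused with a DSM/SAR stream (32 channels).
    gfu = GatedFusionUnit(primary_channels=64, supp_channels=32)
    optical = torch.randn(2, 64, 128, 128)
    auxiliary = torch.randn(2, 32, 128, 128)
    print(gfu(optical, auxiliary).shape)  # torch.Size([2, 64, 128, 128])
```

In this reading, the gate lets the supplementary modality contribute only where it carries complementary information, which matches the abstract's claim that the GFU diminishes hidden redundancies and noise ahead of late feature fusion.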
Related papers
- A Semantic-Aware and Multi-Guided Network for Infrared-Visible Image Fusion [41.34335755315773]
Multi-modality image fusion aims at fusing specific-modality and shared-modality information from two source images.
We propose a three-branch encoder-decoder architecture along with corresponding fusion layers as the fusion strategy.
Our method has obtained competitive results compared with state-of-the-art methods in visible/infrared image fusion and medical image fusion tasks.
arXiv Detail & Related papers (2024-06-11T09:32:40Z) - LMFNet: An Efficient Multimodal Fusion Approach for Semantic Segmentation in High-Resolution Remote Sensing [25.016421338677816]
Current methods often process only two types of data, missing out on the rich information that additional modalities can provide.
We propose a novel Lightweight Multimodal data Fusion Network (LMFNet).
LMFNet accommodates various data types simultaneously, including RGB, NirRG, and DSM, through a weight-sharing, multi-branch vision transformer.
arXiv Detail & Related papers (2024-04-21T13:29:42Z) - Fusion-Mamba for Cross-modality Object Detection [63.56296480951342]
Fusing complementary information from different modalities effectively improves object detection performance.
We design a Fusion-Mamba block (FMB) to map cross-modal features into a hidden state space for interaction.
Our proposed approach outperforms state-of-the-art methods in mAP by 5.9% on the M3FD dataset and by 4.9% on the FLIR-Aligned dataset.
arXiv Detail & Related papers (2024-04-14T05:28:46Z) - Multimodal Informative ViT: Information Aggregation and Distribution for
Hyperspectral and LiDAR Classification [25.254816993934746]
Multimodal Informative ViT (MIViT) is a system with an innovative information aggregate-distributing mechanism.
MIViT reduces redundancy in the empirical distribution of each modality's separate and fused features.
Our results show that MIViT's bidirectional aggregate-distributing mechanism is highly effective.
arXiv Detail & Related papers (2024-01-06T09:53:33Z) - HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness [2.341385717236931]
We propose a novel Hierarchical Depth Awareness network (HiDAnet) for RGB-D saliency detection.
Our motivation comes from the observation that the multi-granularity properties of geometric priors correlate well with the neural network hierarchies.
Our HiDAnet performs favorably over the state-of-the-art methods by large margins.
arXiv Detail & Related papers (2023-01-18T10:00:59Z) - Transformer-based Network for RGB-D Saliency Detection [82.6665619584628]
Key to RGB-D saliency detection is to fully mine and fuse information at multiple scales across the two modalities.
We show that the transformer is a uniform operation with great efficacy in both feature fusion and feature enhancement.
Our proposed network performs favorably against state-of-the-art RGB-D saliency detection methods.
arXiv Detail & Related papers (2021-12-01T15:53:58Z) - Specificity-preserving RGB-D Saliency Detection [103.3722116992476]
We propose a specificity-preserving network (SP-Net) for RGB-D saliency detection.
Two modality-specific networks and a shared learning network are adopted to generate individual and shared saliency maps.
Experiments on six benchmark datasets demonstrate that our SP-Net outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2021-08-18T14:14:22Z) - Learning Deep Multimodal Feature Representation with Asymmetric
Multi-layer Fusion [63.72912507445662]
We propose a compact and effective framework to fuse multimodal features at multiple layers in a single network.
We verify that multimodal features can be learnt within a shared single network by merely maintaining modality-specific batch normalization layers in the encoder.
Secondly, we propose a bidirectional multi-layer fusion scheme, where multimodal features can be exploited progressively.
arXiv Detail & Related papers (2021-08-11T03:42:13Z) - Accelerated Multi-Modal MR Imaging with Transformers [92.18406564785329]
We propose a multi-modal transformer (MTrans) for accelerated MR imaging.
By restructuring the transformer architecture, our MTrans gains a powerful ability to capture deep multi-modal information.
Our framework provides two appealing benefits: (i) MTrans is the first attempt at using improved transformers for multi-modal MR imaging, affording more global information compared with CNN-based methods.
arXiv Detail & Related papers (2021-06-27T15:01:30Z) - MSAF: Multimodal Split Attention Fusion [6.460517449962825]
We propose a novel multimodal fusion module that learns to emphasize more contributive features across all modalities.
Our approach achieves competitive results in each task and outperforms other application-specific networks and multimodal fusion benchmarks.
arXiv Detail & Related papers (2020-12-13T22:42:41Z) - Efficient Human Pose Estimation by Learning Deeply Aggregated
Representations [67.24496300046255]
We propose an efficient human pose estimation network (DANet) by learning deeply aggregated representations.
Our networks could achieve comparable or even better accuracy with much smaller model complexity.
arXiv Detail & Related papers (2020-12-13T10:58:07Z)