Fusion of Single and Integral Multispectral Aerial Images
- URL: http://arxiv.org/abs/2311.17515v5
- Date: Wed, 14 Feb 2024 07:52:32 GMT
- Title: Fusion of Single and Integral Multispectral Aerial Images
- Authors: Mohamed Youssef, Oliver Bimber
- Abstract summary: An adequate fusion of the most significant salient information from multiple input channels is essential for many aerial imaging tasks.
We present the first hybrid architecture for fusing the most significant features from conventional aerial images with those from integral aerial images.
We demonstrate examples for search and rescue, wildfire detection, and wildlife observation.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: An adequate fusion of the most significant salient information from multiple
input channels is essential for many aerial imaging tasks. While multispectral
recordings reveal features in various spectral ranges, synthetic aperture
sensing makes occluded features visible. We present the first hybrid (model-
and learning-based) architecture for fusing the most significant features from
conventional aerial images with those from integral aerial images, which are
produced by synthetic aperture sensing to remove occlusion. It combines
the environment's spatial references with features of unoccluded targets that
would normally be hidden by dense vegetation. Our method outperforms
state-of-the-art two-channel and multi-channel fusion approaches visually and
quantitatively in common metrics, such as mutual information, visual
information fidelity, and peak signal-to-noise ratio. The proposed model does
not require manually tuned parameters, can be extended to an arbitrary number
and arbitrary combinations of spectral channels, and is reconfigurable for
addressing different use cases. We demonstrate examples for search and rescue,
wildfire detection, and wildlife observation.
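The quantitative metrics named in the abstract (mutual information, peak signal-to-noise ratio) are standard fusion-quality measures. A minimal sketch of how they could be computed is below; this is an illustration using common definitions (histogram-based MI in bits, PSNR over 8-bit intensities), not the paper's own evaluation code, and the function names are assumptions.

```python
import numpy as np

def psnr(reference, fused, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a fused image."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def mutual_information(a, b, bins=64):
    """Mutual information (in bits) estimated from the joint histogram
    of two images' intensities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(psnr(img, noisy))                # finite, roughly 30-40 dB for sigma=5
print(mutual_information(img, img))    # equals the image's entropy estimate
```

In channel-fusion benchmarks, MI is typically computed between each input channel and the fused result (higher means more source information preserved), while PSNR requires a reference image.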
Related papers
- A Hybrid Transformer-Mamba Network for Single Image Deraining [70.64069487982916]
Existing deraining Transformers employ self-attention mechanisms with fixed-range windows or along channel dimensions.
We introduce a novel dual-branch hybrid Transformer-Mamba network, denoted as TransMamba, aimed at effectively capturing long-range rain-related dependencies.
arXiv Detail & Related papers (2024-08-31T10:03:19Z) - SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening [14.293042131263924]
We introduce a spatial-spectral integrated diffusion model for the remote sensing pansharpening task, called SSDiff.
SSDiff considers the pansharpening process as the fusion process of spatial and spectral components from the perspective of subspace decomposition.
arXiv Detail & Related papers (2024-04-17T16:30:56Z) - Multi-view Aggregation Network for Dichotomous Image Segmentation [76.75904424539543]
Dichotomous Image Segmentation (DIS) has recently emerged, targeting high-precision object segmentation from high-resolution natural images.
Existing methods rely on tedious multiple encoder-decoder streams and stages to gradually complete global localization and local refinement.
Inspired by this, we model DIS as a multi-view object perception problem and propose a parsimonious multi-view aggregation network (MVANet).
Experiments on the popular DIS-5K dataset show that our MVANet significantly outperforms state-of-the-art methods in both accuracy and speed.
arXiv Detail & Related papers (2024-04-11T03:00:00Z) - A Dual Domain Multi-exposure Image Fusion Network based on the Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via a Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
arXiv Detail & Related papers (2023-12-17T04:45:15Z) - Multi-Spectral Image Stitching via Spatial Graph Reasoning [52.27796682972484]
We propose a spatial graph reasoning based multi-spectral image stitching method.
We embed multi-scale complementary features from the same view position into a set of nodes.
By introducing long-range coherence along spatial and channel dimensions, the complementarity of pixel relations and channel interdependencies aids in the reconstruction of aligned multi-view features.
arXiv Detail & Related papers (2023-07-31T15:04:52Z) - Dif-Fusion: Towards High Color Fidelity in Infrared and Visible Image Fusion with Diffusion Models [54.952979335638204]
We propose a novel method with diffusion models, termed as Dif-Fusion, to generate the distribution of the multi-channel input data.
Our method is more effective than other state-of-the-art image fusion methods, especially in color fidelity.
arXiv Detail & Related papers (2023-01-19T13:37:19Z) - Attention-Based Scattering Network for Satellite Imagery [0.0]
We leverage the scattering transform to extract high-level features without additional trainable parameters.
Experiments show promising results on estimating tropical cyclone intensity and predicting the occurrence of lightning from satellite imagery.
arXiv Detail & Related papers (2022-10-21T18:25:34Z) - Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z) - Multispectral Fusion for Object Detection with Cyclic Fuse-and-Refine Blocks [3.6488662460683794]
We propose a new halfway feature fusion method for neural networks that leverages the complementarity/consistency balance existing in multispectral features.
We evaluate the effectiveness of our fusion method on two challenging multispectral datasets for object detection.
arXiv Detail & Related papers (2020-09-26T18:39:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.