A systematic review and meta-analysis of Digital Elevation Model (DEM)
fusion: pre-processing, methods and applications
- URL: http://arxiv.org/abs/2203.15026v1
- Date: Mon, 28 Mar 2022 18:39:14 GMT
- Title: A systematic review and meta-analysis of Digital Elevation Model (DEM)
fusion: pre-processing, methods and applications
- Authors: Chukwuma Okolie and Julian Smit
- Abstract summary: 2.5D/3D Digital Elevation Model (DEM) fusion is a key application of data fusion in remote sensing.
DEM fusion takes advantage of the complementary characteristics of multi-source DEMs to deliver a more complete, accurate and reliable dataset.
This paper provides a systematic review of DEM fusion: the pre-processing workflow, methods and applications, enhanced with a meta-analysis.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The remote sensing community has identified data fusion as one of
the key challenges of the 21st century. The subject of image fusion in
two-dimensional (2D) space has been covered in several published reviews.
However, the special case of 2.5D/3D Digital Elevation Model (DEM) fusion has
not been addressed to date. DEM fusion is a key application of data fusion in
remote sensing. It takes advantage of the complementary characteristics of
multi-source DEMs to deliver a more complete, accurate and reliable elevation
dataset. Although several methods for fusing DEMs have been developed, the
absence of a well-rounded review has limited their proliferation among
researchers and end-users. It is often required to combine knowledge from
multiple studies to inform a holistic perspective and guide further research.
In response, this paper provides a systematic review of DEM fusion: the
pre-processing workflow, methods and applications, enhanced with a
meta-analysis. Through the discussion and comparative analysis, unresolved
challenges and open issues were identified, and future directions for research
were proposed. This review is a timely solution and an invaluable source of
information for researchers within the fields of remote sensing and spatial
information science, and the data fusion community at large.
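To illustrate the core idea the review covers — combining complementary multi-source DEMs into a more complete and accurate grid — here is a minimal, hypothetical sketch of per-cell inverse-variance weighted fusion with void filling. It is not taken from the paper; the function, variable names, and values are illustrative only.

```python
# Illustrative sketch (not from the paper): fusing two co-registered DEM
# tiles by inverse-variance weighting. Grid values are elevations in
# metres; None marks a void (no-data cell) in a source DEM.

def fuse_dems(dem_a, dem_b, var_a, var_b):
    """Per-cell inverse-variance weighted average of two DEM grids.

    Cells void in one source fall back to the other; cells void in both
    stay void. var_a/var_b are assumed error variances of each source,
    e.g. derived from the products' stated vertical accuracies.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = []
    for row_a, row_b in zip(dem_a, dem_b):
        fused_row = []
        for za, zb in zip(row_a, row_b):
            if za is None and zb is None:
                fused_row.append(None)        # void in both sources
            elif za is None:
                fused_row.append(zb)          # gap-fill from source B
            elif zb is None:
                fused_row.append(za)          # gap-fill from source A
            else:
                fused_row.append((w_a * za + w_b * zb) / (w_a + w_b))
        fused.append(fused_row)
    return fused

# Example: a 2x2 tile with one void cell in each source.
dem_a = [[100.0, None], [102.0, 103.0]]
dem_b = [[101.0, 101.5], [None, 104.0]]
fused = fuse_dems(dem_a, dem_b, var_a=4.0, var_b=4.0)  # equal weights
```

With equal variances this reduces to a plain average where both sources have data, and to gap-filling where only one does; real fusion pipelines reviewed in the paper add co-registration, bias correction, and spatially varying weights before this step.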
Related papers
- RADAR: Robust Two-stage Modality-incomplete Industrial Anomaly Detection [61.71770293720491]
We propose a novel two-stage Robust modAlity-incomplete fusing and Detecting frAmewoRk, abbreviated as RADAR.
Our bootstrapping philosophy is to enhance two stages in MIIAD, improving the robustness of the Multimodal Transformer.
Our experimental results demonstrate that the proposed RADAR significantly surpasses conventional MIAD methods in terms of effectiveness and robustness.
arXiv Detail & Related papers (2024-10-02T16:47:55Z)
- mmFUSION: Multimodal Fusion for 3D Objects Detection [18.401155770778757]
Multi-sensor fusion is essential for accurate 3D object detection in self-driving systems.
In this paper, we propose a new intermediate-level multi-modal fusion approach to overcome these challenges.
The code with the mmdetection3D project plugin will be publicly available soon.
arXiv Detail & Related papers (2023-11-07T15:11:27Z)
- Deep Model Fusion: A Survey [37.39100741978586]
Deep model fusion/merging is an emerging technique that merges the parameters or predictions of multiple deep learning models into a single one.
It faces several challenges, including high computational cost, high-dimensional parameter space, interference between different heterogeneous models, etc.
arXiv Detail & Related papers (2023-09-27T14:40:12Z)
- Deep Equilibrium Multimodal Fusion [88.04713412107947]
Multimodal fusion integrates the complementary information present in multiple modalities and has gained much attention recently.
We propose a novel deep equilibrium (DEQ) method towards multimodal fusion via seeking a fixed point of the dynamic multimodal fusion process.
Experiments on BRCA, MM-IMDB, CMU-MOSI, SUN RGB-D, and VQA-v2 demonstrate the superiority of our DEQ fusion.
arXiv Detail & Related papers (2023-06-29T03:02:20Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- Deep Learning in Multimodal Remote Sensing Data Fusion: A Comprehensive Review [33.40031994803646]
This survey aims to present a systematic overview of DL-based multimodal RS data fusion.
Sub-fields of multimodal RS data fusion are reviewed in terms of the data modalities to be fused.
The remaining challenges and potential future directions are highlighted.
arXiv Detail & Related papers (2022-05-03T09:08:16Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in that common space, either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Paradigm selection for Data Fusion of SAR and Multispectral Sentinel data applied to Land-Cover Classification [63.072664304695465]
In this letter, four data fusion paradigms based on Convolutional Neural Networks (CNNs) are analyzed and implemented.
The goal is to provide a systematic procedure for choosing the data fusion framework that yields the best classification results.
The procedure has been validated for land-cover classification but it can be transferred to other cases.
arXiv Detail & Related papers (2021-06-18T11:36:54Z)
- D-SRGAN: DEM Super-Resolution with Generative Adversarial Networks [0.0]
LiDAR data has been used as the primary source of Digital Elevation Models (DEMs).
DEMs have been used in a variety of applications like road extraction, hydrological modeling, flood mapping, and surface analysis.
Deep learning techniques have become attractive to researchers for their performance in learning features from high-resolution datasets.
arXiv Detail & Related papers (2020-04-09T19:57:49Z)
- Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies in both public datasets and on progressively degraded datasets.
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
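The SelectFusion summary above describes weighting latent features from different sensors by an estimated reliability. A toy sketch of that gating idea, with hypothetical names and hand-set reliability scores (in the actual method these are predicted by a network, and this code is not the authors'):

```python
# Toy sketch of reliability-gated sensor fusion, inspired by the
# SelectFusion summary above (hypothetical, not the authors' code).
# Each sensor yields a feature vector; per-sensor reliability scores are
# softmax-normalized into weights, and the fused feature is the weighted sum.
import math

def softmax(scores):
    m = max(scores)                      # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gated_fuse(features, reliabilities):
    """Weighted sum of per-sensor feature vectors.

    features: equal-length feature vectors, one per sensor.
    reliabilities: one scalar per sensor; a degraded sensor gets a low
    score and therefore contributes little to the fused feature.
    """
    weights = softmax(reliabilities)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]

# Two sensors trusted equally: the fused feature is their plain average.
camera = [1.0, 0.0]
lidar = [0.0, 1.0]
fused = gated_fuse([camera, lidar], reliabilities=[2.0, 2.0])
```

Lowering one sensor's reliability score shifts the softmax weight toward the other sensor, which is the "selective" behavior the summary refers to.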
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.