Explaining Multimodal Data Fusion: Occlusion Analysis for Wilderness Mapping
- URL: http://arxiv.org/abs/2304.02407v1
- Date: Wed, 5 Apr 2023 12:35:02 GMT
- Title: Explaining Multimodal Data Fusion: Occlusion Analysis for Wilderness Mapping
- Authors: Burak Ekim and Michael Schmitt
- Abstract summary: This study proposes a deep learning framework for the modality-level interpretation of multimodal earth observation data.
We show that the task of wilderness mapping benefits greatly from auxiliary data such as land cover and nighttime light data.
- Score: 2.123635308480885
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Jointly harnessing complementary features of multi-modal input data in a
common latent space has long been known to be beneficial. However, the
influence of each modality on the model's decision remains a puzzle. This study
proposes a deep learning framework for the modality-level interpretation of
multimodal earth observation data in an end-to-end fashion. Leveraging an
explainable machine learning method, namely Occlusion Sensitivity, the proposed
framework investigates the influence of modalities under an early-fusion
scenario, in which the modalities are fused before the learning process. We show
that the task of wilderness mapping benefits greatly from auxiliary data such
as land cover and nighttime light data.
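Although no code accompanies this listing, the modality-level occlusion analysis described in the abstract reduces to a simple loop: replace the channels of one modality in the early-fused input with a baseline value, re-run the model, and record the drop in the prediction score. The snippet below is a minimal sketch in PyTorch; the channel layout, the model name, the target-class index for the wilderness class, and the zero fill value are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of modality-level occlusion sensitivity for an early-fusion
# model. All names (channel layout, fill value, target class) are assumptions.
import torch

# Hypothetical channel layout of the early-fused input tensor:
# optical bands, a land-cover layer, and a nighttime-light layer.
MODALITY_CHANNELS = {
    "optical": slice(0, 4),
    "land_cover": slice(4, 5),
    "night_lights": slice(5, 6),
}

def modality_occlusion_scores(model, x, target_class=1, fill_value=0.0):
    """Measure how much the score for `target_class` drops when each
    modality's channels are replaced by `fill_value`."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(x), dim=1)[:, target_class].mean()
        scores = {}
        for name, channels in MODALITY_CHANNELS.items():
            occluded = x.clone()
            occluded[:, channels] = fill_value      # occlude the whole modality
            pred = torch.softmax(model(occluded), dim=1)[:, target_class].mean()
            scores[name] = (base - pred).item()     # larger drop = more influence
    return scores
```

In this reading, a larger score drop for a modality's channels indicates a stronger influence of that modality on the decision, which is how auxiliary layers such as land cover and nighttime lights would be compared against the optical bands.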
Related papers
- NSF-MAP: Neurosymbolic Multimodal Fusion for Robust and Interpretable Anomaly Prediction in Assembly Pipelines [0.0]
This paper proposes a neurosymbolic AI and fusion-based approach for multimodal anomaly prediction in assembly pipelines.
We introduce a time series and image-based fusion model that leverages decision-level fusion techniques.
The results demonstrate that a neurosymbolic AI-based fusion approach that uses transfer learning can effectively harness the complementary strengths of time series and image data.
arXiv Detail & Related papers (2025-05-09T16:50:42Z)
- SceneGraMMi: Scene Graph-boosted Hybrid-fusion for Multi-Modal Misinformation Veracity Prediction [10.909813689420602]
We propose SceneGraMMi, a Scene Graph-boosted Hybrid-fusion approach for Multi-modal Misinformation veracity prediction.
Experimental results across four benchmark datasets show that SceneGraMMi consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-10-20T21:55:13Z)
- Deep End-to-End Survival Analysis with Temporal Consistency [49.77103348208835]
We present a novel Survival Analysis algorithm designed to efficiently handle large-scale longitudinal data.
A central idea in our method is temporal consistency, a hypothesis that past and future outcomes in the data evolve smoothly over time.
Our framework uniquely incorporates temporal consistency into large datasets by providing a stable training signal.
arXiv Detail & Related papers (2024-10-09T11:37:09Z)
- Supervised Multi-Modal Fission Learning [19.396207029419813]
Learning from multimodal datasets can leverage complementary information and improve performance in prediction tasks.
We propose a Multi-Modal Fission Learning model that simultaneously identifies globally joint, partially joint, and individual components.
arXiv Detail & Related papers (2024-09-30T17:58:03Z)
- Ensemble Modeling for Multimodal Visual Action Recognition [50.38638300332429]
We propose an ensemble modeling approach for multimodal action recognition.
We independently train individual modality models using a variant of focal loss tailored to handle the long-tailed distribution of the MECCANO [21] dataset.
arXiv Detail & Related papers (2023-08-10T08:43:20Z)
- Learning Unseen Modality Interaction [54.23533023883659]
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences.
We pose the problem of unseen modality interaction and introduce a first solution.
It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved.
arXiv Detail & Related papers (2023-06-22T10:53:10Z)
- Multimodal Explainability via Latent Shift applied to COVID-19 stratification [0.7831774233149619]
We present a deep architecture, which jointly learns modality reconstructions and sample classifications.
We validate our approach in the context of COVID-19 pandemic using the AIforCOVID dataset.
arXiv Detail & Related papers (2022-12-28T20:07:43Z)
- Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Enhancing ensemble learning and transfer learning in multimodal data analysis by adaptive dimensionality reduction [10.646114896709717]
In multimodal data analysis, not all observations would show the same level of reliability or information quality.
We propose an adaptive approach for dimensionality reduction to overcome this issue.
We test our approach on multimodal datasets acquired in diverse research fields.
arXiv Detail & Related papers (2021-05-08T11:53:12Z)
- OR-Net: Pointwise Relational Inference for Data Completion under Partial Observation [51.083573770706636]
This work uses relational inference to fill in the incomplete data.
We propose Omni-Relational Network (OR-Net) to model the pointwise relativity in two aspects.
arXiv Detail & Related papers (2021-05-02T06:05:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.