Quantitative Analysis of Primary Attribution Explainable Artificial
Intelligence Methods for Remote Sensing Image Classification
- URL: http://arxiv.org/abs/2306.04037v2
- Date: Mon, 4 Dec 2023 23:18:54 GMT
- Title: Quantitative Analysis of Primary Attribution Explainable Artificial
Intelligence Methods for Remote Sensing Image Classification
- Authors: Akshatha Mohan and Joshua Peeples
- Abstract summary: We leverage state-of-the-art machine learning approaches to perform remote sensing image classification.
We offer insights and recommendations for selecting the most appropriate XAI method.
- Score: 0.4532517021515834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a comprehensive analysis of quantitatively evaluating explainable
artificial intelligence (XAI) techniques for remote sensing image
classification. Our approach leverages state-of-the-art machine learning
approaches to perform remote sensing image classification across multiple
modalities. We investigate the results of the models qualitatively through XAI
methods. Additionally, we compare the XAI methods quantitatively through
various categories of desired properties. Through our analysis, we offer
insights and recommendations for selecting the most appropriate XAI method(s)
to gain a deeper understanding of the models' decision-making processes. The
code for this work is publicly available.
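The paper's public code is the authoritative reference; purely as a hedged sketch of this kind of quantitative comparison, the snippet below scores three Captum attribution methods with the infidelity metric. The ResNet-18 backbone, random input, and perturbation scale are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: quantitatively comparing primary-attribution XAI methods
# on an image classifier. Model and metric choices are illustrative
# assumptions, not the paper's exact pipeline.
import torch
import torchvision.models as models
from captum.attr import IntegratedGradients, Saliency, Occlusion
from captum.metrics import infidelity

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)          # stand-in for a remote sensing image
target = model(x).argmax(dim=1)

def perturb(inputs):
    # Small Gaussian perturbation, as required by the infidelity metric.
    noise = torch.randn_like(inputs) * 0.03
    return noise, inputs - noise

methods = {
    "IntegratedGradients": IntegratedGradients(model).attribute(x, target=target),
    "Saliency": Saliency(model).attribute(x, target=target),
    "Occlusion": Occlusion(model).attribute(
        x, target=target, sliding_window_shapes=(3, 16, 16), strides=(3, 8, 8)
    ),
}
for name, attr in methods.items():
    score = infidelity(model, perturb, x, attr, target=target)
    print(f"{name}: infidelity = {score.item():.4f}")  # lower is better
```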
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
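For orientation, the classical linear Kalman filter step that these AI-aided designs build on can be sketched as follows. This is a textbook implementation, not the article's; the learned components (e.g., a DNN-predicted gain or noise covariance) would replace parts of it.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    AI-aided variants typically replace pieces of this loop,
    e.g. learning the gain K or the noise covariances Q and R."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```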
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- Extending CAM-based XAI methods for Remote Sensing Imagery Segmentation [7.735470452949379]
We introduce a new XAI evaluation methodology and metric based on "Entropy" to measure the model uncertainty.
We show that using Entropy to monitor the model uncertainty in segmenting the pixels within the target class is a more suitable evaluation criterion.
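One plausible reading of this measure (an assumption; the paper gives the precise definition) is the mean Shannon entropy of the per-pixel class posterior over pixels assigned to the target class:

```python
import torch
import torch.nn.functional as F

def target_class_entropy(logits, target_class):
    """Mean Shannon entropy of per-pixel softmax posteriors over pixels
    predicted as `target_class`. logits: (C, H, W). Illustrative only."""
    probs = F.softmax(logits, dim=0)                               # (C, H, W)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=0)   # (H, W)
    mask = probs.argmax(dim=0) == target_class
    return entropy[mask].mean() if mask.any() else torch.tensor(0.0)
```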
arXiv Detail & Related papers (2023-10-03T07:01:23Z)
- Strategies to exploit XAI to improve classification systems [0.0]
XAI aims to provide insights into the decision-making process of AI models, allowing users to understand their results beyond their decisions.
Most XAI literature focuses on how to explain an AI system, while less attention has been given to how XAI methods can be exploited to improve an AI system.
arXiv Detail & Related papers (2023-06-09T10:38:26Z)
- An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produce highly correlated results, indicating potential redundancy.
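The redundancy check amounts to correlating the scores that different metrics assign to the same methods; a minimal sketch with made-up numbers and Spearman rank correlation:

```python
import numpy as np
from scipy.stats import spearmanr

# Rows: XAI methods, columns: evaluation metrics (scores are made up).
scores = np.array([
    [0.81, 0.77, 0.30],
    [0.62, 0.58, 0.41],
    [0.90, 0.88, 0.25],
    [0.45, 0.49, 0.60],
])
rho, _ = spearmanr(scores)  # pairwise rank correlation between metric columns
print(rho)                  # |rho| near 1 suggests two metrics are redundant
```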
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Explainable Analysis of Deep Learning Methods for SAR Image Classification [11.861924367762033]
We utilize explainable artificial intelligence (XAI) methods for the SAR image classification task.
We trained state-of-the-art convolutional neural networks for each polarization format on the OpenSARUrban dataset.
Occlusion achieves the most reliable interpretation performance in terms of Max-Sensitivity, but with a low-resolution explanation heatmap.
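Max-Sensitivity measures how much an attribution changes under small input perturbations; below is a hedged sketch of scoring Occlusion with Captum's implementation, where the model and input are placeholders rather than the paper's SAR networks.

```python
import torch
import torchvision.models as models
from captum.attr import Occlusion
from captum.metrics import sensitivity_max

model = models.resnet18(weights=None).eval()  # placeholder for a SAR CNN
x = torch.rand(1, 3, 224, 224)                # placeholder SAR input
target = model(x).argmax(dim=1)

occlusion = Occlusion(model)
sens = sensitivity_max(
    occlusion.attribute, x,
    target=target,
    sliding_window_shapes=(3, 16, 16),
    strides=(3, 8, 8),
)
print(f"Max-Sensitivity: {sens.item():.4f}")  # lower means more robust
```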
arXiv Detail & Related papers (2022-04-14T06:42:21Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically, through experiments in toy and realistic settings, how explanations can help improve properties such as a model's generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Image Quality Assessment in the Modern Age [53.19271326110551]
This tutorial provides the audience with the basic theories, methodologies, and current progress of image quality assessment (IQA).
We will first revisit several subjective quality assessment methodologies, with emphasis on how to properly select visual stimuli.
Both hand-engineered and (deep) learning-based methods will be covered.
arXiv Detail & Related papers (2021-10-19T02:38:46Z)
- Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing [0.0]
We develop deep learning models with state-of-the-art performance in benchmark datasets.
Ten XAI methods were employed to understand and interpret the models' predictions.
Occlusion, Grad-CAM and LIME were the most interpretable and reliable XAI methods.
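As a hedged sketch of one of the methods found most reliable, Grad-CAM can be computed with Captum's LayerGradCam; the model, layer, and label index below are illustrative assumptions, not the study's multi-label networks.

```python
import torch
import torchvision.models as models
from captum.attr import LayerGradCam, LayerAttribution

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)          # stand-in for a multi-label RS image

# For multi-label models, Grad-CAM is computed one label at a time.
gradcam = LayerGradCam(model, model.layer4)
attr = gradcam.attribute(x, target=0)   # heatmap for label index 0
heatmap = LayerAttribution.interpolate(attr, x.shape[-2:])  # upsample to input
print(heatmap.shape)                    # torch.Size([1, 1, 224, 224])
```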
arXiv Detail & Related papers (2021-04-03T11:13:14Z)
- An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks [18.70973390984415]
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques.
Evaluation and verification are usually achieved through human visual interpretation of individual images or texts.
We propose an empirical study and benchmark framework to apply attribution methods for neural networks developed for images and text data on time series.
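Mechanically, reusing an image attribution method on time series only requires a 1D model; here is a minimal sketch with a toy CNN and Integrated Gradients (the architecture is a placeholder, not one of the benchmark's models).

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy 1D CNN classifier standing in for a time-series model.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 3),
).eval()

x = torch.randn(1, 1, 128)              # one univariate series of length 128
target = model(x).argmax(dim=1)
attr = IntegratedGradients(model).attribute(x, target=target)
print(attr.shape)                       # per-timestep attribution: (1, 1, 128)
```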
arXiv Detail & Related papers (2020-12-08T10:33:57Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
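Such agreement is often quantified with token-level average precision, treating human-marked tokens as positives; a minimal sketch with made-up scores (one plausible measure, not necessarily the paper's exact choice):

```python
import numpy as np
from sklearn.metrics import average_precision_score

human = np.array([0, 1, 1, 0, 0, 1, 0])                     # human-marked salient tokens
saliency = np.array([0.1, 0.8, 0.6, 0.2, 0.05, 0.4, 0.3])   # method's token scores
print(f"token-level AP: {average_precision_score(human, saliency):.3f}")
```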
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.