Explainable GeoAI: Can saliency maps help interpret artificial
intelligence's learning process? An empirical study on natural feature
detection
- URL: http://arxiv.org/abs/2303.09660v1
- Date: Thu, 16 Mar 2023 21:37:29 GMT
- Title: Explainable GeoAI: Can saliency maps help interpret artificial
intelligence's learning process? An empirical study on natural feature
detection
- Authors: Chia-Yu Hsu and Wenwen Li
- Abstract summary: This paper compares popular saliency map generation techniques and their strengths and weaknesses in interpreting GeoAI and deep learning models' reasoning behaviors.
The experiments used two GeoAI-ready datasets to demonstrate the generalizability of the research findings.
- Score: 4.52308938611108
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Improving the interpretability of geospatial artificial intelligence (GeoAI)
models has become critically important to open the "black box" of complex AI
models, such as deep learning. This paper compares popular saliency map
generation techniques and their strengths and weaknesses in interpreting GeoAI
and deep learning models' reasoning behaviors, particularly when applied to
geospatial analysis and image processing tasks. We surveyed two broad classes
of model explanation methods: perturbation-based and gradient-based methods.
The former identifies image areas important to the machine's predictions by
modifying a localized area of the input image. The latter
evaluates the contribution of every single pixel of the input image to the
model's prediction results through gradient backpropagation. In this study,
three algorithms (the occlusion method, the integrated gradients method, and
the class activation map method) are examined for a natural feature detection task
using deep learning. The algorithms' strengths and weaknesses are discussed,
and the consistency between model-learned and human-understandable concepts for
object recognition is also compared. The experiments used two GeoAI-ready
datasets to demonstrate the generalizability of the research findings.
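To make the two classes of explanation methods concrete, the sketch below illustrates one representative of each family for an arbitrary image classifier: an occlusion (perturbation-based) saliency map and integrated gradients (gradient-based) attributions. It is a minimal illustration in PyTorch, not the authors' implementation; `model`, `image`, and `target_class` are assumed placeholders for a trained classifier, a preprocessed input tensor, and the class index of the detected natural feature.

```python
# Minimal, illustrative sketch (not the paper's code) of the two families of
# saliency methods: perturbation-based (occlusion) and gradient-based
# (integrated gradients). Assumes a trained PyTorch classifier `model`, a
# preprocessed image tensor `image` of shape (1, C, H, W), and an integer
# `target_class` for the feature of interest.
import torch
import torch.nn.functional as F


def occlusion_saliency(model, image, target_class, patch=16, stride=8):
    """Perturbation-based: slide a zero-valued patch over the image and
    record how much the target-class probability drops at each location."""
    model.eval()
    _, _, h, w = image.shape
    with torch.no_grad():
        base = F.softmax(model(image), dim=1)[0, target_class].item()
    heat = torch.zeros(h, w)
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.clone()
            occluded[:, :, top:top + patch, left:left + patch] = 0.0
            with torch.no_grad():
                p = F.softmax(model(occluded), dim=1)[0, target_class].item()
            # A larger probability drop marks a more important region.
            heat[top:top + patch, left:left + patch] += base - p
    return heat


def integrated_gradients(model, image, target_class, steps=50):
    """Gradient-based: accumulate gradients along a straight path from a
    black baseline to the input, then scale by the input difference."""
    model.eval()
    baseline = torch.zeros_like(image)
    total_grad = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (image - baseline)).requires_grad_(True)
        score = model(interp)[0, target_class]
        grad, = torch.autograd.grad(score, interp)
        total_grad += grad
    # Average gradient times (input - baseline) gives per-pixel attributions;
    # summing over channels yields a single-channel saliency map.
    return ((image - baseline) * total_grad / steps).squeeze(0).sum(dim=0)
```

Either map can be normalized and overlaid on the source imagery, which is how the consistency between model-attended regions and human-understandable object extents is typically inspected.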
Related papers
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z)
- Underwater SONAR Image Classification and Analysis using LIME-based Explainable Artificial Intelligence [0.0]
This paper explores the application of an eXplainable Artificial Intelligence (XAI) tool to interpret underwater image classification results.
An extensive analysis of transfer learning techniques for image classification using benchmark Convolutional Neural Network (CNN) architectures is carried out.
XAI techniques make the results interpretable in a more human-understandable way, thus increasing confidence in the results and the reliability of the model.
arXiv Detail & Related papers (2024-08-23T04:54:18Z)
- Extending CAM-based XAI methods for Remote Sensing Imagery Segmentation [7.735470452949379]
We introduce a new XAI evaluation methodology and metric based on "Entropy" to measure the model uncertainty.
We show that using Entropy to monitor model uncertainty when segmenting the pixels within the target class is a more suitable evaluation approach.
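As a rough illustration of the entropy idea (an assumed sketch, not the paper's code), per-pixel softmax probabilities from a segmentation model can be turned into a Shannon-entropy map in which high values flag pixels the model is uncertain about:

```python
# Hedged sketch: per-pixel Shannon entropy as an uncertainty map for a
# segmentation model's softmax output; `probs` is an assumed tensor of
# shape (num_classes, H, W) for one image.
import torch

def entropy_map(probs, eps=1e-12):
    """Return an (H, W) map; higher values mean a less certain prediction."""
    return -(probs * (probs + eps).log()).sum(dim=0)
```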
arXiv Detail & Related papers (2023-10-03T07:01:23Z)
- Assessment of a new GeoAI foundation model for flood inundation mapping [4.312965283062856]
This paper evaluates the performance of the first-of-its-kind geospatial foundation model, IBM-NASA's Prithvi, to support a crucial geospatial analysis task: flood inundation mapping.
A benchmark dataset, Sen1Floods11, is used in the experiments, and the models' predictability, generalizability, and transferability are evaluated.
Results show the good transferability of the Prithvi model, highlighting its performance advantages in segmenting flooded areas in previously unseen regions.
arXiv Detail & Related papers (2023-09-25T19:50:47Z)
- Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
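As a toy illustration of that observation (an assumed sketch, not the paper's implementation): the gradient of an output with respect to an input is a sum over graph paths of products of local derivatives, and running the same dynamic program with a different semiring, such as max-product, yields a different statistic, here the weight of the single most influential path. The graph and values below are invented for the example.

```python
# Hedged toy sketch of semiring backpropagation. `edges` maps each node to
# its parents with local derivatives; the generic routine aggregates over
# paths with the supplied semiring operations.
from collections import defaultdict


def semiring_backprop(edges, reverse_topo_order, output, plus, times, zero, one):
    """Return the semiring 'adjoint' of every node with respect to `output`."""
    adjoint = defaultdict(lambda: zero)
    adjoint[output] = one
    for node in reverse_topo_order:
        for parent, local_deriv in edges.get(node, []):
            adjoint[parent] = plus(adjoint[parent],
                                   times(adjoint[node], local_deriv))
    return adjoint


# Invented graph: x -> a (2.0), x -> b (3.0), a -> y (0.5), b -> y (1.0).
edges = {"y": [("a", 0.5), ("b", 1.0)], "a": [("x", 2.0)], "b": [("x", 3.0)]}
order = ["y", "a", "b"]

# Sum-product semiring recovers the ordinary gradient: dy/dx = 0.5*2 + 1*3 = 4.0
grad = semiring_backprop(edges, order, "y",
                         plus=lambda u, v: u + v, times=lambda u, v: u * v,
                         zero=0.0, one=1.0)
# Max-product semiring gives the weight of the most influential path: 3.0
top_path = semiring_backprop(edges, order, "y",
                             plus=max, times=lambda u, v: u * v,
                             zero=0.0, one=1.0)
print(grad["x"], top_path["x"])  # 4.0 3.0
```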
arXiv Detail & Related papers (2023-07-06T15:19:53Z)
- Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models [38.50257023156464]
We examined whether embedding human attention knowledge into saliency-based explainable AI (XAI) methods could enhance their plausibility and faithfulness.
We first developed new gradient-based XAI methods for object detection models to generate object-specific explanations.
We then developed Human Attention-Guided XAI to learn from human attention how to best combine explanatory information from the models to enhance explanation plausibility.
arXiv Detail & Related papers (2023-05-05T15:05:07Z)
- Evaluation Challenges for Geospatial ML [5.576083740549639]
Geospatial machine learning models and maps are increasingly used for downstream analyses in science and policy.
The correct way to measure performance of spatial machine learning outputs has been a topic of debate.
This paper delineates unique challenges of model evaluation for geospatial machine learning with global or remotely sensed datasets.
arXiv Detail & Related papers (2023-03-31T14:24:06Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Neural Topological SLAM for Visual Navigation [112.73876869904]
We design topological representations for space that leverage semantics and afford approximate geometric reasoning.
We describe supervised learning-based algorithms that can build, maintain and use such representations under noisy actuation.
arXiv Detail & Related papers (2020-05-25T17:56:29Z)
- Structured Landmark Detection via Topology-Adapting Deep Graph Learning [75.20602712947016]
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.