Opening the Black-Box: A Systematic Review on Explainable AI in Remote
Sensing
- URL: http://arxiv.org/abs/2402.13791v1
- Date: Wed, 21 Feb 2024 13:19:58 GMT
- Title: Opening the Black-Box: A Systematic Review on Explainable AI in Remote
Sensing
- Authors: Adrian Höhl, Ivica Obadic, Miguel Ángel Fernández Torres, Hiba
Najjar, Dario Oliveira, Zeynep Akata, Andreas Dengel, Xiao Xiang Zhu
- Abstract summary: Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in Remote Sensing.
We perform a systematic review to identify the key trends of how explainable AI is used in Remote Sensing.
We shed light on novel explainable AI approaches and emerging directions that tackle specific Remote Sensing challenges.
- Score: 52.110707276938
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, black-box machine learning approaches have become a dominant
modeling paradigm for knowledge extraction in Remote Sensing. Despite the
potential benefits of uncovering the inner workings of these models with
explainable AI, a comprehensive overview summarizing the explainable AI methods
used, together with their objectives, findings, and challenges in Remote Sensing
applications is still missing. In this paper, we address this issue by
performing a systematic review to identify the key trends of how explainable AI
is used in Remote Sensing and shed light on novel explainable AI approaches and
emerging directions that tackle specific Remote Sensing challenges. We also
reveal the common patterns of explanation interpretation, discuss the extracted
scientific insights in Remote Sensing, and reflect on the approaches used to
evaluate explainable AI methods. Our review provides a complete summary of
the state-of-the-art in the field. Further, we give a detailed outlook on the
challenges and promising research directions, representing a basis for novel
methodological development and a useful starting point for new researchers in
the field of explainable AI in Remote Sensing.
Related papers
- Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
The surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly applied to neural network models.
We introduce both human and quantitative evaluations to measure algorithm performance.
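To make the idea concrete: a vanilla-gradient saliency map, the simplest member of this family, backpropagates a class score to the input. The following is a minimal PyTorch sketch, assuming a placeholder classifier and a dummy image; it illustrates the technique, not the paper's implementation.
```python
import torch
import torch.nn as nn

# Placeholder classifier (assumption); any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy input image

# Vanilla gradients: backpropagate the top-class score to the input.
score = model(x)[0].max()
score.backward()

# Per-pixel saliency: maximum absolute gradient over color channels.
saliency = x.grad.abs().max(dim=1).values  # shape (1, 32, 32)
print(saliency.shape)
```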
arXiv Detail & Related papers (2024-03-15T15:49:31Z)
- Open-world machine learning: A review and new outlooks [117.33922838201993]
The article presents a holistic view of open-world machine learning. It investigates unknown rejection, novelty discovery, and continual learning, and aims to help researchers build more powerful AI systems in their respective fields.
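The summary names the settings but not a method; a common baseline for unknown rejection is thresholding the maximum softmax probability. A minimal numpy sketch, assuming illustrative logits and a hypothetical 0.5 confidence threshold:
```python
import numpy as np

def reject_unknowns(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return predicted class indices, or -1 where the sample is flagged
    as unknown because its max softmax confidence falls below threshold."""
    z = logits - logits.max(axis=1, keepdims=True)    # stabilize exp
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    preds = probs.argmax(axis=1)
    preds[probs.max(axis=1) < threshold] = -1         # reject low confidence
    return preds

logits = np.array([[4.0, 0.1, 0.2], [0.4, 0.5, 0.6]])
print(reject_unknowns(logits))  # [0, -1]: second sample is rejected as unknown
```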
arXiv Detail & Related papers (2024-03-04T06:25:26Z)
- Representation Engineering: A Top-Down Approach to AI Transparency [132.0398250233924]
We identify and characterize the emerging area of representation engineering (RepE).
RepE places population-level representations, rather than neurons or circuits, at the center of analysis.
We showcase how these methods can provide traction on a wide range of safety-relevant problems.
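As a rough sketch of what a population-level analysis can look like (an assumption about the general flavor of RepE, not the paper's exact procedure): collect hidden representations for two contrasting input populations and read out a direction from their statistics.
```python
import numpy as np

# Hidden states collected from a model for two contrasting input populations
# (placeholder random data standing in for real activations).
rng = np.random.default_rng(0)
pos_acts = rng.normal(0.5, 1.0, size=(200, 64))   # (samples, hidden_dim)
neg_acts = rng.normal(-0.5, 1.0, size=(200, 64))

# Population-level reading vector: difference of the class means.
direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# Score a new activation by projecting it onto that direction.
new_act = rng.normal(0.5, 1.0, size=64)
print(float(new_act @ direction))  # positive -> closer to the "pos" population
```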
arXiv Detail & Related papers (2023-10-02T17:59:07Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
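A hedged sketch of the instance-to-dataset shift (not the SOXAI authors' code): aggregate per-instance attributions over a whole dataset and flag features or concepts whose average relevance is negligible, candidates for the kind of removal described above. The data and the 10% threshold are illustrative assumptions.
```python
import numpy as np

# Per-instance attribution scores from any XAI method:
# rows = samples, columns = features/concepts (placeholder data).
rng = np.random.default_rng(1)
attributions = rng.normal(size=(1000, 20))
attributions[:, 5] *= 0.01   # make one feature nearly irrelevant

# Dataset-level view: mean absolute attribution per feature.
global_relevance = np.abs(attributions).mean(axis=0)

# Flag candidates for removal from the feature/concept set.
irrelevant = np.where(global_relevance < 0.1 * global_relevance.max())[0]
print("low-relevance feature indices:", irrelevant)  # includes index 5
```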
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics [24.86176236641865]
We present the first survey in Explainable AI that focuses on the methods and metrics for interpreting deep visual models.
Covering landmark contributions alongside the state of the art, we not only provide a taxonomic organization of the existing techniques but also excavate a range of evaluation metrics.
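The survey's metric catalog is not reproduced here, but one widely used faithfulness metric of this kind is the deletion test: remove the most-attributed features first and watch how fast the model score drops. A minimal numpy sketch with a toy linear model; all names and data are illustrative.
```python
import numpy as np

def deletion_curve(model, x, attribution, steps=10):
    """Zero out features in descending attribution order and record the
    model score; a faithful attribution makes the score fall quickly."""
    order = np.argsort(-attribution)          # most important first
    scores, xd = [], x.copy()
    for chunk in np.array_split(order, steps):
        xd[chunk] = 0.0
        scores.append(model(xd))
    return scores

w = np.array([3.0, -2.0, 0.5, 0.1])           # toy linear "model"
model = lambda x: float(w @ x)
x = np.ones(4)
print(deletion_curve(model, x, np.abs(w * x), steps=4))
```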
arXiv Detail & Related papers (2023-01-31T06:49:42Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
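For orientation, the core step of counterfactual explanation generation is a minimal perturbation that flips the model's decision. A simple gradient-descent sketch in PyTorch, assuming a placeholder classifier and an illustrative distance penalty; this is not the paper's framework, which focuses on evaluation across cognitive levels.
```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))   # placeholder binary classifier
model.eval()

x = torch.tensor([[1.0, 0.5, -0.2, 0.8]])
target = torch.tensor([1])               # class the counterfactual should get

cf = x.clone().requires_grad_(True)
opt = torch.optim.Adam([cf], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    # Flip the prediction while staying close to the original input.
    loss = nn.functional.cross_entropy(model(cf), target) \
         + 0.1 * (cf - x).pow(2).sum()
    loss.backward()
    opt.step()

print("counterfactual:", cf.detach(), "pred:", model(cf).argmax(dim=1).item())
```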
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- A Critical Review of Inductive Logic Programming Techniques for Explainable AI [9.028858411921906]
Inductive Logic Programming (ILP) is a subfield of symbolic artificial intelligence.
ILP generates explainable first-order clausal theories from examples and background knowledge.
Existing ILP systems often have a vast solution space, and the induced solutions are highly sensitive to noise and disturbances.
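To ground the description above, here is a toy generate-and-test sketch in Python, a drastic simplification of real ILP systems: one candidate clause is accepted iff it entails every positive example and no negative one. The predicates and facts are invented for illustration; the last line shows how a single noisy label breaks the clause, echoing the sensitivity mentioned above.
```python
# Background knowledge: parent/2 facts.
parent = {("ann", "bob"), ("bob", "carl"), ("dana", "eve")}
people = {p for pair in parent for p in pair}

# Examples for the target predicate grandparent/2.
pos = {("ann", "carl")}
neg = {("ann", "bob"), ("dana", "eve")}

# One candidate clause: grandparent(X,Z) :- parent(X,Y), parent(Y,Z).
def covers(x, z):
    return any((x, y) in parent and (y, z) in parent for y in people)

def accept(pos, neg):
    # Generate-and-test: keep the clause iff it entails every positive
    # example and no negative one (real systems search a vast clause space).
    return all(covers(*e) for e in pos) and not any(covers(*e) for e in neg)

print(accept(pos, neg))                      # True: clause is consistent
print(accept(pos, neg | {("ann", "carl")}))  # False: one noisy label breaks it
```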
arXiv Detail & Related papers (2021-12-31T06:34:32Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications [12.239046765871109]
Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear Machine Learning models are receiving increased attention.
We provide a timely overview of this active emerging field, with a focus on 'post-hoc' explanations, and explain its theoretical foundations.
We discuss challenges and possible future directions of this exciting foundational field of machine learning.
arXiv Detail & Related papers (2020-03-17T10:45:51Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.