An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
- URL: http://arxiv.org/abs/2109.05327v1
- Date: Sat, 11 Sep 2021 17:44:13 GMT
- Title: An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
- Authors: Francesco Sovrano, Fabio Vitali
- Abstract summary: We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user study on two realistic AI-based systems for healthcare and finance.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous government initiatives (e.g. the EU with the GDPR) are coming to the conclusion that the increasing complexity of modern software systems must be counterbalanced by Rights to Explanation and by metrics for the Impact Assessment of these tools, which allow humans to understand and oversee the output of Automated Decision Making systems. Explainable AI was born as a pathway to allow humans to explore and understand the inner workings of complex systems. However, establishing what counts as an explanation and objectively evaluating explainability are not trivial tasks. With this paper, we present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way, exploiting a specific model from Ordinary Language Philosophy called Achinstein's Theory of Explanations. To verify whether this metric actually behaves as explainability is expected to, we designed a few experiments and a user study on two realistic AI-based systems for healthcare and finance, involving well-known AI technologies including Artificial Neural Networks and TreeSHAP. The results we obtained are very encouraging, suggesting that our proposed metric for measuring the Degree of eXplainability is robust across several scenarios and can eventually be exploited for a lawful Impact Assessment of an Automated Decision Making system.
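The abstract names TreeSHAP as one of the explanation technologies used in the healthcare and finance experiments, but it does not detail how the Degree of eXplainability itself is computed, so that metric is not reproduced here. Below is a minimal sketch of producing the kind of TreeSHAP explanation whose explainability such a metric would then score; the toy dataset and the gradient-boosting model are assumptions made for the example.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for the paper's healthcare/finance systems (assumption: any
# tabular classification task suffices to illustrate TreeSHAP).
X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeSHAP attributes each prediction to the input features; these feature
# attributions are the explanatory output that a Degree-of-eXplainability
# metric would then evaluate (the metric itself is not implemented here).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # (5, n_features): one contribution per feature per sample
```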
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that one of the two main user groups, developers, requires information about the internal operations of the model.
Acceptance of AI systems depends on information about the system's functions and performance, as well as on privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches aimed at better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome (see the sketch after this list).
Current approaches rarely take into account the feasibility of the actions needed to realize the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology for generating counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information [6.570220157893279]
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z)
- The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [1.2762298148425795]
Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
arXiv Detail & Related papers (2020-07-31T09:08:27Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
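As referenced in the CEILS entry above, here is a minimal, illustrative sketch of the counterfactual-explanation idea: searching for a small set of feature changes that flips a model's prediction to the desired outcome. This is a naive greedy search written for illustration only, not the CEILS method; the dataset, model, and step sizes are assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# Toy setup (assumption): a binary classifier on a tabular dataset.
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

def naive_counterfactual(x, target=1, max_steps=200, step_frac=0.05):
    """Greedily perturb one feature per step until the model predicts `target`."""
    x_cf = x.copy()
    step = step_frac * (X.max(axis=0) - X.min(axis=0))  # per-feature step size
    for _ in range(max_steps):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        p0 = model.predict_proba(x_cf.reshape(1, -1))[0, target]
        best_gain, best_candidate = 0.0, None
        for j in range(x_cf.size):          # try nudging each feature up or down
            for sign in (1.0, -1.0):
                trial = x_cf.copy()
                trial[j] += sign * step[j]
                p = model.predict_proba(trial.reshape(1, -1))[0, target]
                if p - p0 > best_gain:
                    best_gain, best_candidate = p - p0, trial
        if best_candidate is None:          # no single change helps any further
            break
        x_cf = best_candidate
    return x_cf

x = X[y == 0][0]                     # an instance from the undesired class (assumption)
x_cf = naive_counterfactual(x)
changed = np.flatnonzero(~np.isclose(x, x_cf))
print("features changed:", changed)  # the "set of features that need to be changed"
```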