Evaluating explainability for machine learning predictions using
model-agnostic metrics
- URL: http://arxiv.org/abs/2302.12094v2
- Date: Mon, 29 Jan 2024 18:56:08 GMT
- Title: Evaluating explainability for machine learning predictions using
model-agnostic metrics
- Authors: Cristian Munoz, Kleyton da Costa, Bernardo Modenesi, Adriano Koshiyama
- Abstract summary: We present novel metrics to quantify the degree to which AI model predictions can be easily explained by their features.
Our metrics summarize different aspects of explainability into scalars, providing a more comprehensive understanding of model predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rapid advancements in artificial intelligence (AI) technology have brought
about a plethora of new challenges in terms of governance and regulation. AI
systems are being integrated into various industries and sectors, creating a
demand from decision-makers to possess a comprehensive and nuanced
understanding of the capabilities and limitations of these systems. One
critical aspect of this demand is the ability to explain the results of machine
learning models, which is crucial to promoting transparency and trust in AI
systems, as well as fundamental in helping machine learning models to be
trained ethically. In this paper, we present novel metrics to quantify the
degree to which AI model predictions can be easily explained by their features.
Our metrics summarize different aspects of explainability into scalars,
providing a more comprehensive understanding of model predictions and
facilitating communication between decision-makers and stakeholders, thereby
increasing the overall transparency and accountability of AI systems.
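The abstract does not reproduce the metrics themselves, but the general idea of collapsing feature-level explainability into a scalar can be illustrated. The sketch below is an assumption-laden stand-in, not the paper's metrics: the `explainability_score` helper and its entropy-based score are inventions for illustration. It collapses scikit-learn permutation importances into a single concentration score, where values near 1 mean a few features dominate the prediction.

```python
# Hypothetical sketch: collapse permutation feature importances into a single
# "explainability" scalar via normalized entropy. Lower entropy means the
# prediction is dominated by a few features (arguably easier to explain).
# This is an illustrative stand-in, NOT the metrics defined in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def explainability_score(model, X, y, random_state=0):
    """Return a scalar in [0, 1]: 1 = importance concentrated on one feature."""
    result = permutation_importance(model, X, y, n_repeats=10,
                                    random_state=random_state)
    imp = np.clip(result.importances_mean, 0, None)
    if imp.sum() == 0:
        return 0.0  # no feature matters: nothing to explain with
    p = imp / imp.sum()
    # Normalized Shannon entropy of the importance distribution.
    entropy = -np.sum(p * np.log(p, where=p > 0, out=np.zeros_like(p)))
    return float(1.0 - entropy / np.log(len(p)))

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
print(f"explainability score: {explainability_score(model, X, y):.3f}")
```

A concentration-style scalar like this is easy to compare across models, which matches the paper's stated goal of communicating explainability to decision-makers and stakeholders in a single number.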
Related papers
- Mechanistic Interpretability for AI Safety -- A Review [28.427951836334188]
This review explores mechanistic interpretability.
Mechanistic interpretability could help prevent catastrophic outcomes as AI systems become more powerful and inscrutable.
arXiv Detail & Related papers (2024-04-22T11:01:51Z)
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z)
- Emergent Explainability: Adding a causal chain to neural network inference [0.0]
This position paper presents a theoretical framework for enhancing explainable artificial intelligence (xAI) through emergent communication (EmCom).
We explore the novel integration of EmCom into AI systems, offering a paradigm shift from conventional associative relationships between inputs and outputs to a more nuanced, causal interpretation.
The paper discusses the theoretical underpinnings of this approach, its potential broad applications, and its alignment with the growing need for responsible and transparent AI systems.
arXiv Detail & Related papers (2024-01-29T02:28:39Z)
- Interpretable and Explainable Machine Learning Methods for Predictive Process Monitoring: A Systematic Literature Review [1.3812010983144802]
This paper presents a systematic review on the explainability and interpretability of machine learning (ML) models within the context of predictive process mining.
We provide a comprehensive overview of the current methodologies and their applications across various domains.
Our findings aim to equip researchers and practitioners with a deeper understanding of how to develop and implement more trustworthy, transparent, and effective intelligent systems for process analytics.
arXiv Detail & Related papers (2023-12-29T12:43:43Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Vision Paper: Causal Inference for Interpretable and Robust Machine Learning in Mobility Analysis [71.2468615993246]
Building intelligent transportation systems requires an intricate combination of artificial intelligence and mobility analysis.
The past few years have seen rapid development in transportation applications using advanced deep neural networks.
This vision paper emphasizes research challenges in deep learning-based mobility analysis that require interpretability and robustness.
arXiv Detail & Related papers (2022-10-18T17:28:58Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology for generating counterfactual explanations (a toy sketch of the generic counterfactual idea follows this list).
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
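As a companion to the CEILS entry above: the sketch below illustrates only the generic counterfactual idea (find a small feature change that flips a model's prediction), not the CEILS method itself, which performs interventions in a causally structured latent space. The `find_counterfactual` helper and its greedy coordinate search are assumptions introduced here for illustration.

```python
# Illustrative sketch of a plain counterfactual search: find a small feature
# change that flips a classifier's prediction. This shows only the generic
# idea from the CEILS summary above; the actual CEILS method intervenes in a
# causal latent space, which this toy search does not model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def find_counterfactual(model, x, target=1, step=0.1, max_iter=200):
    """Greedy coordinate search: nudge one feature at a time toward `target`."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf  # prediction flipped: counterfactual found
        p0 = model.predict_proba(x_cf.reshape(1, -1))[0, target]
        best_gain, best_move = 0.0, None
        for j in range(len(x_cf)):
            for delta in (step, -step):
                trial = x_cf.copy()
                trial[j] += delta
                p = model.predict_proba(trial.reshape(1, -1))[0, target]
                if p - p0 > best_gain:
                    best_gain, best_move = p - p0, (j, delta)
        if best_move is None:
            break  # no single-feature nudge improves the target probability
        x_cf[best_move[0]] += best_move[1]
    return None

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)
x = X[y == 0][0]                      # an instance currently classified as 0
cf = find_counterfactual(model, x, target=1)
if cf is not None:
    print("changed features:", np.nonzero(~np.isclose(cf, x))[0])
```

The feasibility concern raised in the CEILS summary is visible even in this toy: nothing constrains the search to realistic or actionable feature changes, which is exactly the gap that intervening in a causal latent representation is meant to close.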