How to choose an Explainability Method? Towards a Methodical
Implementation of XAI in Practice
- URL: http://arxiv.org/abs/2107.04427v1
- Date: Fri, 9 Jul 2021 13:22:58 GMT
- Title: How to choose an Explainability Method? Towards a Methodical
Implementation of XAI in Practice
- Authors: Tom Vermeire and Thibault Laugel and Xavier Renard and David Martens
and Marcin Detyniecki
- Abstract summary: We argue there is a need for a methodology to bridge the gap between stakeholder needs and explanation methods.
We present our ongoing work on creating this methodology to help data scientists in the process of providing explainability to stakeholders.
- Score: 3.974102831754831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability is becoming an important requirement for organizations that
make use of automated decision-making due to regulatory initiatives and a shift
in public awareness. Various and significantly different algorithmic methods to
provide this explainability have been introduced in the field, but the existing
literature in the machine learning community has paid little attention to the
stakeholder, whose needs are rather studied in the human-computer interaction (HCI)
community. Therefore, organizations that want or need to provide this
explainability are confronted with the selection of an appropriate method for
their use case. In this paper, we argue there is a need for a methodology to
bridge the gap between stakeholder needs and explanation methods. We present
our ongoing work on creating this methodology to help data scientists in the
process of providing explainability to stakeholders. In particular, our
contributions include documents used to characterize XAI methods and user
requirements (shown in Appendix), which our methodology builds upon.
Related papers
- Explainability in AI Based Applications: A Framework for Comparing Different Techniques [2.5874041837241304]
In business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy.
This paper proposes a novel method for assessing the agreement between different explainability techniques.
By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
arXiv Detail & Related papers (2024-10-28T09:45:34Z) - Systematic Mapping Protocol -- UX Design role in software development
process [55.2480439325792]
We present a systematic mapping protocol for investigating the role of the UX designer in the software development process.
We define the research questions, scope, sources, search strategy, selection criteria, data extraction, and analysis methods that we will use to conduct the mapping study.
arXiv Detail & Related papers (2024-02-20T16:56:46Z) - Combatting Human Trafficking in the Cyberspace: A Natural Language
Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
arXiv Detail & Related papers (2023-11-22T02:45:01Z) - A Taxonomy of Decentralized Identifier Methods for Practitioners [50.76687001060655]
A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard.
We propose a taxonomy of DID methods with the goal to empower practitioners to make informed decisions when selecting DID methods.
arXiv Detail & Related papers (2023-10-18T13:01:40Z) - Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal [0.0]
This study conducts a thorough review of extant research in Explainable Machine Learning (XML).
Our main objective is to offer a classification of XAI methods within the realm of XML.
We propose a mapping function that takes into account users and their desired properties and suggests a suitable XAI method to them.
arXiv Detail & Related papers (2023-02-07T01:06:38Z) - Explainable Data-Driven Optimization: From Context to Decision and Back
Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - Explainable Deep Reinforcement Learning: State of the Art and Challenges [1.005130974691351]
Interpretability, explainability and transparency are key issues to introducing Artificial Intelligence methods in many critical domains.
This article provides a review of state-of-the-art methods for explainable deep reinforcement learning.
arXiv Detail & Related papers (2023-01-24T11:41:25Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Individual Explanations in Machine Learning Models: A Case Study on
Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose the challenges that arise in such contexts and how they affect the use of relevant and novel explanation methods.
And second, to present a set of strategies that mitigate such challenges when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z) - Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.