What Do We Want From Explainable Artificial Intelligence (XAI)? -- A
Stakeholder Perspective on XAI and a Conceptual Model Guiding
Interdisciplinary XAI Research
- URL: http://arxiv.org/abs/2102.07817v1
- Date: Mon, 15 Feb 2021 19:54:33 GMT
- Title: What Do We Want From Explainable Artificial Intelligence (XAI)? -- A
Stakeholder Perspective on XAI and a Conceptual Model Guiding
Interdisciplinary XAI Research
- Authors: Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena
K\"astner, Eva Schmidt, Andreas Sesing, Kevin Baum
- Abstract summary: The main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems.
It often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata.
- Score: 0.8707090176854576
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Previous research in Explainable Artificial Intelligence (XAI) suggests that
a main aim of explainability approaches is to satisfy specific interests,
goals, expectations, needs, and demands regarding artificial systems (we call
these stakeholders' desiderata) in a variety of contexts. However, the
literature on XAI is vast and spread across multiple largely disconnected
disciplines, and it often remains unclear how explainability approaches are
supposed to achieve the goal of satisfying stakeholders' desiderata. This paper
discusses the main classes of stakeholders calling for explainability of
artificial systems and reviews their desiderata. We provide a model that
explicitly spells out the main concepts and relations necessary to consider and
investigate when evaluating, adjusting, choosing, and developing explainability
approaches that aim to satisfy stakeholders' desiderata. This model can serve
researchers from the variety of different disciplines involved in XAI as a
common ground. It emphasizes where there is interdisciplinary potential in the
evaluation and the development of explainability approaches.
Related papers
- Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction [5.417632175667161]
Explainable Artificial Intelligence (XAI) addresses the opacity of AI models by providing explanations for how these models make decisions and predictions.
Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques.
This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas.
arXiv Detail & Related papers (2024-08-30T21:42:17Z)
- Interdisciplinary Expertise to Advance Equitable Explainable AI [3.4195896673488395]
In this paper, we focus on explainable AI (XAI) and describe a framework for interdisciplinary expert panel review.
We emphasize the importance of the interdisciplinary expert panel to produce more accurate, equitable interpretations.
arXiv Detail & Related papers (2024-05-29T17:45:38Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space [6.786321327136925]
Brain-Computer Interfaces use predictive models to interpret brain signals for various high-stakes applications.
The literature on XAI for BCIs lacks an integrated perspective.
This review paper provides one, surveying Explainable Artificial Intelligence techniques applied to Brain-Computer Interfaces.
arXiv Detail & Related papers (2023-12-20T13:56:31Z)
- A Critical Survey on Fairness Benefits of Explainable AI [10.81142163495028]
We identify seven archetypal claims from 175 scientific articles on the alleged fairness benefits of XAI.
We notice that claims are often vague and simplistic, lacking normative grounding, or poorly aligned with the actual capabilities of XAI.
We suggest conceiving of XAI not as an ethical panacea but as one of many tools for approaching the multidimensional, sociotechnical challenge of algorithmic fairness.
arXiv Detail & Related papers (2023-10-15T08:17:45Z)
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI) [0.0]
We argue that evaluating XAI is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk.
This work aims to advance the field of Requirements Engineering for AI.
arXiv Detail & Related papers (2023-07-26T15:15:44Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of the actions needed to realize the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations (a minimal sketch of the underlying idea appears after this list).
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
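
The CEILS entry above is the one concrete generation technique in this list, so a small illustration may help. Below is a minimal sketch of latent-space counterfactual search, assuming a pretrained autoencoder and classifier; the toy modules, dimensions, iteration count, and penalty weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of counterfactual search in latent space, in the spirit of
# CEILS. All models below are untrained toy stand-ins; a real application
# would use a pretrained autoencoder and classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical pretrained components: an autoencoder that captures the data
# manifold, and a classifier whose decision we want to flip.
encoder = nn.Sequential(nn.Linear(10, 4))      # x -> z
decoder = nn.Sequential(nn.Linear(4, 10))      # z -> x'
classifier = nn.Sequential(nn.Linear(10, 2))   # x -> class logits

x = torch.randn(1, 10)          # the instance to explain
target = torch.tensor([1])      # desired outcome class

z = encoder(x).detach()
delta = torch.zeros_like(z, requires_grad=True)  # latent intervention
opt = torch.optim.Adam([delta], lr=0.05)
ce = nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    x_cf = decoder(z + delta)   # decode the intervened latent point
    # Push the classifier toward the target class while keeping the
    # intervention small, so the counterfactual stays near the data
    # manifold (a crude proxy for the paper's feasibility concern).
    loss = ce(classifier(x_cf), target) + 0.1 * delta.norm() ** 2
    loss.backward()
    opt.step()

x_cf = decoder(z + delta).detach()
print("original prediction:", classifier(x).argmax(dim=1).item())
print("counterfactual prediction:", classifier(x_cf).argmax(dim=1).item())
print("feature changes:", (x_cf - x).squeeze())
```

The abstract frames counterfactuals as interventions, so a faithful implementation would also encode the causal relations among features when deciding which latent changes count as feasible actions; the quadratic penalty above is only a stand-in for that constraint.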