Charting the Sociotechnical Gap in Explainable AI: A Framework to
Address the Gap in XAI
- URL: http://arxiv.org/abs/2302.00799v1
- Date: Wed, 1 Feb 2023 23:21:45 GMT
- Title: Charting the Sociotechnical Gap in Explainable AI: A Framework to
Address the Gap in XAI
- Authors: Upol Ehsan, Koustuv Saha, Munmun De Choudhury, Mark O. Riedl
- Abstract summary: We argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability.
We empirically derive a framework that facilitates systematic charting of the sociotechnical gap.
By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
- Score: 29.33534897830558
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainable AI (XAI) systems are sociotechnical in nature; thus, they are
subject to the sociotechnical gap, the divide between technical affordances and
social needs. However, charting this gap is challenging. In the context of
XAI, we argue that charting the gap improves our problem understanding, which
can reflexively provide actionable insights to improve explainability.
Utilizing two case studies in distinct domains, we empirically derive a
framework that facilitates systematic charting of the sociotechnical gap by
connecting AI guidelines in the context of XAI and elucidating how to use them
to address the gap. We apply the framework to a third case in a new domain,
showcasing its affordances. Finally, we discuss conceptual implications of the
framework, share practical considerations in its operationalization, and offer
guidance on transferring it to new contexts. By making conceptual and practical
contributions to understanding the sociotechnical gap in XAI, the framework
expands the XAI design space.
Related papers
- Beyond XAI: Obstacles Towards Responsible AI [0.0]
Methods of explainability and their evaluation strategies present numerous limitations in real-world contexts.
In this paper, we explore these limitations and discuss their implications in a broader context of responsible AI.
arXiv Detail & Related papers (2023-09-07T11:08:14Z) - Is Task-Agnostic Explainable AI a Myth? [0.0]
Our work serves as a framework for unifying the challenges of contemporary explainable AI (XAI).
We demonstrate that while XAI methods provide supplementary and potentially useful output for machine learning models, researchers and decision-makers should be mindful of their conceptual and technical limitations.
We examine three XAI research avenues spanning image, textual, and graph data, covering saliency, attention, and graph-type explainers.
arXiv Detail & Related papers (2023-07-13T07:48:04Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Explainable Artificial Intelligence in Construction: The Content,
Context, Process, Outcome Evaluation Framework [1.3375143521862154]
We develop a content, context, process, and outcome evaluation framework that can be used to justify the adoption and effective management of XAI.
While our novel framework is conceptual, it provides a frame of reference for construction organisations to make headway toward realising XAI business value and benefits.
arXiv Detail & Related papers (2022-11-12T03:50:14Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic approaches to explanation for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - XAI Handbook: Towards a Unified Framework for Explainable AI [5.716475756970092]
The field of explainable AI (XAI) has quickly become a thriving and prolific community.
Each new contribution seems to rely on its own (and often intuitive) version of terms like "explanation" and "interpretation".
We propose a theoretical framework that not only provides concrete definitions for these terms, but also outlines all steps necessary to produce explanations and interpretations.
arXiv Detail & Related papers (2021-05-14T07:28:21Z) - Expanding Explainability: Towards Social Transparency in AI systems [20.41177660318785]
Social Transparency (ST) is a socio-technically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making.
Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.
arXiv Detail & Related papers (2021-01-12T19:44:27Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)