Social Construction of XAI: Do We Need One Definition to Rule Them All?
- URL: http://arxiv.org/abs/2211.06499v1
- Date: Fri, 11 Nov 2022 22:32:26 GMT
- Title: Social Construction of XAI: Do We Need One Definition to Rule Them All?
- Authors: Upol Ehsan, Mark O. Riedl
- Abstract summary: We argue why a singular definition of XAI is neither feasible nor desirable at this stage of XAI's development.
Forcing a standardization (closure) on the pluralistic interpretations too early can stifle innovation and lead to premature conclusions.
We share how we can leverage the pluralism to make progress in XAI without having to wait for a definitional consensus.
- Score: 18.14698948294366
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There is a growing frustration amongst researchers and developers in
Explainable AI (XAI) around the lack of consensus around what is meant by
'explainability'. Do we need one definition of explainability to rule them all?
In this paper, we argue why a singular definition of XAI is neither feasible
nor desirable at this stage of XAI's development. We view XAI through the
lenses of Social Construction of Technology (SCOT) to explicate how diverse
stakeholders (relevant social groups) have different interpretations
(interpretative flexibility) that shape the meaning of XAI. Forcing a
standardization (closure) on the pluralistic interpretations too early can
stifle innovation and lead to premature conclusions. We share how we can
leverage the pluralism to make progress in XAI without having to wait for a
definitional consensus.
Related papers
- Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
"explainable" Artificial Intelligence (XAI)" and "interpretable AI (IAI)" interchangeably when we apply various XAI tools for a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI. The concept of IAI is beyond the sphere of a dataset. It includes the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z) - Axe the X in XAI: A Plea for Understandable AI [0.0]
I argue that the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation.
It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goal and purposes of XAI.
arXiv Detail & Related papers (2024-03-01T06:28:53Z) - Does Explainable AI Have Moral Value? [0.0]
Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders.
Current discourse often examines XAI in isolation as either a technological tool, user interface, or policy mechanism.
This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity.
arXiv Detail & Related papers (2023-11-05T15:59:27Z) - Understanding Natural Language Understanding Systems. A Critical
Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI)
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - A Means-End Account of Explainable Artificial Intelligence [0.0]
XAI seeks explanations for opaque machine learning methods.
Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal).
arXiv Detail & Related papers (2022-08-09T09:57:42Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - Making Things Explainable vs Explaining: Requirements and Challenges
under the GDPR [2.578242050187029]
ExplanatorY AI (YAI) builds over XAI with the goal to collect and organize explainable information.
We frame the problem of generating explanations for Automated Decision-Making systems (ADMs) as the identification of an appropriate path over an explanatory space.
arXiv Detail & Related papers (2021-10-02T08:48:47Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.