Argumentative XAI: A Survey
- URL: http://arxiv.org/abs/2105.11266v1
- Date: Mon, 24 May 2021 13:32:59 GMT
- Title: Argumentative XAI: A Survey
- Authors: Kristijonas Čyras, Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni
- Abstract summary: We overview XAI approaches built using methods from the field of computational argumentation.
We focus on different types of explanation (intrinsic and post-hoc), different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use.
- Score: 15.294433619347082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable AI (XAI) has been investigated for decades and, together with AI
itself, has witnessed unprecedented growth in recent years. Among various
approaches to XAI, argumentative models have been advocated in both the AI and
social science literature, as their dialectical nature appears to match some
basic desirable features of the explanation activity. In this survey we
overview XAI approaches built using methods from the field of computational
argumentation, leveraging its wide array of reasoning abstractions and
explanation delivery methods. We overview the literature focusing on different
types of explanation (intrinsic and post-hoc), different models with which
argumentation-based explanations are deployed, different forms of delivery, and
different argumentation frameworks they use. We also lay out a roadmap for
future work.
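To make the survey's core abstraction concrete, here is a minimal sketch, assuming a Dung-style abstract argumentation framework (a set of arguments plus an attack relation); it is illustrative only and not code from the paper. It computes the grounded extension, the sceptical set of acceptable arguments, by repeatedly adding every argument whose attackers are all counter-attacked.

```python
# Minimal sketch of a Dung-style abstract argumentation framework (AF).
# The grounded extension is the least fixed point of the characteristic
# function F, where F(S) contains every argument all of whose attackers
# are themselves attacked by some member of S. Illustrative toy only.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the AF (arguments, attacks).

    arguments: iterable of hashable argument identifiers
    attacks:   set of (attacker, target) pairs
    """
    arguments = set(arguments)
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, current):
        # Every attacker of 'candidate' must be attacked by something in 'current'.
        return all(any((d, b) in attacks for d in current)
                   for b in attackers_of[candidate])

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new


if __name__ == "__main__":
    # a attacks b, b attacks c: a is unattacked and defends c against b.
    print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```

In argumentative XAI, extensions like this (and the attack/defence paths behind them) are the raw material that is turned into explanations of why a conclusion is or is not accepted.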
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate the black-box nature of Transformer models by leveraging improved explanation techniques.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
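As a rough intuition for what a second-order explanation of similarity looks like, the sketch below uses a linear embedding f(x) = Wx, for which the similarity f(x)ᵀf(x′) decomposes exactly over pairs of input features; this is a simplified, hypothetical illustration, not the BiLRP procedure, which propagates relevance through deep networks layer by layer.

```python
# Toy second-order attribution for a *linear* embedding f(x) = W x.
# Then s(x, x') = f(x)^T f(x') = sum_ij x_i (W^T W)_ij x'_j, so the matrix
# R[i, j] = x_i * (W^T W)[i, j] * x'_j attributes the similarity score to
# pairs of input features (one from each input). For intuition only.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_emb = 5, 3
W = rng.normal(size=(d_emb, d_in))          # toy embedding weights
x, x_prime = rng.normal(size=d_in), rng.normal(size=d_in)

similarity = (W @ x) @ (W @ x_prime)        # dot product of the two embeddings
R = np.outer(x, x_prime) * (W.T @ W)        # R[i, j]: contribution of pair (i, j)

assert np.isclose(R.sum(), similarity)      # the decomposition is exact
print(np.round(R, 3))
```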
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? [0.0]
We give a systematic analysis of explainable artificial intelligence (XAI) in healthcare.
The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed.
We explain how trustworthy AI can be achieved by describing AI models used in healthcare.
arXiv Detail & Related papers (2023-04-10T17:40:21Z)
- Black Box Model Explanations and the Human Interpretability Expectations -- An Analysis in the Context of Homicide Prediction [0.5898893619901381]
Strategies based on Explainable Artificial Intelligence (XAI) have promoted better human interpretability of the results of black box models.
This research addresses a real-world classification problem related to homicide prediction.
It used 6 different XAI methods to generate explanations, which were evaluated by 6 different human experts.
arXiv Detail & Related papers (2022-10-19T19:23:48Z)
- INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations [58.062003028768636]
Current XAI approaches focus only on delivering a single explanation.
This paper proposes a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder).
Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation.
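The two-step design can be pictured with a generic conditional VAE: condition on an encoded input (and a predicted label), then sample several latent codes to obtain diverse evidence. The PyTorch sketch below is a hypothetical stand-in for illustration only; `ConditionalVAE`, its dimensions and layers are invented here and are not the INTERACTION architecture or its training objective.

```python
# Generic conditional-VAE sketch: sample diverse "evidence" vectors
# conditioned on an input/label encoding. Hypothetical illustration only,
# not the INTERACTION model from the paper.
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, cond_dim=16, expl_dim=32, latent_dim=8):
        super().__init__()
        # Encoder q(z | explanation, condition) -> mean and log-variance
        self.encode = nn.Sequential(nn.Linear(expl_dim + cond_dim, 64), nn.ReLU(),
                                    nn.Linear(64, 2 * latent_dim))
        # Decoder p(explanation | z, condition)
        self.decode = nn.Sequential(nn.Linear(latent_dim + cond_dim, 64), nn.ReLU(),
                                    nn.Linear(64, expl_dim))
        self.latent_dim = latent_dim

    def forward(self, expl, cond):
        mu, logvar = self.encode(torch.cat([expl, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        return self.decode(torch.cat([z, cond], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample_diverse(self, cond, n=5):
        # Step two in spirit: several latent draws give varied evidence
        # vectors for the same (input, predicted label) condition.
        z = torch.randn(n, self.latent_dim)
        return self.decode(torch.cat([z, cond.expand(n, -1)], dim=-1))

if __name__ == "__main__":
    model = ConditionalVAE()
    condition = torch.randn(1, 16)                 # e.g. encoded premise/hypothesis + label
    print(model.sample_diverse(condition).shape)   # torch.Size([5, 32])
```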
arXiv Detail & Related papers (2022-09-02T13:52:39Z)
- Causality-Inspired Taxonomy for Explainable Artificial Intelligence [10.241230325171143]
We propose a novel causality-inspired framework for xAI that creates an environment for the development of xAI approaches.
We have analysed 81 research papers on a myriad of biometric modalities and different tasks.
arXiv Detail & Related papers (2022-08-19T18:26:35Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Explanatory Pluralism in Explainable AI [0.0]
I chart a taxonomy of types of explanation and the associated XAI methods that can address them.
When we look to expose the inner mechanisms of AI models, we produce Diagnostic-explanations.
When we wish to form stable generalizations of our models, we produce Expectation-explanations.
Finally, when we want to justify the usage of a model, we produce Role-explanations.
arXiv Detail & Related papers (2021-06-26T09:02:06Z)
- Machine Reasoning Explainability [100.78417922186048]
Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning.
Early studies in MR notably initiated inquiries into Explainable AI (XAI).
This document reports our work in-progress on MR explainability.
arXiv Detail & Related papers (2020-09-01T13:45:05Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work aimed at achieving Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)