VitrAI -- Applying Explainable AI in the Real World
- URL: http://arxiv.org/abs/2102.06518v1
- Date: Fri, 12 Feb 2021 13:44:39 GMT
- Title: VitrAI -- Applying Explainable AI in the Real World
- Authors: Marc Hanussek, Falko Kötter, Maximilien Kintz, Jens Drawehn
- Abstract summary: VitrAI is a web-based service that uniformly demonstrates four different XAI algorithms in the context of three real-life scenarios.
This work reveals practical obstacles when adopting XAI methods and gives qualitative estimates on how well different approaches perform in said scenarios.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With recent progress in the field of Explainable Artificial Intelligence (XAI) and its increasing use in practice, the need arises to evaluate different XAI methods and the quality of their explanations in practical usage scenarios. For this purpose, we present VitrAI, a web-based service that uniformly demonstrates four different XAI algorithms in the context of three real-life scenarios and evaluates their performance and comprehensibility for humans. This work reveals practical obstacles when adopting XAI methods and gives qualitative estimates of how well different approaches perform in said scenarios.
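The abstract does not name the four XAI algorithms or the three scenarios, so the sketch below is only an illustration of the kind of per-prediction explanation a demonstrator like VitrAI would surface. It assumes LIME applied to a toy scikit-learn text classifier; the training texts, labels, and example sentence are all made up.

```python
# Minimal sketch: surfacing a LIME explanation for a text classifier,
# the kind of per-prediction output a demonstrator service displays.
# The classifier, labels, and example text below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

train_texts = ["the bus was late again", "great service, very punctual",
               "my ticket machine was broken", "friendly staff and clean seats"]
train_labels = [0, 1, 0, 1]  # 0 = complaint, 1 = praise (toy labels)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["complaint", "praise"])
explanation = explainer.explain_instance(
    "the bus was broken and late",  # instance to explain
    pipeline.predict_proba,         # black-box probability function
    num_features=4,                 # top words to attribute
)
print(explanation.as_list())        # [(word, weight), ...]
```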
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate the use of effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI [1.628012064605754]
We introduce a novel metric for measuring semantic continuity in Explainable AI methods and machine learning models.
We conduct experiments to observe how incremental changes in input affect the explanations provided by different XAI methods (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-07-17T18:32:41Z)
- How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
Evidence has emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z)
- An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produce highly correlated results, indicating potential redundancy (a minimal sketch of such a correlation analysis appears after this list).
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
- Attribution-based XAI Methods in Computer Vision: A Review [5.076419064097734]
We provide a comprehensive survey of attribution-based XAI methods in computer vision.
We review the existing literature on gradient-based, perturbation-based, and contrastive methods for XAI (a sketch of a simple perturbation-based method appears after this list).
arXiv Detail & Related papers (2022-11-27T05:56:36Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
Evaluating user knowledge and feedback is one extensive area where cognitive science research may substantially influence XAI advancements.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations [58.062003028768636]
Current XAI approaches focus only on delivering a single explanation.
This paper proposes a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder).
The framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation.
arXiv Detail & Related papers (2022-09-02T13:52:39Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
Through experiments in toy and realistic settings, we show empirically how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Introducing and assessing the explainable AI (XAI) method: SIDU [17.127282412294335]
We present a novel XAI visual explanation algorithm denoted SIDU that can effectively localize entire object regions.
We analyze its robustness and effectiveness through various computational and human subject experiments.
arXiv Detail & Related papers (2021-01-26T11:13:50Z)
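On the semantic-continuity entry above: the paper's actual metric is not reproduced here, but the general idea of probing how explanations drift under incremental input changes can be sketched as follows. `explain` stands for any attribution function returning a vector; the cosine-similarity choice, step sizes, and perturbation direction are assumptions for illustration.

```python
# Rough sketch of a semantic-continuity style check (not the paper's metric):
# perturb the input in growing steps and track how much the explanation drifts.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def continuity_curve(x, explain, step=0.01, n_steps=10, seed=0):
    """explain(x) -> attribution vector; returns similarity per perturbation size."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=x.shape)
    direction /= np.linalg.norm(direction)
    base = explain(x)
    sims = []
    for k in range(1, n_steps + 1):
        x_pert = x + k * step * direction
        sims.append(cosine(base, explain(x_pert)))
    return np.array(sims)  # staying near 1.0 means a smooth, continuous explainer

# Toy usage: a linear model's gradient explanation is perfectly continuous.
w = np.array([0.5, -1.0, 2.0])
grad_explain = lambda x: w  # gradient of w @ x is constant everywhere
print(continuity_curve(np.zeros(3), grad_explain))  # all ones
```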
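On the metric-comparison entry above: a minimal sketch of the redundancy analysis it describes, with fabricated scores. If two evaluation metrics rank the XAI methods near-identically (Spearman rho close to +1), they carry largely redundant information. The metric and method names below are placeholders, not the paper's actual 14 metrics or nine methods.

```python
# Sketch of a metric-redundancy check: if two evaluation metrics rank the
# XAI methods almost identically, they are likely redundant.
# All scores below are fabricated for illustration.
import numpy as np
from scipy.stats import spearmanr

methods = ["GradCAM", "LIME", "SHAP", "random_baseline"]
scores = {  # metric name -> score per method (made-up numbers)
    "faithfulness": [0.71, 0.64, 0.69, 0.12],
    "robustness":   [0.55, 0.40, 0.52, 0.08],
    "complexity":   [0.30, 0.80, 0.45, 0.90],
}

names = list(scores)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, _ = spearmanr(scores[names[i]], scores[names[j]])
        print(f"{names[i]} vs {names[j]}: Spearman rho = {rho:+.2f}")
```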
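On the attribution-methods review above: occlusion is a textbook example of the perturbation-based family it covers. The sketch below is a generic, model-agnostic version, not code from the review; `score_fn` is a hypothetical callable returning the class score of interest.

```python
# Sketch of occlusion, a basic perturbation-based attribution method:
# regions whose masking hurts the class score most are deemed most important.
import numpy as np

def occlusion_map(image, score_fn, patch=8, fill=0.0):
    """image: (H, W) array; score_fn(image) -> scalar class score (hypothetical)."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # mask one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat  # high value = patch mattered for the prediction

# Toy usage: the "model" just sums the top-left quadrant of a 16x16 image.
img = np.ones((16, 16))
score = lambda x: float(x[:8, :8].sum())
print(occlusion_map(img, score, patch=8))  # only the top-left cell is nonzero
```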