Responsibility: An Example-based Explainable AI approach via Training Process Inspection
- URL: http://arxiv.org/abs/2209.03433v1
- Date: Wed, 7 Sep 2022 19:30:01 GMT
- Title: Responsibility: An Example-based Explainable AI approach via Training Process Inspection
- Authors: Faraz Khadivpour, Arghasree Banerjee, Matthew Guzdial
- Abstract summary: We present a novel XAI approach that identifies the most responsible training example for a particular decision.
This example can then be shown as an explanation: "this is what I (the AI) learned that led me to do that".
Our results demonstrate that responsibility can help improve accuracy for both human end users and secondary ML models.
- Score: 1.4610038284393165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable Artificial Intelligence (XAI) methods are intended to help human users better understand the decision making of an AI agent. However, many modern XAI approaches are unintuitive to end users, particularly those without prior AI or ML knowledge. In this paper, we present a novel XAI approach we call Responsibility that identifies the most responsible training example for a particular decision. This example can then be shown as an explanation: "this is what I (the AI) learned that led me to do that". We present experimental results across a number of domains along with the results of an Amazon Mechanical Turk user study, comparing responsibility and existing XAI methods on an image classification task. Our results demonstrate that responsibility can help improve accuracy for both human end users and secondary ML models.
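The abstract does not spell out how responsibility is computed during training, so the following is only a minimal sketch of the general idea: credit each training example with the prediction confidence it contributed during SGD, then surface the highest-credit example of the predicted class as the explanation. The toy data, logistic-regression model, and scoring rule below are illustrative assumptions, not the authors' exact Responsibility measure.

```python
# Sketch: example-based explanation via training process inspection.
# Assumption: each training example is credited with the confidence gain
# its own SGD update produced; this stands in for the paper's responsibility score.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs for binary classification.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0              # logistic-regression parameters
lr = 0.1
responsibility = np.zeros(len(X))    # accumulated credit per training example

def prob_of_class1(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

for _ in range(5):                   # a few passes over the training data
    for i in rng.permutation(len(X)):
        p = prob_of_class1(X[i])
        before = p if y[i] == 1 else 1.0 - p
        grad = p - y[i]              # d(log-loss)/d(logit) for this example
        w -= lr * grad * X[i]
        b -= lr * grad
        p = prob_of_class1(X[i])
        after = p if y[i] == 1 else 1.0 - p
        responsibility[i] += after - before   # confidence gained from this update

# Explain a new decision by returning the most responsible training example
# among those sharing the predicted label.
x_new = np.array([1.5, 2.0])
pred = int(prob_of_class1(x_new) > 0.5)
candidates = np.where(y == pred)[0]
most_responsible = candidates[np.argmax(responsibility[candidates])]
print(f"Predicted class {pred}; most responsible training example: {X[most_responsible]}")
```

In the paper's framing, the example returned at the end is what would be shown to the user as "this is what I (the AI) learned that led me to do that".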
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate using effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- XAI for All: Can Large Language Models Simplify Explainable AI? [0.0699049312989311]
"x-[plAIn]" is a new approach to make XAI more accessible to a wider audience through a custom Large Language Model.
Our goal was to design a model that can generate clear, concise summaries of various XAI methods.
Results from our use-case studies show that our model is effective in providing easy-to-understand, audience-specific explanations.
arXiv Detail & Related papers (2024-01-23T21:47:12Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
A lack of explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Explainable Artificial Intelligence Approaches: A Survey [0.22940141855172028]
A lack of explainability in decisions made by Artificial Intelligence based "black box" systems/models is a key stumbling block for adopting AI in high-stakes applications.
We demonstrate popular Explainable Artificial Intelligence (XAI) methods with a mutual case study/task.
We analyze their competitive advantages from multiple perspectives.
We recommend paths towards responsible or human-centered AI using XAI as a medium.
arXiv Detail & Related papers (2021-01-23T06:15:34Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)