Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI)
- URL: http://arxiv.org/abs/2211.00103v1
- Date: Mon, 31 Oct 2022 19:20:22 GMT
- Authors: Muhammad Suffian, Muhammad Yaseen Khan, Alessandro Bogliolo
- Abstract summary: The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) has recently gained a swell of
interest, as many Artificial Intelligence (AI) practitioners and developers are
compelled to rationalize how such AI-based systems work. Decades back, most XAI
systems were developed as knowledge-based or expert systems. These systems
assumed reasoning for the technical description of an explanation, with little
regard for the user's cognitive capabilities. The emphasis of XAI research
appears to have turned to a more pragmatic explanation approach for better
understanding. An extensive area where cognitive science research may
substantially influence XAI advancements is evaluating user knowledge and
feedback, which are essential for XAI system evaluation. To this end, we
propose a framework to experiment with generating and evaluating the
explanations on the grounds of different cognitive levels of understanding. In
this regard, we adopt Bloom's taxonomy, a widely accepted model for assessing
the user's cognitive capability. We utilize counterfactual explanations as
an explanation-providing medium, coupled with user feedback, to validate the
level of understanding of the explanation at each cognitive level and
improve the explanation-generation methods accordingly.
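A counterfactual explanation answers "what is the smallest change to the input that would flip the model's decision?". As a minimal illustrative sketch only (not the authors' method; the toy loan classifier, feature names, and step size below are all hypothetical), a brute-force search over single-feature perturbations:

```python
def predict(x):
    # Hypothetical toy classifier: approve (1) when income minus debt exceeds 50.
    income, debt = x
    return 1 if income - debt > 50 else 0

def counterfactual(x, step=5, max_iters=100):
    """Greedily nudge one feature at a time until the prediction flips.

    Searches only toward approval (raise income or lower debt), so it
    finds counterfactuals for denied inputs; returns None if no flip
    occurs within max_iters.
    """
    original = predict(x)
    cf = list(x)
    for _ in range(max_iters):
        if predict(cf) != original:
            return cf  # smallest change found that flips the outcome
        # Pick the candidate perturbation with the lowest L1 distance from x.
        candidates = [(cf[0] + step, cf[1]), (cf[0], cf[1] - step)]
        cf = list(min(candidates, key=lambda c: abs(c[0] - x[0]) + abs(c[1] - x[1])))
    return None

print(counterfactual((40, 0)))  # → [55, 0]
```

A returned counterfactual translates directly into a user-facing statement such as "had your income been 55 instead of 40, the loan would have been approved", which can then be phrased to match the user's cognitive level in the proposed framework.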
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate the use of effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- Toward enriched Cognitive Learning with XAI [44.99833362998488]
We introduce an intelligent system (CL-XAI) for Cognitive Learning which is supported by artificial intelligence (AI) tools.
The use of CL-XAI is illustrated with a game-inspired virtual use case where learners tackle problems to enhance problem-solving skills.
arXiv Detail & Related papers (2023-12-19T16:13:47Z)
- How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z)
- Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems [0.8701566919381223]
We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation.
We created a user-interface for our XAI framework and evaluated its efficacy via a human-user experiment.
arXiv Detail & Related papers (2021-06-07T16:38:43Z)
- Principles of Explanation in Human-AI Systems [0.7768952514701895]
Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems.
XAI systems are frequently algorithm-focused; starting and ending with an algorithm that implements a basic untested idea about explainability.
We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems.
arXiv Detail & Related papers (2021-02-09T17:43:45Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Explainable Artificial Intelligence: a Systematic Review [2.741266294612776]
Machine learning has led to the development of highly accurate models that nevertheless lack explainability and interpretability.
A plethora of methods to tackle this problem have been proposed, developed and tested.
This systematic review contributes to the body of knowledge by clustering these methods with a hierarchical classification system.
arXiv Detail & Related papers (2020-05-29T21:41:12Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.