Cognitive Perspectives on Context-based Decisions and Explanations
- URL: http://arxiv.org/abs/2101.10179v1
- Date: Mon, 25 Jan 2021 15:49:52 GMT
- Title: Cognitive Perspectives on Context-based Decisions and Explanations
- Authors: Marcus Westberg, Kary Främling
- Abstract summary: We show that the Contextual Importance and Utility (CIU) method for XAI overlaps with the current wave of action-oriented predictive representational structures.
This has implications for explainable AI, where the goal is to provide explanations of computer decision-making for a human audience.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When human cognition is modeled in Philosophy and Cognitive Science, there is
a pervasive idea that humans employ mental representations in order to navigate
the world and make predictions about outcomes of future actions. By
understanding how these representational structures work, we not only
understand more about human cognition but also gain a better understanding of
how humans rationalise and explain decisions. This has implications for
explainable AI, where the goal is to provide explanations of computer
decision-making for a human audience. We show that the Contextual Importance
and Utility (CIU) method for XAI overlaps with the current wave of
action-oriented predictive representational structures, in ways that make CIU
a reliable tool for creating explanations that humans can relate to and trust.
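For illustration, below is a minimal sketch of how Contextual Importance (CI) and Contextual Utility (CU) are commonly estimated for a single input feature, assuming a black-box model whose output is normalised to [0, 1]. The function and parameter names are illustrative and not taken from the authors' implementation.

    import numpy as np

    def contextual_importance_utility(model, x, feature, feature_range, n_samples=100):
        """Estimate CI and CU of one input feature for the prediction at context x.

        model: callable mapping a 1-D feature vector to a scalar output in [0, 1] (assumed).
        x: the current input (context), a 1-D numpy array.
        feature: index of the feature to analyse.
        feature_range: (min, max) values the feature can take.
        """
        # Vary the chosen feature over its range while keeping the rest of x fixed,
        # and record the resulting model outputs.
        grid = np.linspace(feature_range[0], feature_range[1], n_samples)
        outputs = []
        for v in grid:
            x_mod = x.copy()
            x_mod[feature] = v
            outputs.append(model(x_mod))
        cmin, cmax = min(outputs), max(outputs)

        y = model(x)               # output for the actual context
        absmin, absmax = 0.0, 1.0  # assumed absolute output range

        # CI: how much of the total output range this feature can span in this context.
        ci = (cmax - cmin) / (absmax - absmin)
        # CU: how favourable the current feature value is within that contextual range.
        cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
        return ci, cu

A feature with high CI matters strongly in the given context; a high CU indicates that its current value pushes the output toward the favourable end of that contextual range.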
Related papers
- Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Exploring Effectiveness of Explanations for Appropriate Trust: Lessons
from Cognitive Psychology [3.1945067016153423]
This work draws inspiration from findings in cognitive psychology to understand how effective explanations can be designed.
We identify four components to which explanation designers can pay special attention: perception, semantics, intent, and user & context.
We propose that a significant challenge for effective AI explanations lies in the additional step between generating explanations with algorithms that do not produce interpretable output and communicating those explanations to users.
arXiv Detail & Related papers (2022-10-05T13:40:01Z) - Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - Cognitive science as a source of forward and inverse models of human
decisions for robotics and control [13.502912109138249]
We look at how cognitive science can provide forward models of human decision-making.
We highlight approaches that synthesize blackbox and theory-driven modeling.
We aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research.
arXiv Detail & Related papers (2021-09-01T00:28:28Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Human Evaluation of Interpretability: The Case of AI-Generated Music
Knowledge [19.508678969335882]
We focus on evaluating AI-discovered knowledge/rules in the arts and humanities.
We present an experimental procedure to collect and assess human-generated verbal interpretations of AI-generated music theory/rules rendered as sophisticated symbolic/numeric objects.
arXiv Detail & Related papers (2020-04-15T06:03:34Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.