Explaining AI as an Exploratory Process: The Peircean Abduction Model
- URL: http://arxiv.org/abs/2009.14795v2
- Date: Thu, 1 Oct 2020 16:43:24 GMT
- Title: Explaining AI as an Exploratory Process: The Peircean Abduction Model
- Authors: Robert R. Hoffman, William J. Clancey, and Shane T. Mueller
- Abstract summary: Abductive inference has been defined in many ways.
The challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked.
This analysis provides a theoretical framework for understanding what the XAI researchers are already doing.
- Score: 0.2676349883103404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current discussions of "Explainable AI" (XAI) do not much consider the role
of abduction in explanatory reasoning (see Mueller, et al., 2018). It might be
worthwhile to pursue this, to develop intelligent systems that allow for the
observation and analysis of abductive reasoning and the assessment of abductive
reasoning as a learnable skill. Abductive inference has been defined in many
ways. For example, it has been defined as the achievement of insight. Most
often abduction is taken as a single, punctuated act of syllogistic reasoning,
like making a deductive or inductive inference from given premises. In
contrast, the originator of the concept of abduction---the American
scientist/philosopher Charles Sanders Peirce---regarded abduction as an
exploratory activity. In this regard, Peirce's insights about reasoning align
with conclusions from modern psychological research. Since abduction is often
defined as "inferring the best explanation," the challenge of implementing
abductive reasoning and the challenge of automating the explanation process are
closely linked. We explore these linkages in this report. This analysis
provides a theoretical framework for understanding what the XAI researchers are
already doing, it explains why some XAI projects are succeeding (or might
succeed), and it leads to design advice.
Related papers
- Understanding XAI Through the Philosopher's Lens: A Historical Perspective [5.839350214184222]
We show that a gradual progression has independently occurred in both domains from logical-deductive to statistical models of explanation.
Similar concepts have independently emerged in both, for example the relation between explanation and understanding and the importance of pragmatic factors.
arXiv Detail & Related papers (2024-07-26T14:44:49Z) - Axe the X in XAI: A Plea for Understandable AI [0.0]
I argue that the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation.
It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goal and purposes of XAI.
arXiv Detail & Related papers (2024-03-01T06:28:53Z) - Implicit Chain of Thought Reasoning via Knowledge Distillation [58.80851216530288]
Instead of explicitly producing the chain of thought reasoning steps, we use the language model's internal hidden states to perform implicit reasoning.
We find that this approach enables solving tasks previously not solvable without explicit chain-of-thought, at a speed comparable to no chain-of-thought.
arXiv Detail & Related papers (2023-11-02T17:59:49Z) - Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - Towards Relatable Explainable AI with the Perceptual Process [5.581885362337179]
We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
arXiv Detail & Related papers (2021-12-28T05:48:53Z) - Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z) - Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability [0.0]
We consider two issues currently faced by Artificial Intelligence development: the lack of ethics and interpretability of AI decisions.
We experimentally show that the empirical and liberal turn of the production of explanations tends to select AI explanations with a low denunciatory power.
We propose two scenarios for the future development of ethical AI: more external regulation or more liberalization of AI explanations.
arXiv Detail & Related papers (2021-09-20T14:41:50Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Explainable AI without Interpretable Model [0.0]
It has become more important than ever that AI systems be able to explain the reasoning behind their results to end-users.
Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations.
The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly.
arXiv Detail & Related papers (2020-09-29T13:29:44Z) - Machine Reasoning Explainability [100.78417922186048]
Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning.
Studies in early MR notably started inquiries into Explainable AI (XAI).
This document reports our work in-progress on MR explainability.
arXiv Detail & Related papers (2020-09-01T13:45:05Z)