Understanding the Unforeseen via the Intentional Stance
- URL: http://arxiv.org/abs/2211.00478v1
- Date: Tue, 1 Nov 2022 14:14:14 GMT
- Title: Understanding the Unforeseen via the Intentional Stance
- Authors: Stephanie Stacy, Alfredo Gabaldon, John Karigiannis, James Kubrich,
Peter Tu
- Abstract summary: We present an architecture and system for understanding novel behaviors of an observed agent.
The two main features of our approach are the adoption of Dennett's intentional stance and analogical reasoning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an architecture and system for understanding novel
behaviors of an observed agent. The two main features of our approach are the
adoption of Dennett's intentional stance and the use of analogical reasoning as
one of the main computational mechanisms for understanding unforeseen
experiences. Our approach uses analogy with past experiences to construct
hypothetical rationales that explain the behavior of an observed agent.
Moreover, we treat analogies as partial, so multiple past experiences can be
blended to analogically explain an unforeseen event, leading to greater
inferential flexibility. We argue that this approach yields more meaningful
explanations of observed behavior than approaches based on surface-level
comparisons. A key advantage of behavior explanation over classification is the
ability to i) respond appropriately based on reasoning and ii) make non-trivial
predictions that allow the hypothesized explanation to be verified. We provide
a simple use case demonstrating novel experience understanding through analogy
in a gas station environment.
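To make the blending of partial analogies concrete, here is a minimal,
hypothetical sketch (not the authors' implementation): past experiences are
stored as feature sets paired with rationales, and every sufficiently
analogous experience contributes its rationale to a composite explanation of
a novel observation. All names and the gas-station features below are
illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    """A remembered episode: observed features plus the rationale inferred for them."""
    features: set[str]
    rationale: str

def partial_match(experience: Experience, observation: set[str]) -> float:
    """Fraction of the experience's features present in the new observation."""
    if not experience.features:
        return 0.0
    return len(experience.features & observation) / len(experience.features)

def explain_by_analogy(memory: list[Experience],
                       observation: set[str],
                       threshold: float = 0.5) -> list[tuple[str, float]]:
    """Blend all sufficiently analogous past experiences into a composite explanation.

    Each analogy is partial, so several rationales may jointly explain one event.
    """
    scored = [(exp.rationale, partial_match(exp, observation)) for exp in memory]
    return sorted(((r, s) for r, s in scored if s >= threshold),
                  key=lambda rs: -rs[1])

# Toy gas-station scenario: the observed agent parks at a pump, enters the
# shop, and pays -- no single past episode matches the observation exactly.
memory = [
    Experience({"park_at_pump", "open_fuel_door", "insert_nozzle"},
               "agent intends to refuel"),
    Experience({"enter_shop", "browse_shelves", "pay_at_counter"},
               "agent intends to buy goods"),
    Experience({"park_at_pump", "enter_shop", "pay_at_counter"},
               "agent prepays for fuel"),
]
observation = {"park_at_pump", "enter_shop", "pay_at_counter", "return_to_car"}
for rationale, score in explain_by_analogy(memory, observation):
    print(f"{score:.2f}  {rationale}")
```

Because each rationale carries commitments about what the agent will do next
(e.g. "prepays for fuel" predicts a return to the pump), the hypothesized
explanation can be checked against subsequent behavior, which is the
verification advantage the abstract highlights.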
Related papers
- Toward Understanding In-context vs. In-weight Learning [50.24035812301655]
We identify simplified distributional properties that give rise to the emergence and disappearance of in-context learning.
We then extend the study to a full large language model, showing how fine-tuning on various collections of natural language prompts can elicit similar in-context and in-weight learning behaviour.
arXiv Detail & Related papers (2024-10-30T14:09:00Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, to help select the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z)
- Towards Relatable Explainable AI with the Perceptual Process [5.581885362337179]
We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
arXiv Detail & Related papers (2021-12-28T05:48:53Z)
- Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z)
- A Description Logic for Analogical Reasoning [28.259681405091666]
We present a mechanism to infer plausible missing knowledge, which relies on reasoning by analogy.
This is the first paper to study analogical reasoning within the setting of description logic.
arXiv Detail & Related papers (2021-05-10T19:06:07Z)
- Analogy as Nonparametric Bayesian Inference over Relational Systems [10.736626320566705]
We propose a Bayesian model that generalizes relational knowledge to novel environments by analogically weighting predictions from previously encountered relational structures.
We show that this learner outperforms a naive, theory-based learner on relational data derived from random- and Wikipedia-based systems when experience with the environment is limited; a toy sketch of this posterior-weighted blending appears after this list.
arXiv Detail & Related papers (2020-06-07T14:07:46Z)
- Towards Analogy-Based Explanations in Machine Learning [3.1410342959104725]
We argue that analogical reasoning is no less interesting from an interpretability and explainability point of view.
An analogy-based approach is a viable alternative to existing approaches in the realm of explainable AI and interpretable machine learning.
arXiv Detail & Related papers (2020-05-23T06:41:35Z)
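As noted above, here is a toy sketch of analogical prediction as
posterior-weighted blending over previously encountered relational structures.
This is a hypothetical illustration, not the cited paper's nonparametric
model; the structures, priors, and likelihoods are made-up values.

```python
import math

# Each previously encountered relational structure offers a prediction for the
# novel environment and carries a prior weight. (Toy, assumed values.)
structures = {
    "hierarchy": {"prior": 0.5, "predict": 0.9},
    "ring":      {"prior": 0.3, "predict": 0.2},
    "clique":    {"prior": 0.2, "predict": 0.7},
}

# Toy evidence: log-likelihood of the observations so far under each structure.
loglik = {"hierarchy": -1.0, "ring": -3.0, "clique": -1.5}

# Posterior over structures via Bayes' rule (prior x likelihood, normalized).
unnorm = {s: v["prior"] * math.exp(loglik[s]) for s, v in structures.items()}
z = sum(unnorm.values())
posterior = {s: w / z for s, w in unnorm.items()}

# Analogical prediction: blend each structure's prediction by its posterior.
prediction = sum(posterior[s] * structures[s]["predict"] for s in structures)
print(f"P(next relation holds) = {prediction:.3f}")
```

With little evidence the prior dominates and predictions stay diffuse; as
observations accumulate, the posterior concentrates on the best analog. This
is consistent with the cited finding that analogical weighting helps most
when experience with the environment is limited.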