Towards the Role of Theory of Mind in Explanation
- URL: http://arxiv.org/abs/2005.02963v1
- Date: Wed, 6 May 2020 17:13:46 GMT
- Title: Towards the Role of Theory of Mind in Explanation
- Authors: Maayan Shvo, Toryn Q. Klassen, Sheila A. McIlraith
- Abstract summary: Theory of Mind is the ability to attribute mental states (e.g., beliefs, goals) to oneself, and to others.
Previous work has observed that Theory of Mind capabilities are central to providing an explanation to another agent.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Theory of Mind is commonly defined as the ability to attribute mental states
(e.g., beliefs, goals) to oneself, and to others. A large body of previous work
- from the social sciences to artificial intelligence - has observed that
Theory of Mind capabilities are central to providing an explanation to another
agent or to explaining that agent's behaviour. In this paper, we build and
expand upon previous work by providing an account of explanation in terms of
the beliefs of agents and the mechanism by which agents revise their beliefs
given possible explanations. We further identify a set of desiderata for
explanations that utilize Theory of Mind. These desiderata inform our
belief-based account of explanation.
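The belief-revision view described in the abstract can be sketched as a toy program. This is a hypothetical illustration, not the paper's formal account: beliefs are modeled as sets of propositional literals, and an explanation "succeeds" when, after naive revision, the explainee believes the explanandum.

```python
# Illustrative sketch (not the paper's formalism): explanation selection as
# belief revision over sets of propositional literals, where "not p" negates "p".

def revise(beliefs, explanation):
    """Naive revision: drop beliefs contradicted by the explanation, then add it."""
    negations = {p[4:] if p.startswith("not ") else f"not {p}" for p in explanation}
    return (beliefs - negations) | explanation

def explains(beliefs, explanation, explanandum):
    """An explanation succeeds if, after revision, the explainee believes the explanandum."""
    return explanandum <= revise(beliefs, explanation)

explainee = {"not raining", "grass dry"}
candidate = {"raining", "sprinkler off"}
print(explains(explainee, candidate, {"raining"}))  # True: revision adds 'raining'
```

A fuller account would also rank candidate explanations by how plausible the explainee finds them, which is where the paper's Theory of Mind desiderata come in.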
Related papers
- Grounding Language about Belief in a Bayesian Theory-of-Mind [5.058204320571824]
We take a step towards an answer by grounding the semantics of belief statements in a Bayesian theory-of-mind.
By modeling how humans jointly infer coherent sets of goals, beliefs, and plans, our framework provides a conceptual role semantics for belief.
We evaluate this framework by studying how humans attribute goals and beliefs while watching an agent solve a doors-and-keys gridworld puzzle.
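The joint inference described above can be illustrated with a minimal Bayesian goal-inference sketch. The goals, prior, and noisy-rational likelihood below are invented for illustration and are not the paper's model or gridworld.

```python
# Minimal Bayesian goal inference: P(goal | obs) ∝ P(obs | goal) * P(goal).
# All names and numbers here are illustrative assumptions.

def posterior(goals, prior, likelihood, observations):
    scores = {}
    for g in goals:
        p = prior[g]
        for obs in observations:
            p *= likelihood(obs, g)  # accumulate evidence from each observed action
        scores[g] = p
    z = sum(scores.values())
    return {g: p / z for g, p in scores.items()}

goals = ["red_door", "blue_door"]
prior = {"red_door": 0.5, "blue_door": 0.5}

def likelihood(action, goal):
    # Noisy-rational observer: actions toward a goal are more probable under it.
    return 0.8 if action == f"step_toward_{goal}" else 0.2

post = posterior(goals, prior, likelihood, ["step_toward_red_door"] * 2)
# Two steps toward the red door make "red_door" the dominant hypothesis.
```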
arXiv Detail & Related papers (2024-02-16T02:47:09Z) - On Computational Mechanisms for Shared Intentionality, and Speculation
on Rationality and Consciousness [0.0]
A singular attribute of humankind is our ability to undertake novel, cooperative behavior, or teamwork.
This requires that we can communicate goals, plans, and ideas between the brains of individuals to create shared intentionality.
I derive necessary characteristics of basic mechanisms to enable shared intentionality between prelinguistic computational agents.
arXiv Detail & Related papers (2023-06-03T21:31:38Z) - Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play
Multi-Character Belief Tracker [72.09076317574238]
ToM is a plug-and-play approach to investigating the belief states of characters in reading comprehension.
We show that ToM enhances off-the-shelf neural network theory of mind in a zero-shot setting while showing robust out-of-distribution performance compared to supervised baselines.
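Tracking characters' beliefs can be illustrated with a toy Sally-Anne style tracker. This is an invented example, not the paper's graph-based machinery: only agents who witness an event update their belief about an object's location.

```python
# Toy zeroth/first-order belief tracker (illustrative, not the paper's method).
# Events: ("move", actor, obj, loc), ("leave", agent), ("enter", agent).

def track_beliefs(events, agents):
    world = {}                            # true object locations
    beliefs = {a: {} for a in agents}     # agent -> {object: believed location}
    present = set(agents)
    for ev in events:
        if ev[0] == "leave":
            present.discard(ev[1])
        elif ev[0] == "enter":
            present.add(ev[1])
        elif ev[0] == "move":
            _, actor, obj, loc = ev
            world[obj] = loc
            for a in present | {actor}:   # only witnesses update their belief
                beliefs[a][obj] = loc
    return world, beliefs

events = [("move", "sally", "ball", "basket"),
          ("leave", "sally"),
          ("move", "anne", "ball", "box")]
world, beliefs = track_beliefs(events, ["sally", "anne"])
# world["ball"] is "box", but Sally still believes it is in the basket.
```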
arXiv Detail & Related papers (2023-06-01T17:24:35Z) - Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode information about others, and hierarchical attention to selectively retrieve it.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
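The memory-plus-attention idea can be sketched in miniature. The dimensions, keys, and values below are invented for illustration and do not reflect the paper's architecture: past observations of an actor are stored as key/value vectors, and a query retrieves a weighted summary.

```python
import math

# Illustrative attention-based memory retrieval (all data here is invented).

def attend(query, keys, values):
    """Softmax over query–key dot products, then a weighted sum of values."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

memory_keys = [[1.0, 0.0], [0.0, 1.0]]    # encodings of past behaviour episodes
memory_vals = [[0.9, 0.1], [0.2, 0.8]]    # what was observed in each episode
summary = attend([1.0, 0.0], memory_keys, memory_vals)
# The query matches the first key, so the summary leans toward the first memory.
```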
arXiv Detail & Related papers (2023-01-17T14:48:58Z) - A psychological theory of explainability [5.715103211247915]
We propose a theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation.
Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give.
arXiv Detail & Related papers (2022-05-17T15:52:24Z) - A Quantitative Symbolic Approach to Individual Human Reasoning [0.0]
We take findings from the literature and show how these, formalized as cognitive principles within a logical framework, can establish a quantitative notion of reasoning.
We employ techniques from non-monotonic reasoning and computer science, namely the solving paradigm of answer set programming (ASP).
Finally, we can fruitfully use plausibility reasoning in ASP to test the effects observed in an existing experiment and explain different majority responses.
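The ASP idea can be illustrated with a brute-force toy solver. The bird/penguin program below is a standard textbook example, not drawn from the paper, and real ASP systems (e.g. clingo) are far more sophisticated: a candidate set of atoms is stable when it equals the closure of its Gelfond-Lifschitz reduct.

```python
from itertools import chain, combinations

# Toy answer-set solver (illustrative). Rules are (head, positive_body, negative_body),
# e.g. "flies :- bird, not penguin." becomes ("flies", {"bird"}, {"penguin"}).
rules = [
    ("flies", {"bird"}, {"penguin"}),
    ("bird", set(), set()),
]
atoms = {"flies", "bird", "penguin"}

def closure(positive_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(candidate):
    # Reduct: keep rules whose negative body is disjoint from the candidate.
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
    return closure(reduct) == candidate

answer_sets = [set(s) for s in chain.from_iterable(
    combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    if is_stable(set(s))]
# The single answer set is {"bird", "flies"}: absent penguin, the bird flies.
```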
arXiv Detail & Related papers (2022-05-10T16:43:47Z) - Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias for modelling the character traits of agents and hence improve mindreading ability.
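Multiplicative fast-weight modulation can be sketched as follows. The weights, trait vector, and dimensions are invented for illustration and do not come from the paper: a per-actor trait vector rescales the shared slow weights of a linear prediction layer.

```python
# Hypothetical sketch of multiplicative fast-weight modulation.

def predict(slow_weights, trait, state):
    """y_j = sum_i (slow_weights[j][i] * trait[i]) * state[i]."""
    return [
        sum(w * t * s for w, t, s in zip(row, trait, state))
        for row in slow_weights
    ]

slow = [[0.5, -0.2], [0.1, 0.4]]   # slow weights, shared across actors
trait = [2.0, 0.0]                  # inferred from this actor's past trajectories
state = [1.0, 1.0]
y = predict(slow, trait, state)     # the trait gates which features matter
```

Because the trait enters multiplicatively, a zero component switches the corresponding input feature off entirely for that actor, which is one way such a scheme can specialize a shared predictor.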
arXiv Detail & Related papers (2022-04-17T11:21:18Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - A Qualitative Theory of Cognitive Attitudes and their Change [8.417971913040066]
We show that the logic allows us to express a variety of relevant concepts for qualitative decision theory.
We also present two extensions of the logic, one by the notion of choice and the other by dynamic operators for belief change and desire change.
arXiv Detail & Related papers (2021-02-16T10:28:49Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.