Mind the gap: Challenges of deep learning approaches to Theory of Mind
- URL: http://arxiv.org/abs/2203.16540v1
- Date: Wed, 30 Mar 2022 15:48:05 GMT
- Title: Mind the gap: Challenges of deep learning approaches to Theory of Mind
- Authors: Jaan Aru, Aqeel Labash, Oriol Corcoll, Raul Vicente
- Abstract summary: Theory of Mind is an essential ability of humans to infer the mental states of others.
Here we provide a coherent summary of the potential, current progress, and problems of deep learning approaches to Theory of Mind.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Theory of Mind is an essential ability of humans to infer the mental states
of others. Here we provide a coherent summary of the potential, current
progress, and problems of deep learning approaches to Theory of Mind. We
highlight that many current findings can be explained through shortcuts. These
shortcuts arise because the tasks used to investigate Theory of Mind in deep
learning systems have been too narrow. Thus, we encourage researchers to
investigate Theory of Mind in complex open-ended environments. Furthermore, to
inspire future deep learning systems we provide a concise overview of prior
work done in humans. We further argue that when studying Theory of Mind with
deep learning, the research's main focus and contribution ought to be opening
up the network's representations. We recommend researchers use tools from the
field of interpretability of AI to study the relationship between different
network components and aspects of Theory of Mind.
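The recommendation above — using interpretability tools to relate network components to aspects of Theory of Mind — can be illustrated with a minimal linear-probe sketch. Everything below is a hedged illustration, not the paper's method: the activations and belief labels are synthetic stand-ins for what would, in practice, come from a trained agent and its environment.

```python
import numpy as np

# Hedged sketch of "opening up the network's representations": fit a linear
# probe that tries to decode a belief variable (e.g. whether another agent
# holds a false belief) from hidden activations. The data here is synthetic;
# real activations would come from a trained agent, labels from the task.

rng = np.random.default_rng(0)
n_samples, hidden_dim = 200, 32

# Synthetic hidden states: the belief signal is embedded along one direction.
belief = rng.integers(0, 2, size=n_samples)            # 0/1 belief label
direction = rng.normal(size=hidden_dim)
activations = rng.normal(size=(n_samples, hidden_dim))
activations += np.outer(belief - 0.5, direction) * 2.0

# Linear probe: least-squares fit with a bias column, threshold at 0.5.
X = np.hstack([activations, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(X, belief, rcond=None)
pred = (X @ w > 0.5).astype(int)
accuracy = (pred == belief).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy would suggest the belief variable is linearly readable from the representation; a control probe on shuffled labels is the usual sanity check against overfitting.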
Related papers
- Philosophy of Cognitive Science in the Age of Deep Learning [0.0]
Deep learning has enabled major advances across most areas of artificial intelligence research.
This perspective paper surveys key areas where contributions from philosophers of cognitive science can be especially fruitful.
arXiv Detail & Related papers (2024-05-07T06:39:47Z) - Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities [63.90227161974381]
SimToM is a novel prompting framework inspired by Simulation Theory's notion of perspective-taking.
Our approach, which requires no additional training and minimal prompt-tuning, shows substantial improvement over existing methods.
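The two-stage perspective-taking idea behind SimToM can be sketched as plain prompt construction. The wording and helper names below are illustrative assumptions based on the abstract, not the paper's actual prompts: stage one asks the model what a character knows, stage two answers the question from that filtered view.

```python
# Hedged sketch of two-stage perspective-taking prompting, inspired by the
# SimToM idea described above. Prompt wording and function names are
# illustrative assumptions, not the paper's actual prompts.

def perspective_prompt(story: str, character: str) -> str:
    """Stage 1: ask the model which events the character knows about."""
    return (
        f"The following is a story:\n{story}\n\n"
        f"List only the events that {character} knows about or witnessed."
    )

def answer_prompt(filtered_story: str, character: str, question: str) -> str:
    """Stage 2: answer from the character's perspective, using only the
    filtered story produced in stage 1."""
    return (
        f"{filtered_story}\n\n"
        f"Answer the following question from {character}'s perspective:\n"
        f"{question}"
    )

story = ("Sally puts her ball in the basket and leaves. "
         "While she is away, Anne moves the ball to the box.")
stage1 = perspective_prompt(story, "Sally")
# A language model would produce Sally's view here; we stub it for illustration.
sally_view = "Sally puts her ball in the basket and leaves."
stage2 = answer_prompt(sally_view, "Sally", "Where will Sally look for her ball?")
print(stage2)
```

In a classic false-belief setup like this one, the stage-1 filtering removes the event the character did not witness, which is what nudges the model toward the character's (false) belief rather than the true world state.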
arXiv Detail & Related papers (2023-11-16T22:49:27Z) - Survey of Consciousness Theory from Computational Perspective [8.521492577054078]
This paper surveys several main branches of consciousness theories originating from different subjects.
It also discusses the existing evaluation metrics of consciousness and possibility for current computational models to be conscious.
arXiv Detail & Related papers (2023-09-18T18:23:58Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode information about others and hierarchical attention to selectively retrieve it.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z) - Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z) - On the link between conscious function and general intelligence in humans and machines [0.9176056742068814]
We look at the cognitive abilities associated with three theories of conscious function.
We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans.
We propose ways in which insights from each of the three theories may be combined into a unified model.
arXiv Detail & Related papers (2022-03-24T02:22:23Z) - Interpretable Reinforcement Learning Inspired by Piaget's Theory of Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's theory of cognitive development provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z) - Deep Learning and the Global Workspace Theory [0.0]
Recent advances in deep learning have allowed Artificial Intelligence to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks.
There is a growing need, however, for novel, brain-inspired cognitive architectures.
The Global Workspace theory refers to a large-scale system integrating and distributing information among networks of specialized modules to create higher-level forms of cognition and awareness.
arXiv Detail & Related papers (2020-12-04T11:36:01Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of such principles, focusing on those that mostly concern higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.