Learning mental states estimation through self-observation: a developmental synergy between intentions and beliefs representations in a deep-learning model of Theory of Mind
- URL: http://arxiv.org/abs/2407.18022v1
- Date: Thu, 25 Jul 2024 13:15:25 GMT
- Title: Learning mental states estimation through self-observation: a developmental synergy between intentions and beliefs representations in a deep-learning model of Theory of Mind
- Authors: Francesca Bianco, Silvia Rigato, Maria Laura Filippetti, Dimitri Ognibene
- Abstract summary: Theory of Mind (ToM) is the ability to attribute beliefs, intentions, or mental states to others.
We show a developmental synergy between learning to predict low-level mental states and attributing high-level ones.
We propose that our computational approach can inform the understanding of human social cognitive development.
- Score: 0.35154948148425685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Theory of Mind (ToM), the ability to attribute beliefs, intentions, or mental states to others, is a crucial feature of human social interaction. In complex environments, where the human sensory system reaches its limits, behaviour is strongly driven by our beliefs about the state of the world around us. Accessing others' mental states, e.g., beliefs and intentions, allows for more effective social interactions in natural contexts. Yet, these variables are not directly observable, making the understanding of ToM a challenging problem of interest to different fields, including psychology, machine learning and robotics. In this paper, we contribute to this topic by showing a developmental synergy between learning to predict low-level mental states (e.g., intentions, goals) and attributing high-level ones (i.e., beliefs). Specifically, we assume that learning belief attribution can occur by observing one's own decision processes involving beliefs, e.g., in a partially observable environment. Using a simple feed-forward deep learning model, we show that, when learning to predict others' intentions and actions, more accurate predictions can be acquired earlier if belief attribution is learnt simultaneously. Furthermore, we show that the learning performance improves even when observed actors have an embodiment different from the observer's, and that the gain is higher when observing belief-driven chunks of behaviour. We propose that our computational approach can inform the understanding of human social cognitive development and be relevant for the design of future adaptive social robots able to autonomously understand, assist, and learn from human interaction partners in novel natural environments and tasks.
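The abstract describes a simple feed-forward network that predicts others' intentions and actions, and learns faster when an auxiliary belief-attribution objective is trained at the same time. The PyTorch sketch below is one plausible way to set up that multi-task comparison; the class name, layer sizes, input encoding, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a feed-forward observer network that
# predicts another agent's next action/intention from observed features, with an
# optional auxiliary head that also attributes a belief about the hidden world state.
# Layer sizes, feature encoding, and the loss weight are illustrative assumptions.
import torch
import torch.nn as nn

class ObserverNet(nn.Module):
    def __init__(self, obs_dim, n_actions, n_beliefs, hidden=64, with_belief_head=True):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.action_head = nn.Linear(hidden, n_actions)   # low-level: intention/action
        self.belief_head = nn.Linear(hidden, n_beliefs) if with_belief_head else None

    def forward(self, x):
        h = self.trunk(x)
        belief = self.belief_head(h) if self.belief_head is not None else None
        return self.action_head(h), belief

def training_step(model, optimiser, obs, action_target, belief_target, belief_weight=1.0):
    """One multi-task update: action prediction plus (optionally) belief attribution."""
    action_logits, belief_logits = model(obs)
    loss = nn.functional.cross_entropy(action_logits, action_target)
    if belief_logits is not None:
        loss = loss + belief_weight * nn.functional.cross_entropy(belief_logits, belief_target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Toy usage with random data, only to show the two regimes compared in the paper:
# training with vs. without the simultaneous belief-attribution objective.
model = ObserverNet(obs_dim=10, n_actions=4, n_beliefs=3, with_belief_head=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(32, 10)
actions = torch.randint(0, 4, (32,))
beliefs = torch.randint(0, 3, (32,))
print(training_step(model, opt, obs, actions, beliefs))
```

Under this framing, the reported developmental synergy would correspond to comparing the action head's learning curves with the auxiliary belief loss enabled versus disabled (with_belief_head=True vs. False).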
Related papers
- COKE: A Cognitive Knowledge Graph for Machine Theory of Mind [87.14703659509502]
Theory of mind (ToM) refers to humans' ability to understand and infer the desires, beliefs, and intentions of others.
COKE is the first cognitive knowledge graph for machine theory of mind.
arXiv Detail & Related papers (2023-05-09T12:36:58Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode information about others and hierarchical attention to selectively retrieve it.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z) - Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z) - Robot Learning Theory of Mind through Self-Observation: Exploiting the Intentions-Beliefs Synergy [0.0]
Theory of Mind (ToM) is the ability to attribute beliefs, intentions, or mental states in general to other agents.
We show the synergy between learning to predict low-level mental states, such as intentions and goals, and attributing high-level ones, such as beliefs.
We propose that our architectural approach can be relevant for the design of future adaptive social robots.
arXiv Detail & Related papers (2022-10-17T21:12:39Z) - The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z) - Social Neuro AI: Social Interaction as the "dark matter" of AI [0.0]
We argue that empirical results from social psychology and social neuroscience, along with the framework of dynamics, can inspire the development of more intelligent artificial agents.
arXiv Detail & Related papers (2021-12-31T13:41:53Z) - AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present AGENT, a benchmark of procedurally generated 3D animations structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z) - Towards hybrid primary intersubjectivity: a neural robotics library for human science [4.232614032390374]
We study primary intersubjectivity as a second person perspective experience characterized by predictive engagement.
We propose an open-source methodology named neural robotics library (NRL) for experimental human-robot interaction.
We discuss some ways human-robot (hybrid) intersubjectivity can contribute to human science research.
arXiv Detail & Related papers (2020-06-29T11:35:46Z) - SensAI+Expanse Emotional Valence Prediction Studies with Cognition and Memory Integration [0.0]
This work contributes an artificially intelligent agent able to assist in cognitive science studies.
The developed artificial agent system (SensAI+Expanse) includes machine learning algorithms, empathetic algorithms, and memory.
Results of the present study show evidence of significant emotional behaviour differences between some age ranges and gender combinations.
arXiv Detail & Related papers (2020-01-03T18:17:57Z)