Learning Theory of Mind via Dynamic Traits Attribution
- URL: http://arxiv.org/abs/2204.09047v1
- Date: Sun, 17 Apr 2022 11:21:18 GMT
- Title: Learning Theory of Mind via Dynamic Traits Attribution
- Authors: Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh, Truyen Tran
- Abstract summary: We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a `fast weights' scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias for modelling the character traits of agents and hence improve mind-reading ability.
- Score: 59.9781556714202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning of Theory of Mind (ToM) is essential to build social agents
that co-live with humans and other agents. This capacity, once acquired, will
help machines infer the mental states of others from observed contextual action
trajectories, enabling future prediction of goals, intention, actions and
successor representations. The underlying mechanism for such a prediction
remains unclear, however. Inspired by the observation that humans often infer
the character traits of others, then use it to explain behaviour, we propose a
new neural ToM architecture that learns to generate a latent trait vector of an
actor from the past trajectories. This trait vector then multiplicatively
modulates the prediction mechanism via a `fast weights' scheme in the
prediction neural network, which reads the current context and predicts the
behaviour. We empirically show that the fast weights provide a good inductive
bias for modelling the character traits of agents and hence improve
mind-reading ability. On the indirect assessment of false-belief
understanding, the new ToM model enables more efficient helping behaviours.
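As an illustrative sketch of the mechanism described above (not the authors' code: the encoder, the gating scheme, and all names and shapes here are assumptions), a trait vector inferred from past trajectories can multiplicatively modulate a predictor's weights like this:

```python
import numpy as np

rng = np.random.default_rng(0)

D_TRAIT, D_IN, D_OUT = 4, 6, 3

# Slow weights: shared across all actors, learned as usual.
W_slow = rng.normal(size=(D_OUT, D_IN))

# Hypothetical learned projection from trait space to a gating matrix.
P = rng.normal(size=(D_OUT * D_IN, D_TRAIT))

def trait_encoder(past_trajectories):
    """Toy stand-in for the learned encoder: summarize an actor's
    past trajectories (a list of observation vectors) into a trait vector."""
    return np.tanh(np.mean(past_trajectories, axis=0)[:D_TRAIT])

def fast_weights(trait):
    """Generate trait-conditioned fast weights: a multiplicative,
    element-wise modulation of the slow weights."""
    gate = 1.0 + np.tanh(P @ trait).reshape(D_OUT, D_IN)
    return W_slow * gate

past = [rng.normal(size=D_IN) for _ in range(5)]
z = trait_encoder(past)               # latent trait of this actor
W_fast = fast_weights(z)              # actor-specific prediction weights

current_context = rng.normal(size=D_IN)
predicted_behaviour = W_fast @ current_context
print(predicted_behaviour.shape)      # (3,)
```

The point of the multiplicative form is that the same slow network yields a different predictor per actor, conditioned only on that actor's inferred traits.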
Related papers
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - A Neural Active Inference Model of Perceptual-Motor Learning [62.39667564455059]
The active inference framework (AIF) is a promising new computational framework grounded in contemporary neuroscience.
In this study, we test the ability of the AIF to capture the role of anticipation in the visual guidance of action in humans.
We present a novel formulation of the prior function that maps a multi-dimensional world-state to a uni-dimensional distribution of free-energy.
arXiv Detail & Related papers (2022-11-16T20:00:38Z) - Developing hierarchical anticipations via neural network-based event segmentation [14.059479351946386]
We model the development of hierarchical predictions via autonomously learned latent event codes.
We present a hierarchical recurrent neural network architecture whose inductive learning biases foster the development of sparsely changing latent states.
A higher level network learns to predict the situations in which the latent states tend to change.
arXiv Detail & Related papers (2022-06-04T18:54:31Z) - A-ACT: Action Anticipation through Cycle Transformations [89.83027919085289]
We take a step back to analyze how the human capability to anticipate the future can be transferred to machine learning algorithms.
A recent study in human psychology explains that, in anticipating an occurrence, the human brain relies on two distinct systems.
In this work, we study the impact of each system for the task of action anticipation and introduce a paradigm to integrate them in a learning framework.
arXiv Detail & Related papers (2022-04-02T21:50:45Z) - Rediscovering Affordance: A Reinforcement Learning Perspective [30.61766085961884]
We propose an integrative theory of affordance-formation based on the theory of reinforcement learning in cognitive sciences.
We implement this theory in a virtual robot model, which demonstrates human-like adaptation of affordances in interactive widget tasks.
arXiv Detail & Related papers (2021-12-24T00:25:03Z) - AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear whether such agents learn or hold the core psychological principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
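A minimal toy of this predictive-processing idea (a sketch under assumed dynamics, not the paper's model): a higher layer predicts lower-layer activity, and both the latent state and the weights are adjusted locally from the prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)

lower = rng.normal(size=4)          # activity of "neighboring" lower neurons
state = np.zeros(2)                 # latent cause held by higher neurons
W = 0.1 * rng.normal(size=(4, 2))   # top-down generative weights
lr = 0.05                           # assumed learning rate

err_before = np.linalg.norm(lower - W @ state)
for _ in range(500):
    error = lower - W @ state       # how well the prediction matched reality
    state += lr * (W.T @ error)     # settle the latent state toward the data
    W += lr * np.outer(error, state)  # local, Hebbian-like weight update
err_after = np.linalg.norm(lower - W @ state)

print(err_after < err_before)       # prediction error shrinks
```

Both updates use only locally available quantities (the error and the adjacent activities), which is the property that distinguishes this family of models from backpropagation.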
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Attention or memory? Neurointerpretable agents in space and time [0.0]
We design a model incorporating a self-attention mechanism that implements task-state representations in a semantic feature space.
To evaluate the agent's selective properties, we add a large volume of task-irrelevant features to observations.
In line with neuroscience predictions, self-attention leads to increased robustness to noise compared to benchmark models.
arXiv Detail & Related papers (2020-07-09T15:04:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.