Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level
Prediction Error Dynamics
- URL: http://arxiv.org/abs/2007.14632v1
- Date: Wed, 29 Jul 2020 06:53:13 GMT
- Title: Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level
Prediction Error Dynamics
- Authors: Guido Schillaci and Alejandra Ciria and Bruno Lara
- Abstract summary: We discuss how emotions arise when differences between expected and actual rates of progress towards a goal are experienced.
We present an intrinsic motivation architecture that generates behaviors towards self-generated and dynamic goals.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How do cognitive agents decide what information is relevant to learn, and how are goals selected to gain this knowledge? Cognitive agents need to be motivated to perform any action. We discuss how emotions arise when
differences between expected and actual rates of progress towards a goal are
experienced. Therefore, the tracking of prediction error dynamics has a tight
relationship with emotions. Here, we suggest that the tracking of prediction
error dynamics allows an artificial agent to be intrinsically motivated to seek
new experiences but constrained to those that generate reducible prediction
error. We present an intrinsic motivation architecture that generates behaviors
towards self-generated and dynamic goals and that regulates goal selection and
the balance between exploitation and exploration through multi-level monitoring
of prediction error dynamics. This new architecture modulates exploration noise
and leverages computational resources according to the dynamics of the overall
performance of the learning system. Additionally, it establishes a possible
solution to the temporal dynamics of goal selection. The results of the
experiments presented here suggest that this architecture outperforms intrinsic
motivation approaches where exploratory noise and goals are fixed and a greedy
strategy is applied.
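To make the mechanism concrete, the following is a minimal sketch, not the authors' implementation, of how tracking prediction error dynamics could modulate exploration noise and goal selection. The class and function names, the window size, and the gains are hypothetical choices made only for illustration.

```python
import numpy as np

class PEDynamicsMonitor:
    """Tracks prediction error (PE) over a sliding window and estimates its
    rate of change. The window size is an arbitrary illustrative value."""

    def __init__(self, window=50):
        self.window = window
        self.errors = []

    def update(self, prediction, observation):
        """Record the current PE magnitude."""
        err = float(np.linalg.norm(np.asarray(prediction) - np.asarray(observation)))
        self.errors = (self.errors + [err])[-self.window:]

    def error_rate(self):
        """Slope of the PE trend: negative means the agent is making progress."""
        if len(self.errors) < 2:
            return 0.0
        t = np.arange(len(self.errors))
        return float(np.polyfit(t, self.errors, 1)[0])

def exploration_noise(rate, base=0.05, gain=5.0):
    """Raise exploratory noise when progress stalls (rate >= 0); keep it low
    while PE is still falling, favoring exploitation."""
    return base * (1.0 + gain * max(0.0, rate))

def select_goal(monitors):
    """Prefer the goal whose PE is currently decreasing fastest, i.e. the one
    with the most reducible prediction error."""
    return min(monitors, key=lambda g: monitors[g].error_rate())
```

In the multi-level spirit of the architecture, an agent would run one such monitor per goal plus one over overall learning performance: a goal whose error trend keeps falling is exploited with little noise, while a flat or rising trend raises exploration noise or triggers goal reselection.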
Related papers
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- Learning Goal-based Movement via Motivational-based Models in Cognitive Mobile Robots [58.720142291102135]
Humans have needs motivating their behavior according to intensity and context.
We also create preferences associated with each action's perceived pleasure, which are susceptible to change over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
arXiv Detail & Related papers (2023-02-20T04:52:24Z)
- Modelling human logical reasoning process in dynamic environmental stress with cognitive agents [13.171768256928509]
We propose a cognitive agent that integrates drift-diffusion with deep reinforcement learning to simulate granular stress effects on the logical reasoning process.
Leveraging a large dataset of 21,157 logical responses, we investigate performance impacts of dynamic stress.
Quantitatively, the framework improves cognition modelling by capturing both subject-specific and stimuli-specific behavioural differences.
Overall, this work demonstrates a powerful, data-driven methodology for simulating and understanding the vagaries of the human logical reasoning process in dynamic contexts.
arXiv Detail & Related papers (2023-01-15T23:46:37Z)
- Intrinsic Motivation in Dynamical Control Systems [5.635628182420597]
We investigate an information-theoretic approach to intrinsic motivation, based on maximizing an agent's empowerment.
We show that this approach generalizes previous attempts to formalize intrinsic motivation.
This opens the door for designing practical artificial, intrinsically motivated controllers.
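A minimal sketch of the empowerment computation appears after this list.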
arXiv Detail & Related papers (2022-12-29T05:20:08Z)
- Inference of Affordances and Active Motor Control in Simulated Agents [0.5161531917413706]
We introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture.
We show that our architecture develops latent states that can be interpreted as affordance maps.
In combination with active inference, we show that flexible, goal-directed behavior can be invoked.
arXiv Detail & Related papers (2022-02-23T14:13:04Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Understanding the origin of information-seeking exploration in probabilistic objectives for control [62.997667081978825]
An exploration-exploitation trade-off is central to the description of adaptive behaviour.
One approach to solving this trade-off has been to equip agents with, or propose that they possess, an intrinsic 'exploratory drive'.
We show that this combination of utility-maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives.
arXiv Detail & Related papers (2021-03-11T18:42:39Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
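The entry above on "Intrinsic Motivation in Dynamical Control Systems" frames intrinsic motivation as maximizing empowerment, the channel capacity between an agent's actions and its future states. Below is a minimal sketch of that quantity for a small tabular model, computed with the standard Blahut-Arimoto iteration; the transition table and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def _log_ratio(p_next, p_s):
    """Elementwise log p(s'|a)/p(s'), with 0 wherever p(s'|a) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(p_next > 0, np.log(p_next / p_s), 0.0)

def empowerment(p_next, iters=200, tol=1e-9):
    """Empowerment as channel capacity max_{p(a)} I(A; S'), via Blahut-Arimoto.
    p_next[a, s] is a hypothetical tabular model: the probability of reaching
    state s after taking action a. Returns the capacity in nats."""
    n_actions = p_next.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)   # start from a uniform action prior
    for _ in range(iters):
        p_s = p_a @ p_next                      # marginal over next states
        # D_KL(p(s'|a) || p(s')) for each action
        d = (p_next * _log_ratio(p_next, p_s)).sum(axis=1)
        new_p_a = p_a * np.exp(d)
        new_p_a /= new_p_a.sum()
        if np.abs(new_p_a - p_a).max() < tol:
            p_a = new_p_a
            break
        p_a = new_p_a
    p_s = p_a @ p_next
    # mutual information I(A; S') under the optimized action distribution
    return float((p_a[:, None] * p_next * _log_ratio(p_next, p_s)).sum())

# Usage: a noiseless 2-action, 2-state channel has capacity log(2) ~ 0.693 nats.
print(empowerment(np.array([[1.0, 0.0], [0.0, 1.0]])))
```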