Learning Goal-based Movement via Motivational-based Models in Cognitive
Mobile Robots
- URL: http://arxiv.org/abs/2302.09759v1
- Date: Mon, 20 Feb 2023 04:52:24 GMT
- Title: Learning Goal-based Movement via Motivational-based Models in Cognitive
Mobile Robots
- Authors: Letícia Berto, Paula Costa, Alexandre Simões, Ricardo Gudwin and
Esther Colombini
- Abstract summary: Humans have needs motivating their behavior according to intensity and context.
We also create preferences associated with each action's perceived pleasure, which is susceptible to changes over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
- Score: 58.720142291102135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans have needs motivating their behavior according to intensity and
context. However, we also create preferences associated with each action's
perceived pleasure, which is susceptible to changes over time. This makes
decision-making more complex, requiring learning to balance needs and
preferences according to the context. To understand how this process works and
enable the development of robots with a motivational-based learning model, we
computationally model a motivation theory proposed by Hull. In this model, the
agent (an abstraction of a mobile robot) is motivated to keep itself in a state
of homeostasis. We added hedonic dimensions to see how preferences affect
decision-making, and we employed reinforcement learning to train our
motivation-based agents. We ran three agents with energy decay rates
representing different metabolisms in two different environments to see the
impact on their strategy, movement, and behavior. The results show that the
agent learned better strategies in the environment that enables choices
better suited to its metabolism. The use of pleasure in the motivational
mechanism significantly impacted behavior learning, mainly for slow-metabolism
agents. When survival is at risk, the agent ignores pleasure and equilibrium,
hinting at how to behave in harsh scenarios.
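A minimal sketch of how such a motivational mechanism could be wired into a reinforcement learner is shown below. The set point, decay rate, action set, hedonic values, weighting, and the use of tabular Q-learning are all illustrative assumptions for this sketch, not details taken from the paper.

```python
import random

# Sketch (assumed formulation): the agent's drive is its distance from a
# homeostatic energy set point, reward combines drive reduction with a
# hedonic preference term, and a tabular Q-learner picks among abstract actions.

SET_POINT = 1.0          # homeostatic target energy level (assumption)
DECAY = 0.05             # per-step energy decay, i.e. "metabolism" (assumption)
ACTIONS = ["eat", "rest", "explore"]
ENERGY_GAIN = {"eat": 0.3, "rest": 0.05, "explore": 0.0}
PLEASURE = {"eat": 0.2, "rest": 0.0, "explore": 0.4}   # hedonic values (assumption)
HEDONIC_WEIGHT = 0.5

def drive(energy):
    """Hull-style drive: deviation from the homeostatic set point."""
    return abs(SET_POINT - energy)

def step(energy, action):
    """Apply decay and the action's effect; return new energy and reward."""
    new_energy = max(0.0, min(SET_POINT, energy - DECAY + ENERGY_GAIN[action]))
    drive_reduction = drive(energy) - drive(new_energy)
    reward = drive_reduction + HEDONIC_WEIGHT * PLEASURE[action]
    return new_energy, reward

def discretize(energy, bins=10):
    """Map the continuous energy level to a discrete state index."""
    return min(bins - 1, int(energy * bins))

# Tabular Q-learning over the discretized energy level.
q = {(s, a): 0.0 for s in range(10) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    energy = 0.5
    for t in range(200):
        s = discretize(energy)
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        energy, r = step(energy, a)
        s2 = discretize(energy)
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, act)] for act in ACTIONS) - q[(s, a)])
        if energy <= 0.0:   # survival failure ends the episode
            break
```

Varying DECAY loosely mimics the fast and slow metabolisms compared in the paper; whether the learned policy keeps chasing the hedonic term or ignores it when energy runs critically low is the kind of behavioral question the experiments probe.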
Related papers
- Deep Active Visual Attention for Real-time Robot Motion Generation:
Emergence of Tool-body Assimilation and Adaptive Tool-use [9.141661467673817]
This paper proposes a novel robot motion generation model, inspired by a human cognitive structure.
The model incorporates a state-driven active top-down visual attention module, which acquires attentions that can actively change targets based on task states.
The results suggested improved flexibility in the model's visual perception, which sustained stable attention and motion even when provided with untrained tools or exposed to the experimenter's distractions.
arXiv Detail & Related papers (2022-06-29T10:55:32Z)
- The Introspective Agent: Interdependence of Strategy, Physiology, and
Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z)
- Modelling Behaviour Change using Cognitive Agent Simulations [0.0]
This paper presents work-in-progress research to apply selected behaviour change theories to simulated agents.
The research focuses on the complex agent architectures required for self-determined goal achievement in adverse circumstances.
arXiv Detail & Related papers (2021-10-16T19:19:08Z)
- Controlling the Sense of Agency in Dyadic Robot Interaction: An Active
Inference Approach [6.421670116083633]
We examine dyadic imitative interactions of robots using a variational recurrent neural network model.
We also examine how regulating the complexity term to minimize free energy during training determines the dynamic characteristics of the networks.
arXiv Detail & Related papers (2021-03-03T02:38:09Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level
Prediction Error Dynamics [68.8204255655161]
We discuss how emotions arise when differences between expected and actual rates of progress towards a goal are experienced.
We present an intrinsic motivation architecture that generates behaviors towards self-generated and dynamic goals.
arXiv Detail & Related papers (2020-07-29T06:53:13Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.