Inference of Affordances and Active Motor Control in Simulated Agents
- URL: http://arxiv.org/abs/2202.11532v1
- Date: Wed, 23 Feb 2022 14:13:04 GMT
- Title: Inference of Affordances and Active Motor Control in Simulated Agents
- Authors: Fedor Scholz, Christian Gumbsch, Sebastian Otte, Martin V. Butz
- Abstract summary: We introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture.
We show that our architecture develops latent states that can be interpreted as affordance maps.
In combination with active inference, we show that flexible, goal-directed behavior can be invoked.
- Score: 0.5161531917413706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flexible, goal-directed behavior is a fundamental aspect of human life. Based
on the free energy minimization principle, the theory of active inference
formalizes the generation of such behavior from a computational neuroscience
perspective. Based on the theory, we introduce an output-probabilistic,
temporally predictive, modular artificial neural network architecture, which
processes sensorimotor information, infers behavior-relevant aspects of its
world, and invokes highly flexible, goal-directed behavior. We show that our
architecture, which is trained end-to-end to minimize an approximation of free
energy, develops latent states that can be interpreted as affordance maps. That
is, the emerging latent states signal which actions lead to which effects
dependent on the local context. In combination with active inference, we show
that flexible, goal-directed behavior can be invoked, incorporating the
emerging affordance maps. As a result, our simulated agent flexibly steers
through continuous spaces, avoids collisions with obstacles, and prefers
pathways that lead to the goal with high certainty. Additionally, we show that
the learned agent is highly suitable for zero-shot generalization across
environments: After training the agent in a handful of fixed environments with
obstacles and other terrains affecting its behavior, it performs similarly well
in procedurally generated environments containing different amounts of
obstacles and terrains of various sizes at different locations. To improve and
focus model learning further, we plan to invoke active inference-based,
information-gain-oriented behavior also while learning the temporally
predictive model itself in the near future. Moreover, we intend to foster the
development of both deeper event-predictive abstractions and compact, habitual
behavioral primitives.
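To make the planning idea in the abstract more concrete, the following Python sketch illustrates active-inference-style action selection with an output-probabilistic, temporally predictive model: candidate action sequences are rolled forward and the sequence minimizing an approximate expected free energy (goal mismatch plus predicted uncertainty) is chosen. All names, the toy dynamics model, and the scoring details are illustrative assumptions, not the paper's implementation.
```python
# Illustrative sketch (not the paper's code): active-inference-style action
# selection with an output-probabilistic, temporally predictive model.
import numpy as np

class ToyPredictiveModel:
    """Stand-in for a learned temporally predictive network: trivial linear
    dynamics with a constant predicted variance (purely for illustration)."""
    def predict(self, state, action):
        next_mean = state + 0.1 * action
        next_var = np.full_like(state, 0.05)
        return next_mean, next_var

def expected_free_energy(pred_mean, pred_var, goal):
    """Approximate expected free energy of one predicted step: squared
    distance to the goal (risk) plus predicted variance (ambiguity),
    so uncertain pathways are penalized."""
    return np.sum((pred_mean - goal) ** 2) + np.sum(pred_var)

def plan_action(model, state, goal, candidate_sequences, horizon=5):
    """Roll each candidate action sequence through the model and return the
    first action of the sequence with the lowest accumulated score."""
    best_action, best_score = None, np.inf
    for actions in candidate_sequences:        # each: (horizon, action_dim)
        mean, score = state.astype(float), 0.0
        for t in range(horizon):
            mean, var = model.predict(mean, actions[t])
            score += expected_free_energy(mean, var, goal)
        if score < best_score:
            best_action, best_score = actions[0], score
    return best_action

# Example usage with random candidate action sequences in a 2D space.
rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, size=(32, 5, 2))   # 32 sequences, horizon 5
action = plan_action(ToyPredictiveModel(), np.zeros(2), np.array([1.0, 1.0]), candidates)
```
The architecture in the paper is a modular, recurrent network trained end-to-end to minimize an approximation of free energy; the sketch only conveys the intuition stated in the abstract, namely that pathways leading to the goal with high certainty are preferred.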
Related papers
- Dynamic planning in hierarchical active inference [0.0]
This study focuses on dynamic planning in active inference.
By dynamic planning, we refer to the ability of the human brain to infer and impose motor trajectories related to cognitive decisions.
arXiv Detail & Related papers (2024-02-18T17:32:53Z)
- Multi-Agent Dynamic Relational Reasoning for Social Robot Navigation [50.01551945190676]
Social robot navigation can be helpful in various contexts of daily life but requires safe human-robot interactions and efficient trajectory planning.
We propose a systematic relational reasoning approach with explicit inference of the underlying dynamically evolving relational structures.
We demonstrate its effectiveness for multi-agent trajectory prediction and social robot navigation.
arXiv Detail & Related papers (2024-01-22T18:58:22Z)
- Active Inference and Intentional Behaviour [40.19132448481507]
Recent advances in theoretical biology suggest that basal cognition and sentient behaviour are emergent properties of in vitro cell cultures and neuronal networks.
We characterise this kind of self-organisation through the lens of the free energy principle, i.e., as self-evidencing.
We investigate these forms of (reactive, sentient, and intentional) behaviour using simulations.
arXiv Detail & Related papers (2023-12-06T09:38:35Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- A Neural Active Inference Model of Perceptual-Motor Learning [62.39667564455059]
The active inference framework (AIF) is a promising new computational framework grounded in contemporary neuroscience.
In this study, we test the ability of the AIF to capture the role of anticipation in the visual guidance of action in humans.
We present a novel formulation of the prior function that maps a multi-dimensional world-state to a uni-dimensional distribution of free-energy.
arXiv Detail & Related papers (2022-11-16T20:00:38Z)
- Contrastive Active Inference [12.361539023886161]
We propose a contrastive objective for active inference that reduces the computational burden in learning the agent's generative model and planning future actions.
Our method performs notably better than likelihood-based active inference in image-based tasks, while also being computationally cheaper and easier to train (a generic sketch of such a contrastive objective appears after this list).
arXiv Detail & Related papers (2021-10-19T16:20:49Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level Prediction Error Dynamics [68.8204255655161]
We discuss how emotions arise when differences between expected and actual rates of progress towards a goal are experienced.
We present an intrinsic motivation architecture that generates behaviors towards self-generated and dynamic goals.
arXiv Detail & Related papers (2020-07-29T06:53:13Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
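The Contrastive Active Inference entry above replaces likelihood-based model learning with a contrastive objective. As a generic illustration of that idea, the sketch below shows an InfoNCE-style latent-prediction loss; this is an assumed, standard formulation for illustration only, not necessarily the loss used in that paper.
```python
# Generic InfoNCE-style contrastive objective between predicted latent states
# and encoded observations; an assumed illustration, not the paper's loss.
import numpy as np

def contrastive_latent_loss(pred_latents, obs_latents, temperature=0.1):
    """Each predicted latent should be more similar (cosine) to its own
    encoded observation than to the other observations in the batch.
    Shapes: (batch, latent_dim); positive pairs lie on the diagonal."""
    pred = pred_latents / np.linalg.norm(pred_latents, axis=1, keepdims=True)
    obs = obs_latents / np.linalg.norm(obs_latents, axis=1, keepdims=True)
    logits = pred @ obs.T / temperature
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))
```
Compared with reconstructing full observations (e.g., images) under a likelihood model, such a contrastive term only requires discriminating the matching latent within a batch, which is consistent with the computational savings reported in the summary above.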
This list is automatically generated from the titles and abstracts of the papers in this site.