Integrating cognitive map learning and active inference for planning in
ambiguous environments
- URL: http://arxiv.org/abs/2308.08307v1
- Date: Wed, 16 Aug 2023 12:10:23 GMT
- Title: Integrating cognitive map learning and active inference for planning in
ambiguous environments
- Authors: Toon Van de Maele, Bart Dhoedt, Tim Verbelen, Giovanni Pezzulo
- Abstract summary: We propose the integration of a statistical model of cognitive map formation within an active inference agent that supports planning under uncertainty.
Specifically, we examine the clone-structured cognitive graph (CSCG) model of cognitive map formation and compare a naive clone graph agent with an active inference-driven clone graph agent.
Our findings demonstrate that while both agents are effective in simple scenarios, the active inference agent is more effective when planning in challenging scenarios, in which sensory observations provide ambiguous information about location.
- Score: 8.301959009586861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Living organisms need to acquire both cognitive maps for learning the
structure of the world and planning mechanisms able to deal with the challenges
of navigating ambiguous environments. Although significant progress has been
made in each of these areas independently, the best way to integrate them is an
open research question. In this paper, we propose the integration of a
statistical model of cognitive map formation within an active inference agent
that supports planning under uncertainty. Specifically, we examine the
clone-structured cognitive graph (CSCG) model of cognitive map formation and
compare a naive clone graph agent with an active inference-driven clone graph
agent, in three spatial navigation scenarios. Our findings demonstrate that
while both agents are effective in simple scenarios, the active inference agent
is more effective when planning in challenging scenarios, in which sensory
observations provide ambiguous information about location.
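The contrast between the two agents hinges on how candidate actions are scored. Below is a minimal sketch of one-step expected-free-energy action selection over a clone-structured HMM; it illustrates the general active-inference scheme rather than the authors' code, and the transition tensor `T`, emission matrix `E`, and preference vector `log_prefs` are assumed placeholders.

```python
import numpy as np

def expected_free_energy(belief, T_a, E, log_prefs):
    """One-step expected free energy for an action: risk + ambiguity.

    belief: current distribution over clone states, shape (n_states,)
    T_a[i, j] = P(s'=j | s=i, a);  E[j, o] = P(o | s'=j)
    log_prefs[o]: log-probability of preferred observations
    """
    q_s = belief @ T_a                      # predicted state belief
    q_o = q_s @ E                           # predicted observation distribution
    # Risk: KL divergence from predicted to preferred observations
    risk = np.sum(q_o * (np.log(q_o + 1e-12) - log_prefs))
    # Ambiguity: expected entropy of the emission model under q_s
    ambiguity = q_s @ (-np.sum(E * np.log(E + 1e-12), axis=1))
    return risk + ambiguity

def select_action(belief, T, E, log_prefs):
    """Pick the action minimizing one-step expected free energy."""
    G = [expected_free_energy(belief, T[a], E, log_prefs) for a in range(len(T))]
    return int(np.argmin(G))
```

Under aliased observations, the ambiguity term favors actions whose outcomes disambiguate the agent's location, which is the behavior the abstract attributes to the active-inference agent in the challenging scenarios.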
Related papers
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions include a data-driven approach with a simple architecture designed for real-time operation, a self-supervised training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Dynamic planning in hierarchical active inference [0.0]
We address the human brain's ability to infer and impose motor trajectories related to cognitive decisions. This study departs from traditional views centered on neural networks and reinforcement learning, pointing instead toward a yet unexplored direction in active inference.
arXiv Detail & Related papers (2024-02-18T17:32:53Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard deep learning framework. These auxiliary tasks provide additional supervision signals for inferring the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
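Integrating auxiliary tasks into a standard deep learning pipeline typically reduces to summing weighted per-task losses over a shared backbone. The sketch below is a generic illustration of that pattern; the task heads and weights are assumptions, not the paper's design.

```python
def total_loss(shared_feats, heads, targets, weights):
    """Weighted sum of the main task loss and auxiliary task losses.

    heads: dict name -> (predict_fn, loss_fn), all reading the shared features;
    targets and weights are keyed by the same task names.
    """
    return sum(
        weights[name] * loss_fn(predict_fn(shared_feats), targets[name])
        for name, (predict_fn, loss_fn) in heads.items()
    )
```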
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Spatio-Temporal Domain Awareness for Multi-Agent Collaborative Perception [18.358998861454477]
Multi-agent collaborative perception, as a potential application of vehicle-to-everything communication, could significantly improve the perception performance of autonomous vehicles over single-agent perception.
We propose SCOPE, a novel collaborative perception framework that aggregates awareness characteristics across agents in an end-to-end manner.
arXiv Detail & Related papers (2023-07-26T03:00:31Z)
- Active Sensing with Predictive Coding and Uncertainty Minimization [0.0]
We present an end-to-end procedure for embodied exploration inspired by two biological computations: predictive coding and uncertainty minimization.
We first demonstrate our approach in a maze navigation task and show that it can discover the underlying transition distributions and spatial features of the environment.
We show that our model builds unsupervised representations through exploration that allow it to efficiently categorize visual scenes.
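As a loose illustration of uncertainty-driven exploration (not the paper's predictive-coding implementation), the sketch below maintains Dirichlet counts over maze transitions and steers the agent toward the action whose outcome distribution it is currently most uncertain about.

```python
import numpy as np

class CountBasedExplorer:
    """Dirichlet counts over (state, action) -> next-state transitions."""

    def __init__(self, n_states, n_actions, prior=1.0):
        self.counts = np.full((n_states, n_actions, n_states), prior)

    def update(self, s, a, s_next):
        self.counts[s, a, s_next] += 1.0

    def act(self, s):
        """Choose the action with the highest posterior entropy,
        i.e. the transition the agent knows least about."""
        probs = self.counts[s] / self.counts[s].sum(axis=1, keepdims=True)
        entropy = -np.sum(probs * np.log(probs), axis=1)
        return int(np.argmax(entropy))
```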
arXiv Detail & Related papers (2023-07-02T21:14:49Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
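One simple way to realize intervention targeting, assuming a small discrete hypothesis space over graphs (an illustrative proxy, not the paper's neural method), is to intervene on the node whose parent set the weighted candidate graphs disagree about most.

```python
import numpy as np

def pick_intervention(candidate_graphs, weights):
    """candidate_graphs: list of adjacency matrices (entry [i, j] = edge i -> j);
    weights: posterior probability of each candidate graph.
    Returns the node whose incoming edges are most uncertain."""
    stacked = np.stack(candidate_graphs)          # (n_graphs, n, n)
    w = np.asarray(weights)[:, None, None]
    p_edge = (w * stacked).sum(axis=0)            # marginal P(edge i -> j)
    eps = 1e-12
    # Bernoulli entropy of each potential edge
    h = -(p_edge * np.log(p_edge + eps) + (1 - p_edge) * np.log(1 - p_edge + eps))
    return int(np.argmax(h.sum(axis=0)))          # node with most uncertain parents
```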
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- One-shot learning of paired association navigation with biologically plausible schemas [3.990406494980651]
Rodent one-shot learning in a multiple paired association navigation task has been postulated to be schema-dependent.
We compose an agent from schemas with biologically plausible neural implementations.
We show that schemas supplemented by an actor-critic allow the agent to succeed even when an obstacle prevents a direct heading.
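The actor-critic component mentioned here is standard; a minimal tabular TD(0) version of the update (an illustration, not the paper's biologically plausible implementation) looks like this:

```python
import numpy as np

def actor_critic_step(V, logits, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular actor-critic update driven by the TD error.

    V: state values, shape (n_states,)
    logits: policy logits, shape (n_states, n_actions)
    """
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error                 # critic: move value toward target
    pi = np.exp(logits[s] - logits[s].max())
    pi /= pi.sum()                           # softmax policy at state s
    grad = -pi
    grad[a] += 1.0                           # d log pi(a|s) / d logits
    logits[s] += alpha * td_error * grad     # actor: policy-gradient step
    return td_error
```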
arXiv Detail & Related papers (2021-06-07T13:03:51Z)
- Unsupervised Discriminative Embedding for Sub-Action Learning in Complex Activities [54.615003524001686]
This paper proposes a novel approach for unsupervised sub-action learning in complex activities.
The proposed method maps both visual and temporal representations to a latent space where the sub-actions are learnt discriminatively.
We show that the proposed combination of visual-temporal embedding and discriminative latent concepts allows robust action representations to be learned in an unsupervised setting.
arXiv Detail & Related papers (2021-04-30T20:07:27Z)
- Language-guided Navigation via Cross-Modal Grounding and Alternate Adversarial Learning [66.9937776799536]
The emerging vision-and-language navigation (VLN) problem aims at learning to navigate an agent to the target location in unseen photo-realistic environments.
The main challenges of VLN arise from two aspects: first, the agent needs to attend to the meaningful paragraphs of the language instruction that correspond to the dynamically varying visual environments.
We propose a cross-modal grounding module to equip the agent with a better ability to track the correspondence between the textual and visual modalities.
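Cross-modal grounding modules of this kind are commonly built on attention between the two streams; below is a generic scaled dot-product cross-attention sketch from visual features to instruction tokens, an illustrative stand-in rather than the paper's exact module.

```python
import numpy as np

def cross_modal_attention(visual, text, W_q, W_k, W_v):
    """visual: (n_v, d); text: (n_t, d); W_*: (d, d_k) projection matrices.
    Each visual feature attends over the instruction tokens."""
    q = visual @ W_q                          # queries from the visual stream
    k, v = text @ W_k, text @ W_v             # keys/values from the language stream
    scores = q @ k.T / np.sqrt(k.shape[1])    # scaled dot-product similarities
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # softmax over tokens
    return attn @ v                           # language-grounded visual features
```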
arXiv Detail & Related papers (2020-11-22T09:13:46Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
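The self-organizing-map machinery referenced in the title follows the classic Kohonen update; here is a minimal sketch applied to concatenated state-action-prediction vectors (the representation is an assumption, and the paper's architecture may differ).

```python
import numpy as np

def som_update(weights, x, lr=0.1, sigma=1.0):
    """One Kohonen update. weights: (rows, cols, dim) unit grid;
    x: a state-action-prediction vector of length dim."""
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    rows, cols = np.indices(dists.shape)
    grid_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))                 # neighborhood kernel
    weights += lr * h[..., None] * (x - weights)            # pull units toward x
    return bmu
```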
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.