Building spatial world models from sparse transitional episodic memories
- URL: http://arxiv.org/abs/2505.13696v1
- Date: Mon, 19 May 2025 19:56:24 GMT
- Title: Building spatial world models from sparse transitional episodic memories
- Authors: Zizhan He, Maxime Daigle, Pouya Bashivan
- Abstract summary: We show that a neural network can learn to construct a spatial model of its surroundings from sparse and disjoint episodic memories. We show that the Episodic Spatial World Model (ESWM) is highly sample-efficient, requiring minimal observations to construct a robust representation of the environment.
- Score: 1.3459777108901956
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many animals possess a remarkable capacity to rapidly construct flexible mental models of their environments. These world models are crucial for ethologically relevant behaviors such as navigation, exploration, and planning. The ability to form episodic memories and make inferences based on these sparse experiences is believed to underpin the efficiency and adaptability of these models in the brain. Here, we ask: Can a neural network learn to construct a spatial model of its surroundings from sparse and disjoint episodic memories? We formulate the problem in a simulated world and propose a novel framework, the Episodic Spatial World Model (ESWM), as a potential answer. We show that ESWM is highly sample-efficient, requiring minimal observations to construct a robust representation of the environment. It is also inherently adaptive, allowing for rapid updates when the environment changes. In addition, we demonstrate that ESWM readily enables near-optimal strategies for exploring novel environments and navigating between arbitrary points, all without the need for additional training.
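The abstract does not spell out ESWM's architecture, but its core idea, inferring unseen parts of the environment from an unordered set of sparse transition memories, can be illustrated with a minimal, hypothetical sketch. Everything below (the (observation, action, next observation) memory format, module names, and dimensions) is an illustrative assumption, not a detail taken from the paper.

```python
# Minimal sketch (not the authors' code): predict the outcome of a queried
# transition from a bag of sparse episodic transition memories.
# The (obs, action, next_obs) memory format and all sizes are assumptions.
import torch
import torch.nn as nn


class EpisodicSpatialWorldModelSketch(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, d_model: int = 64):
        super().__init__()
        # Embed each episodic memory, i.e. one (obs, action, next_obs) transition.
        self.memory_encoder = nn.Linear(2 * obs_dim + act_dim, d_model)
        # Embed the query: a start observation and an action whose outcome is unknown.
        self.query_encoder = nn.Linear(obs_dim + act_dim, d_model)
        # Permutation-invariant aggregation over the memory set (no positional
        # encoding), so memories are treated as a sparse, unordered collection.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.aggregator = nn.TransformerEncoder(layer, num_layers=2)
        # Predict the next observation for the queried transition.
        self.readout = nn.Linear(d_model, obs_dim)

    def forward(self, memories: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # memories: (batch, n_memories, 2*obs_dim + act_dim)
        # query:    (batch, obs_dim + act_dim)
        mem_tokens = self.memory_encoder(memories)
        query_token = self.query_encoder(query).unsqueeze(1)
        tokens = torch.cat([query_token, mem_tokens], dim=1)
        encoded = self.aggregator(tokens)
        return self.readout(encoded[:, 0])  # prediction read from the query token


if __name__ == "__main__":
    obs_dim, act_dim = 16, 4
    model = EpisodicSpatialWorldModelSketch(obs_dim, act_dim)
    memories = torch.randn(2, 10, 2 * obs_dim + act_dim)  # 10 sparse transitions
    query = torch.randn(2, obs_dim + act_dim)
    print(model(memories, query).shape)  # torch.Size([2, 16])
```

The permutation-invariant aggregation is one plausible way to respect the paper's framing of memories as sparse and disjoint: the prediction should not depend on the order in which transitions were experienced.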
Related papers
- Learning Local Causal World Models with State Space Models and Attention [1.5498250598583487]
We show that an SSM can model the dynamics of a simple environment and learn a causal model at the same time. This paves the way for further experiments that lean into the strengths of SSMs and enhance them with causal awareness.
arXiv Detail & Related papers (2025-05-04T11:57:02Z) - World Model-Based Learning for Long-Term Age of Information Minimization in Vehicular Networks [53.98633183204453]
In this paper, a novel world model-based learning framework is proposed to minimize the packet-completeness-aware age of information (CAoI) in a vehicular network. A world model framework is proposed to jointly learn a dynamic model of the mmWave V2X environment and use it to imagine trajectories for learning how to perform link scheduling. In particular, the long-term policy is learned from differentiable imagined trajectories instead of environment interactions.
arXiv Detail & Related papers (2025-05-03T06:23:18Z) - AdaWorld: Learning Adaptable World Models with Latent Actions [76.50869178593733]
We propose AdaWorld, an innovative world model learning approach that enables efficient adaptation. The key idea is to incorporate action information during the pretraining of world models. We then develop an autoregressive world model that conditions on these latent actions.
arXiv Detail & Related papers (2025-03-24T17:58:15Z) - Multimodal Dreaming: A Global Workspace Approach to World Model-Based Reinforcement Learning [2.5749046466046903]
In Reinforcement Learning (RL), world models aim to capture how the environment evolves in response to the agent's actions. We show that performing the dreaming process inside the latent space allows for training with fewer environment steps. We conclude that the combination of GW with World Models holds great potential for improving decision-making in RL agents.
arXiv Detail & Related papers (2025-02-28T15:24:17Z) - Learning Dynamic Cognitive Map with Autonomous Navigation [8.301959009586861]
We introduce a novel computational model, rooted in biologically inspired principles, for navigating and mapping a space.
Our model incorporates a dynamically expanding cognitive map over predicted poses within an Active Inference framework.
Our model achieves this without prior knowledge of observation and world dimensions, underscoring its robustness and efficacy in navigating intricate environments.
arXiv Detail & Related papers (2024-11-13T08:59:53Z) - Neuromorphic Computing and Sensing in Space [69.34740063574921]
Neuromorphic computer chips are designed to mimic the architecture of a biological brain.
The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications.
arXiv Detail & Related papers (2022-12-10T07:46:29Z) - A Biologically-Inspired Dual Stream World Model [0.456877715768796]
The medial temporal lobe (MTL) is hypothesized to be an experience-construction system in mammals.
We propose a novel variant, the Dual Stream World Model (DSWM), which learns from high-dimensional observations and dissociates them into context and content streams.
We show that this representation is useful as a reinforcement learning basis function, and that the generative model can be used to aid the policy learning process using Dyna-like updates.
arXiv Detail & Related papers (2022-09-16T16:27:48Z) - OPEn: An Open-ended Physics Environment for Learning Without a Task [132.6062618135179]
We study whether models of the world learned in an open-ended physics environment, without any specific task, can be reused for downstream physics reasoning tasks.
We build a benchmark, the Open-ended Physics ENvironment (OPEn), and design several tasks to explicitly test representation learning in this environment.
We find that an agent using unsupervised contrastive learning for representation learning, and impact-driven learning for exploration, achieved the best results.
arXiv Detail & Related papers (2021-10-13T17:48:23Z) - Environment Shaping in Reinforcement Learning using State Abstraction [63.444831173608605]
We propose a novel framework of environment shaping using state abstraction.
Our key idea is to compress the environment's large state space with noisy signals to an abstracted space.
We show that the agent's policy learnt in the shaped environment preserves near-optimal behavior in the original environment.
arXiv Detail & Related papers (2020-06-23T17:00:22Z) - Mutual Information Maximization for Robust Plannable Representations [82.83676853746742]
We present MIRO, an information-theoretic representation learning algorithm for model-based reinforcement learning.
We show that our approach is more robust than reconstruction objectives in the presence of distractors and cluttered scenes.
arXiv Detail & Related papers (2020-05-16T21:58:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.