From monoliths to modules: Decomposing transducers for efficient world modelling
- URL: http://arxiv.org/abs/2512.02193v1
- Date: Mon, 01 Dec 2025 20:37:43 GMT
- Title: From monoliths to modules: Decomposing transducers for efficient world modelling
- Authors: Alexander Boyd, Franz Nowak, David Hyland, Manuel Baltieri, Fernando E. Rosas,
- Abstract summary: We develop a framework for decomposing complex world models represented by transducers. Our results clarify how to invert this process, deriving sub-transducers operating on distinct input-output subspaces.
- Score: 74.41506965793417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: World models have recently been proposed as sandbox environments in which AI agents can be trained and evaluated before deployment. Although realistic world models often have high computational demands, efficient modelling is usually possible by exploiting the fact that real-world scenarios tend to involve subcomponents that interact in a modular manner. In this paper, we explore this idea by developing a framework for decomposing complex world models represented by transducers, a class of models generalising POMDPs. Whereas the composition of transducers is well understood, our results clarify how to invert this process, deriving sub-transducers that operate on distinct input-output subspaces. This enables parallelizable and interpretable alternatives to monolithic world modelling that can support distributed inference. Overall, these results lay the groundwork for bridging the structural transparency demanded by AI safety with the computational efficiency required for real-world inference.
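To make the composition direction concrete, here is a minimal Python sketch of how two sub-transducers on distinct input-output subspaces combine into a product transducer; decomposition, as studied in the paper, asks when a monolithic transducer factors this way. The `Transducer` class and `parallel_compose` helper are illustrative stand-ins, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical minimal transducer: an internal state plus update and output
# maps. The paper's transducers generalise POMDPs; this sketch only
# illustrates the (well-understood) composition direction.
@dataclass
class Transducer:
    state: Any
    update: Callable[[Any, Any], Any]   # (state, input) -> next state
    output: Callable[[Any, Any], Any]   # (state, input) -> output

    def step(self, x):
        y = self.output(self.state, x)
        self.state = self.update(self.state, x)
        return y

def parallel_compose(t1: Transducer, t2: Transducer) -> Transducer:
    """Product transducer acting on the joint input-output space."""
    def update(s, x):
        (s1, s2), (x1, x2) = s, x
        return (t1.update(s1, x1), t2.update(s2, x2))
    def output(s, x):
        (s1, s2), (x1, x2) = s, x
        return (t1.output(s1, x1), t2.output(s2, x2))
    return Transducer((t1.state, t2.state), update, output)

# Two independent subcomponents: a running sum and a parity tracker.
total  = Transducer(0, lambda s, x: s + x, lambda s, x: s + x)
parity = Transducer(0, lambda s, x: (s + x) % 2, lambda s, x: (s + x) % 2)
joint = parallel_compose(total, parity)
print([joint.step((x, x)) for x in [1, 0, 1, 1]])  # [(1, 1), (1, 1), (2, 0), (3, 1)]
```

Running the joint machine is step-for-step equivalent to running the two factors side by side, which is what makes the factored form parallelizable and inspectable per subspace.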
Related papers
- Continual learning and refinement of causal models through dynamic predicate invention [0.6198237241838559]
We propose a framework for constructing symbolic causal world models entirely online. We leverage the power of Meta-Interpretive Learning and predicate invention to find semantically meaningful and reusable abstractions.
arXiv Detail & Related papers (2026-02-19T10:08:31Z)
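As a toy illustration of the predicate-invention step above (not the paper's Meta-Interpretive Learning system), the sketch below composes two background relations into a new predicate and keeps it only if it covers the positive examples and none of the negatives; all relation names are hypothetical.

```python
# Toy predicate invention: invent a new predicate as the composition of
# two background relations and accept it if it explains the examples.
parent = {("ann", "bob"), ("bob", "cal")}

def compose(r1, r2):
    # invented(X, Z) :- r1(X, Y), r2(Y, Z)
    return {(x, z) for (x, y1) in r1 for (y2, z) in r2 if y1 == y2}

positives = {("ann", "cal")}   # e.g. grandparent facts to explain
negatives = {("ann", "bob")}

invented = compose(parent, parent)
if positives <= invented and not (negatives & invented):
    print("invented predicate covers the examples:", invented)
```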
- From Word to World: Can Large Language Models be Implicit Text-based World Models? [82.47317196099907]
Agentic reinforcement learning increasingly relies on experience-driven scaling. World models offer a potential way to improve learning efficiency through simulated experience. We study whether large language models can reliably serve this role and under what conditions they meaningfully benefit agents.
arXiv Detail & Related papers (2025-12-21T17:28:42Z)
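A minimal sketch of the implicit-world-model setup above, assuming a generic text-completion API: the LLM is prompted with the current textual state and an action, and its completion is read back as the predicted next observation. `query_llm` is a hypothetical stub, not an API from the paper.

```python
# Sketch of an LLM used as an implicit text-based world model: the agent
# tries actions against the model rather than the real environment, which
# is the learning-efficiency argument in the paper.
def query_llm(prompt: str) -> str:
    # Stub standing in for any text-completion API; always returns the
    # same string here, just so the sketch runs end to end.
    return "You are in the kitchen. The door to the north is open."

def llm_world_model(state: str, action: str) -> str:
    prompt = (
        "You simulate a text environment.\n"
        f"State: {state}\n"
        f"Action: {action}\n"
        "Next observation:"
    )
    return query_llm(prompt).strip()

state = "You are in the hallway. A door leads north."
for action in ["open door", "go north"]:
    state = llm_world_model(state, action)
    print(action, "->", state)
```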
- World Model Implanting for Test-time Adaptation of Embodied Agents [29.514831254621438]
In embodied AI, a persistent challenge is enabling agents to robustly adapt to novel domains without requiring extensive data collection or retraining. We present a world model implanting framework (WorMI) that combines the reasoning capabilities of large language models with independently learned, domain-specific world models. We evaluate WorMI on the VirtualHome and ALFWorld benchmarks, demonstrating superior zero-shot and few-shot performance compared to several LLM-based approaches.
arXiv Detail & Related papers (2025-09-04T07:32:16Z)
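The implanting idea above can be sketched as a registry of independently learned, domain-specific world models that a shared reasoner routes queries to. The names below are hypothetical; this is an illustration of the composition pattern, not the WorMI implementation.

```python
from typing import Callable, Dict

WorldModel = Callable[[str, str], str]  # (state, action) -> next state

class Reasoner:
    """Shared reasoner that accepts implanted domain world models."""
    def __init__(self):
        self.implants: Dict[str, WorldModel] = {}

    def implant(self, domain: str, model: WorldModel):
        self.implants[domain] = model          # added without retraining

    def step(self, domain: str, state: str, action: str) -> str:
        if domain not in self.implants:
            raise KeyError(f"no world model implanted for {domain!r}")
        return self.implants[domain](state, action)

reasoner = Reasoner()
reasoner.implant("household", lambda s, a: f"{s} | did {a}")
print(reasoner.step("household", "plate on table", "pick up plate"))
```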
- AI in a vat: Fundamental limits of efficient world modelling for agent sandboxing and interpretability [84.52205243353761]
Recent work proposes using world models to generate controlled virtual environments in which AI agents can be tested before deployment. We investigate ways of simplifying world models that remain agnostic to the AI agent under evaluation.
arXiv Detail & Related papers (2025-04-06T20:35:44Z)
- Toward Universal and Interpretable World Models for Open-ended Learning Agents [0.0]
We introduce a generic, compositional and interpretable class of generative world models that supports open-ended learning agents.
This class consists of sparse Bayesian networks capable of approximating a broad range of processes, providing agents with the ability to learn world models in a manner that may be both interpretable and computationally scalable.
arXiv Detail & Related papers (2024-09-27T12:03:15Z)
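A minimal sketch of such a sparse, factored world model: each variable's next value depends only on a small parent set, so the joint transition factorises and can be sampled variable by variable. The variables and toy conditional distributions below are invented for illustration; a learned model would replace them.

```python
import random

# Sparse parent structure: each variable depends on at most two others,
# so inference and learning scale with the local structure, not the
# full joint state space.
parents = {"switch": ["switch"], "light": ["switch"], "door": ["light", "door"]}

def cpd(var, parent_vals):
    # Toy conditional distributions, one per variable.
    if var == "switch":   # flips with probability 0.1, otherwise holds
        return 1 - parent_vals[0] if random.random() < 0.1 else parent_vals[0]
    if var == "light":    # light follows the switch
        return parent_vals[0]
    return 1 if parent_vals[0] == 1 else parent_vals[1]  # door opens when lit

def step(state):
    # Factorised transition: sample each variable from its own parents.
    return {v: cpd(v, [state[p] for p in parents[v]]) for v in parents}

state = {"switch": 1, "light": 0, "door": 0}
for _ in range(3):
    state = step(state)
    print(state)
```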
- Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models [106.35361897941898]
We propose a novel world model for Multi-Agent RL (MARL) that learns decentralized local dynamics for scalability. We also introduce a Perceiver Transformer as an effective solution to enable centralized representation aggregation. Results on the StarCraft Multi-Agent Challenge (SMAC) show that it outperforms strong model-free approaches and existing model-based methods in both sample efficiency and overall performance.
arXiv Detail & Related papers (2024-06-22T12:40:03Z)
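A shape-level sketch of Perceiver-style centralized aggregation, assuming per-agent embeddings produced by decentralized dynamics: a small set of learned latents cross-attends over the agents, so the cost of the centralized summary grows linearly with the number of agents. This illustrates the mechanism only, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, d, n_latents = 8, 16, 4

agent_embeddings = rng.normal(size=(n_agents, d))  # decentralized outputs
latents = rng.normal(size=(n_latents, d))          # learned query set
Wq, Wk, Wv = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))

# Cross-attention: a few latents attend over all agents.
q, k, v = latents @ Wq, agent_embeddings @ Wk, agent_embeddings @ Wv
attn = np.exp(q @ k.T / np.sqrt(d))
attn /= attn.sum(axis=-1, keepdims=True)           # softmax over agents
summary = attn @ v                                 # centralized representation
print(summary.shape)                               # (4, 16)
```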
- Universal In-Context Approximation By Prompting Fully Recurrent Models [86.61942787684272]
We show that RNNs, LSTMs, GRUs, Linear RNNs, and linear gated architectures can serve as universal in-context approximators.
We introduce a programming language called LSRL that compiles to fully recurrent architectures.
arXiv Detail & Related papers (2024-06-03T15:25:13Z)
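A sketch of the kind of primitive the result above builds on: a gated linear recurrence processed token by token, with no attention. The universality claim concerns compiling LSRL programs into stacks of such blocks, which this toy does not attempt; the weights here are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
B = rng.normal(size=(d, d)) * 0.1   # input projection
Wg = rng.normal(size=(d, d))        # gate projection

def step(h, x):
    g = 1.0 / (1.0 + np.exp(-(Wg @ x)))  # input-dependent gate in (0, 1)
    return g * h + B @ x                 # fully recurrent, no attention

h = np.zeros(d)
for x in rng.normal(size=(5, d)):        # a short "prompt", one token at a time
    h = step(h, x)
print(h.round(3))
```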
- S2RMs: Spatially Structured Recurrent Modules [105.0377129434636]
We take a step towards models that can simultaneously exploit both modular and spatiotemporal structures. We find our models to be robust to the number of available views and better able to generalize to novel tasks without additional training.
arXiv Detail & Related papers (2020-07-13T17:44:30Z)
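A rough sketch of the modular idea above, assuming per-module input views: each module keeps its own state, updates from its local view, and then reads from the other modules through a simple attention step. Dimensions and update rules are illustrative only, not the S2RM architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
n_modules, d = 4, 8
W = rng.normal(size=(d, d)) * 0.3

def update(states, views):
    states = np.tanh(states @ W + views)     # independent local updates
    scores = states @ states.T / np.sqrt(d)  # module-to-module attention
    np.fill_diagonal(scores, -np.inf)        # no self-reads
    attn = np.exp(scores)
    attn /= attn.sum(-1, keepdims=True)
    return states + 0.1 * attn @ states      # limited cross-module reads

states = np.zeros((n_modules, d))
for _ in range(3):
    views = rng.normal(size=(n_modules, d))  # per-module input views
    states = update(states, views)
print(states.shape)  # (4, 8)
```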