WorldCompass: Reinforcement Learning for Long-Horizon World Models
- URL: http://arxiv.org/abs/2602.09022v1
- Date: Mon, 09 Feb 2026 18:59:47 GMT
- Title: WorldCompass: Reinforcement Learning for Long-Horizon World Models
- Authors: Zehan Wang, Tengfei Wang, Haiyu Zhang, Xuhui Zuo, Junta Wu, Haoyuan Wang, Wenqiang Sun, Zhenwei Wang, Chenjie Cao, Hengshuang Zhao, Chunchao Guo, Zhou Zhao,
- Abstract summary: This work presents WorldCompass, a novel Reinforcement Learning (RL) framework for interactive video-based world models. We introduce three core innovations tailored to the autoregressive video generation paradigm. We show that WorldCompass significantly improves interaction accuracy and visual fidelity across various scenarios.
- Score: 81.03997753254023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents WorldCompass, a novel Reinforcement Learning (RL) post-training framework for long-horizon, interactive video-based world models, enabling them to explore the world more accurately and consistently based on interaction signals. To effectively "steer" the world model's exploration, we introduce three core innovations tailored to the autoregressive video generation paradigm: 1) Clip-Level Rollout Strategy: We generate and evaluate multiple samples for a single target clip, which significantly boosts rollout efficiency and provides fine-grained reward signals. 2) Complementary Reward Functions: We design reward functions for both interaction-following accuracy and visual quality, which provide direct supervision and effectively suppress reward-hacking behaviors. 3) Efficient RL Algorithm: We employ a negative-aware fine-tuning strategy coupled with various efficiency optimizations to efficiently and effectively enhance model capability. Evaluations on the SoTA open-source world model, WorldPlay, demonstrate that WorldCompass significantly improves interaction accuracy and visual fidelity across various scenarios.
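The three ingredients in the abstract can be illustrated with a minimal sketch: sample several candidate clips at one target position, score each with complementary rewards (interaction-following plus visual quality), and compute baseline-relative weights that keep below-average samples with negative sign, in the spirit of negative-aware fine-tuning. Every function, field name, and weight below is a hypothetical stand-in for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of clip-level rollout scoring. A "clip" is modeled
# as a list of frames, each a dict with illustrative "motion" and
# "quality" fields standing in for real model outputs.

def interaction_reward(clip, action):
    # Stand-in: fraction of frames whose motion matches the action signal.
    return sum(1 for frame in clip if frame["motion"] == action) / len(clip)

def visual_reward(clip):
    # Stand-in: average per-frame visual quality score in [0, 1].
    return sum(frame["quality"] for frame in clip) / len(clip)

def score_rollouts(rollouts, action, w_int=0.7, w_vis=0.3):
    """Combine both rewards for every sampled clip at one target position.
    Scoring on both axes at once is what discourages reward hacking on
    either accuracy or fidelity alone."""
    return [
        w_int * interaction_reward(clip, action) + w_vis * visual_reward(clip)
        for clip in rollouts
    ]

def advantages(scores):
    # Negative-aware weighting: below-baseline samples get a negative
    # advantage instead of being discarded.
    baseline = sum(scores) / len(scores)
    return [s - baseline for s in scores]
```

Because multiple samples share one target clip (rather than full trajectories), the per-clip scores directly yield the fine-grained reward signals the abstract refers to.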
Related papers
- Causal World Modeling for Robot Control [56.31803788587547]
Video world models provide the ability to imagine the near future by understanding the causality between actions and visual dynamics. We introduce LingBot-VA, an autoregressive diffusion framework that learns frame prediction and policy execution simultaneously. We evaluate our model on both simulation benchmarks and real-world scenarios, where it shows significant promise in long-horizon manipulation, data efficiency in post-training, and strong generalizability to novel configurations.
arXiv Detail & Related papers (2026-01-29T17:07:43Z) - Dual-Stream Diffusion for World-Model Augmented Vision-Language-Action Model [62.889356203346985]
We propose DUal-STream diffusion (DUST), a world-model augmented VLA framework that handles the modality conflict. DUST achieves up to 6% gains over a standard VLA baseline and implicit world-modeling methods. On real-world tasks with the Franka Research 3, DUST outperforms baselines in success rate by 13%.
arXiv Detail & Related papers (2025-10-31T16:32:12Z) - Co-Evolving Latent Action World Models [57.48921576959243]
Adapting pre-trained video models into controllable world models via latent actions is a promising step towards creating generalist world models. We propose CoLA-World, which for the first time successfully realizes this synergistic paradigm. This unlocks a co-evolution cycle: the world model acts as a knowledgeable tutor, providing gradients to shape a high-quality LAM.
arXiv Detail & Related papers (2025-10-30T12:28:40Z) - Reinforcement Learning with Inverse Rewards for World Model Post-training [29.19830208692156]
We propose Reinforcement Learning with Inverse Rewards (RLIR) to improve action-following in video world models. RLIR derives verifiable reward signals by recovering input actions from generated videos using an Inverse Dynamics Model.
arXiv Detail & Related papers (2025-09-28T16:27:47Z) - Policy-Driven World Model Adaptation for Robust Offline Model-based Reinforcement Learning [6.189693079685375]
Offline model-based RL (MBRL) explicitly learns a world model from a static dataset. We propose a framework that dynamically adapts the world model alongside the policy. We benchmark our algorithm on twelve noisy D4RL MuJoCo tasks and three Tokamak Control tasks, demonstrating its state-of-the-art performance.
arXiv Detail & Related papers (2025-05-19T20:14:33Z) - Video-Enhanced Offline Reinforcement Learning: A Model-Based Approach [55.76249793590689]
Video-Enhanced Offline RL (VeoRL) is a model-based method that constructs an interactive world model from diverse, unlabeled video data readily available online. VeoRL achieves substantial performance gains across visual control tasks in robotic manipulation, autonomous driving, and open-world video games.
arXiv Detail & Related papers (2025-05-10T00:54:12Z) - AdaWorld: Learning Adaptable World Models with Latent Actions [76.50869178593733]
We propose AdaWorld, an innovative world model learning approach that enables efficient adaptation. The key idea is to incorporate action information during the pretraining of world models. We then develop an autoregressive world model that conditions on these latent actions.
arXiv Detail & Related papers (2025-03-24T17:58:15Z) - HarmonyDream: Task Harmonization Inside World Models [93.07314830304193]
Model-based reinforcement learning (MBRL) holds the promise of sample-efficient learning.
We propose a simple yet effective approach, HarmonyDream, which automatically adjusts loss coefficients to maintain task harmonization.
arXiv Detail & Related papers (2023-09-30T11:38:13Z) - VDFD: Multi-Agent Value Decomposition Framework with Disentangled World Model [10.36125908359289]
We propose a novel model-based multi-agent reinforcement learning approach named Value Decomposition Framework with Disentangled World Model. Our method achieves high sample efficiency and exhibits superior performance compared to other baselines across a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-09-08T22:12:43Z) - GARNet: Global-Aware Multi-View 3D Reconstruction Network and the
Cost-Performance Tradeoff [10.8606881536924]
We propose a global-aware attention-based fusion approach that builds the correlation between each branch and the global to provide a comprehensive foundation for weights inference.
In order to enhance the ability of the network, we introduce a novel loss function to supervise the shape overall.
Experiments on ShapeNet verify that our method outperforms existing SOTA methods.
arXiv Detail & Related papers (2022-11-04T07:45:19Z) - On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning [45.73223325256312]
We investigate whether internal models learned by modern model-based RL algorithms can be leveraged to solve new, distinctly different tasks faster.
We propose Model-Based Cross-Task Transfer (XTRA), a framework for sample-efficient online RL with scalable pretraining and finetuning of learned world models.
arXiv Detail & Related papers (2022-10-19T17:57:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.