Cross-View World Models
- URL: http://arxiv.org/abs/2602.07277v1
- Date: Sat, 07 Feb 2026 00:02:15 GMT
- Title: Cross-View World Models
- Authors: Rishabh Sharma, Gijs Hogervorst, Wayne E. Mackey, David J. Heeger, Stefano Martiniani
- Abstract summary: We introduce Cross-View World Models (XVWM), trained with a cross-view prediction objective. We train on synchronized multi-view gameplay data from Aimlabs. Our results show that multi-view consistency provides a strong learning signal for spatially grounded representations.
- Score: 3.7896239978609434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: World models enable agents to plan by imagining future states, but existing approaches operate from a single viewpoint, typically egocentric, even when other perspectives would make planning easier; navigation, for instance, benefits from a bird's-eye view. We introduce Cross-View World Models (XVWM), trained with a cross-view prediction objective: given a sequence of frames from one viewpoint, predict the future state from the same or a different viewpoint after an action is taken. Enforcing cross-view consistency acts as geometric regularization: because the input and output views may share little or no visual overlap, to predict across viewpoints, the model must learn view-invariant representations of the environment's 3D structure. We train on synchronized multi-view gameplay data from Aimlabs, an aim-training platform providing precisely aligned multi-camera recordings with high-frequency action labels. The resulting model gives agents parallel imagination streams across viewpoints, enabling planning in whichever frame of reference best suits the task while executing from the egocentric view. Our results show that multi-view consistency provides a strong learning signal for spatially grounded representations. Finally, predicting the consequences of one's actions from another viewpoint may offer a foundation for perspective-taking in multi-agent settings.
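The objective is easy to state in code. Below is a minimal, hypothetical sketch of one cross-view training step, not the authors' implementation: the module names (`ViewEncoder`, `CrossViewPredictor`), the latent-regression loss, and the stop-gradient on the target encoding are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """Maps a frame sequence from one viewpoint to a latent state (hypothetical)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, frames):                    # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        z = self.net(frames.flatten(0, 1))        # encode each frame
        return z.view(B, T, -1).mean(dim=1)       # pool over time

class CrossViewPredictor(nn.Module):
    """Predicts the next latent state in a *target* viewpoint, conditioned on
    the action and an embedding of the target view (hypothetical)."""
    def __init__(self, dim=256, action_dim=8, num_views=4):
        super().__init__()
        self.view_emb = nn.Embedding(num_views, dim)
        self.net = nn.Sequential(
            nn.Linear(dim + action_dim + dim, 512), nn.ReLU(),
            nn.Linear(512, dim),
        )

    def forward(self, z_src, action, tgt_view):
        v = self.view_emb(tgt_view)
        return self.net(torch.cat([z_src, action, v], dim=-1))

def cross_view_loss(encoder, predictor, src_frames, action, tgt_view, tgt_future_frames):
    """One step of the cross-view objective: observe view A, take an action,
    predict the resulting state as seen from view B (B may equal A)."""
    z_src = encoder(src_frames)
    z_pred = predictor(z_src, action, tgt_view)
    with torch.no_grad():                         # stop-gradient on the target
        z_tgt = encoder(tgt_future_frames)        # latent is an assumption here
    return nn.functional.mse_loss(z_pred, z_tgt)

# Toy usage with random stand-ins for synchronized multi-view data.
enc, pred = ViewEncoder(), CrossViewPredictor()
src = torch.randn(2, 4, 3, 64, 64)       # 4 frames from the source view
fut = torch.randn(2, 4, 3, 64, 64)       # future frames from the target view
loss = cross_view_loss(enc, pred, src, torch.randn(2, 8), torch.tensor([1, 3]), fut)
loss.backward()
```

Because the target view is an input to the predictor, the same network serves both same-view rollouts and the cross-view "parallel imagination streams" described in the abstract.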
Related papers
- Causal World Modeling for Robot Control [56.31803788587547]
Video world models provide the ability to imagine the near future by understanding the causality between actions and visual dynamics. We introduce LingBot-VA, an autoregressive diffusion framework that learns frame prediction and policy execution simultaneously. We evaluate our model on both simulation benchmarks and real-world scenarios, where it shows significant promise in long-horizon manipulation, data efficiency in post-training, and strong generalizability to novel configurations.
arXiv Detail & Related papers (2026-01-29T17:07:43Z)
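The LingBot-VA summary above pairs frame prediction with policy execution in one model. As a rough, hypothetical sketch of such joint training (not the paper's code), the block below shares a backbone between a diffusion-style denoising head for future frames and a policy head; all names and the loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class JointWorldPolicyModel(nn.Module):
    """Shared backbone with two heads: denoise future frames, predict actions.
    Hypothetical sketch of joint frame-prediction + policy learning."""
    def __init__(self, dim=256, action_dim=7):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3 * 32 * 32, dim), nn.ReLU())
        self.denoise_head = nn.Linear(dim + 1, 3 * 32 * 32)   # +1 for the timestep
        self.policy_head = nn.Linear(dim, action_dim)

    def forward(self, obs, noisy_future, t):
        h = self.backbone(obs.flatten(1))
        eps_pred = self.denoise_head(torch.cat([h, t[:, None]], dim=-1))
        return eps_pred.view_as(noisy_future), self.policy_head(h)

def joint_loss(model, obs, future, expert_action, policy_weight=0.1):
    """Denoising loss on future frames plus a policy loss, trained together."""
    t = torch.rand(obs.shape[0])                       # random timestep in [0, 1)
    noise = torch.randn_like(future)
    noisy = (1 - t).view(-1, 1, 1, 1) * future + t.view(-1, 1, 1, 1) * noise
    eps_pred, action_pred = model(obs, noisy, t)
    frame_loss = nn.functional.mse_loss(eps_pred, noise)       # recover injected noise
    policy_loss = nn.functional.mse_loss(action_pred, expert_action)
    return frame_loss + policy_weight * policy_loss

model = JointWorldPolicyModel()
loss = joint_loss(model, torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32),
                  torch.randn(4, 7))
loss.backward()
```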
- Towards High-Consistency Embodied World Model with Multi-View Trajectory Videos [24.111891848073288]
Embodied world models aim to predict and interact with the physical world through visual observations and actions. MTV-World introduces Multi-view Trajectory-Video control for precise visuomotor prediction. MTV-World achieves precise control execution and accurate physical interaction modeling in complex dual-arm scenarios.
arXiv Detail & Related papers (2025-11-17T02:17:04Z)
- Generalist Forecasting with Frozen Video Models via Latent Diffusion [35.96406989431198]
We show a strong correlation between a vision model's perceptual ability and its generalist forecasting performance over short time horizons. Our results highlight the value of bridging representation learning and generative modeling for temporally grounded video understanding.
arXiv Detail & Related papers (2025-07-18T14:14:19Z)
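The recipe suggested above, forecasting in the latent space of a frozen video model with a diffusion model, can be sketched as follows. This is a hypothetical rendering under stated assumptions: the frozen encoder is stood in by a fixed linear projection, and the denoiser and noising schedule are simplified for brevity.

```python
import torch
import torch.nn as nn

class LatentForecaster(nn.Module):
    """Denoiser predicting the noise added to a future latent, conditioned on
    past latents from a frozen encoder (hypothetical sketch)."""
    def __init__(self, dim=128, context_frames=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * context_frames + dim + 1, 256), nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, past_z, noisy_future_z, t):
        x = torch.cat([past_z.flatten(1), noisy_future_z, t[:, None]], dim=-1)
        return self.net(x)

# A frozen, pretrained video encoder stands in here as a fixed linear projection.
frozen_encoder = nn.Linear(3 * 16 * 16, 128).requires_grad_(False)

def forecasting_loss(forecaster, past_frames, future_frame):
    """Encode frames with the frozen model, then train only the latent denoiser."""
    with torch.no_grad():
        past_z = frozen_encoder(past_frames.flatten(2))      # (B, T, dim)
        future_z = frozen_encoder(future_frame.flatten(1))   # (B, dim)
    t = torch.rand(future_z.shape[0])
    noise = torch.randn_like(future_z)
    noisy = (1 - t)[:, None] * future_z + t[:, None] * noise
    return nn.functional.mse_loss(forecaster(past_z, noisy, t), noise)

loss = forecasting_loss(LatentForecaster(), torch.randn(2, 4, 3, 16, 16),
                        torch.randn(2, 3, 16, 16))
loss.backward()
```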
- Attention-Aware Multi-View Pedestrian Tracking [21.393389135740712]
Recent multi-view pedestrian detection models have highlighted the potential of an early-fusion strategy. This strategy has been shown to improve both detection and tracking performance. We propose a novel model that incorporates attention mechanisms in a multi-view pedestrian tracking scenario.
arXiv Detail & Related papers (2025-04-03T21:53:08Z)
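Early fusion here means combining per-camera evidence before detection; attention can decide, per ground-plane cell, which camera to trust. The sketch below is a hypothetical illustration of that combination, not the paper's architecture, and assumes per-view features have already been projected to a common ground plane.

```python
import torch
import torch.nn as nn

class AttentiveViewFusion(nn.Module):
    """Fuses per-view feature maps (already projected to a shared ground plane)
    with per-location attention over views. Hypothetical sketch."""
    def __init__(self, channels=64):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)            # one logit per view/cell
        self.head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # occupancy head

    def forward(self, view_feats):                  # (B, V, C, H, W)
        B, V, C, H, W = view_feats.shape
        logits = self.score(view_feats.flatten(0, 1)).view(B, V, 1, H, W)
        attn = logits.softmax(dim=1)                # which camera to trust, per cell
        fused = (attn * view_feats).sum(dim=1)      # early fusion across views
        return self.head(fused)                     # pedestrian occupancy logits

fusion = AttentiveViewFusion()
ground_plane_feats = torch.randn(2, 5, 64, 32, 32)  # 5 cameras, projected features
occupancy = fusion(ground_plane_feats)              # (2, 1, 32, 32)
```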
- AR-1-to-3: Single Image to Consistent 3D Object Generation via Next-View Prediction [69.65671384868344]
We propose AR-1-to-3, a novel next-view prediction paradigm based on diffusion models. We show that our method significantly improves the consistency between the generated views and the input views, producing high-fidelity 3D assets.
arXiv Detail & Related papers (2025-03-17T08:39:10Z)
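Next-view prediction casts multi-view generation as an autoregressive loop: each new view conditions on the input and every previously generated view, so consistency propagates. The control flow below is a hypothetical sketch with a stubbed one-step sampler standing in for a real diffusion model; all names are assumptions.

```python
import torch
import torch.nn as nn

class NextViewSampler(nn.Module):
    """Stub for a view-conditioned sampler: given the views generated so far and
    a target camera pose, produce the next view (hypothetical)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Linear(dim + 4, dim)   # view summary + 4-dim pose

    def forward(self, context, pose):
        # A real model would run iterative denoising; one step stands in here.
        return self.net(torch.cat([context.mean(dim=1), pose], dim=-1))

def generate_views(sampler, input_view, poses):
    """Autoregressive next-view prediction: each view conditions on all prior views."""
    views = [input_view]                          # start from the single input image
    for pose in poses:                            # visit target poses in order
        context = torch.stack(views, dim=1)       # (B, n_views_so_far, dim)
        views.append(sampler(context, pose))      # condition on everything so far
    return torch.stack(views, dim=1)

sampler = NextViewSampler()
input_view = torch.randn(2, 64)                   # encoded input image
poses = [torch.randn(2, 4) for _ in range(3)]     # three target camera poses
all_views = generate_views(sampler, input_view, poses)   # (2, 4, 64)
```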
- Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval [85.73149096516543]
We address the choice of viewpoint during sketch creation in Fine-Grained Sketch-Based Image Retrieval (FG-SBIR).
A pilot study highlights the system's struggle when query-sketches differ in viewpoint from target instances.
To reconcile this, we advocate for a view-aware system, seamlessly accommodating both view-agnostic and view-specific tasks.
arXiv Detail & Related papers (2024-07-01T21:20:44Z)
- Cohere3D: Exploiting Temporal Coherence for Unsupervised Representation Learning of Vision-based Autonomous Driving [73.3702076688159]
We propose a novel contrastive learning algorithm, Cohere3D, to learn coherent instance representations in a long-term input sequence.
We evaluate our algorithm by finetuning the pretrained model on various downstream perception, prediction, and planning tasks.
arXiv Detail & Related papers (2024-02-23T19:43:01Z)
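Temporal coherence lends itself to an InfoNCE-style objective in which the same instance at two timesteps forms the positive pair and other instances serve as negatives. The block below is a hypothetical sketch of such a coherence loss, not the Cohere3D implementation.

```python
import torch
import torch.nn.functional as F

def temporal_coherence_loss(feats_t, feats_t1, temperature=0.1):
    """InfoNCE over instances: feats_t[i] and feats_t1[i] describe the same
    instance at consecutive timesteps (positive pair); all other rows are
    negatives. Hypothetical sketch of a coherence objective."""
    z_t = F.normalize(feats_t, dim=-1)               # (N, D) instance embeddings at t
    z_t1 = F.normalize(feats_t1, dim=-1)             # (N, D) same instances at t+1
    logits = z_t @ z_t1.T / temperature              # similarity of every pair
    targets = torch.arange(z_t.shape[0])             # matching index = positive
    return F.cross_entropy(logits, targets)

# Toy usage: 8 tracked instances with 128-d features at two timesteps.
feats_t = torch.randn(8, 128, requires_grad=True)
feats_t1 = feats_t.detach() + 0.1 * torch.randn(8, 128)  # slightly moved instances
loss = temporal_coherence_loss(feats_t, feats_t1)
loss.backward()
```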
"Many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments against effective learning from range view projections.
We present RangeFormer, a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing.
We show that, for the first time, a range view method is able to surpass point-, voxel-, and multi-view-fusion counterparts on competitive LiDAR semantic and panoptic segmentation benchmarks.
arXiv Detail & Related papers (2023-03-09T16:13:27Z)
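A range view is a spherical projection of the LiDAR cloud onto a 2D image, and the quoted "many-to-one" issue arises because several points can fall into one pixel. The minimal projection below (a hypothetical sketch, not RangeFormer's code) makes this concrete by letting the nearest point win each pixel; the FOV defaults are typical 64-beam values, assumed here.

```python
import numpy as np

def to_range_view(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Spherically project LiDAR points (N, 3) to an (H, W) range image.
    Several points can map to one pixel ("many-to-one"); the nearest wins."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                           # azimuth angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # elevation angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((1 - (yaw + np.pi) / (2 * np.pi)) * W).astype(int) % W
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * H).clip(0, H - 1).astype(int)
    image = np.full((H, W), -1.0)                    # -1 marks empty pixels
    order = np.argsort(-depth)                       # write far points first ...
    image[v[order], u[order]] = depth[order]         # ... so near points overwrite them
    return image

cloud = np.random.randn(10000, 3) * np.array([20, 20, 2])  # toy point cloud
range_img = to_range_view(cloud)                            # (64, 1024) depth image
```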
- BEVerse: Unified Perception and Prediction in Birds-Eye-View for Vision-Centric Autonomous Driving [92.05963633802979]
We present BEVerse, a unified framework for 3D perception and prediction based on multi-camera systems.
We show that the multi-task BEVerse outperforms single-task methods on 3D object detection, semantic map construction, and motion prediction.
arXiv Detail & Related papers (2022-05-19T17:55:35Z)
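Unified perception and prediction in bird's-eye view usually means one shared BEV feature map feeding several task heads. The sketch below illustrates that multi-task layout under assumed shapes and head definitions; it is not BEVerse itself.

```python
import torch
import torch.nn as nn

class MultiTaskBEV(nn.Module):
    """One shared BEV feature map, three task heads: 3D detection, semantic map,
    motion prediction. Hypothetical sketch of a unified multi-task layout."""
    def __init__(self, channels=64, num_classes=10, future_steps=4):
        super().__init__()
        # Stand-in for the multi-camera -> BEV backbone.
        self.bev_backbone = nn.Conv2d(channels, channels, 3, padding=1)
        self.det_head = nn.Conv2d(channels, num_classes + 7, 1)      # class + box params
        self.map_head = nn.Conv2d(channels, 4, 1)                    # map semantic classes
        self.motion_head = nn.Conv2d(channels, 2 * future_steps, 1)  # future (dx, dy)

    def forward(self, bev_feats):            # (B, C, H, W) fused camera features
        shared = self.bev_backbone(bev_feats).relu()
        return {
            "detection": self.det_head(shared),
            "semantic_map": self.map_head(shared),
            "motion": self.motion_head(shared),
        }

model = MultiTaskBEV()
outputs = model(torch.randn(2, 64, 128, 128))
print({k: tuple(v.shape) for k, v in outputs.items()})
```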
- KINet: Unsupervised Forward Models for Robotic Pushing Manipulation [8.572983995175909]
We introduce KINet, an unsupervised framework to reason about object interactions based on a keypoint representation.
Our model learns to associate objects with keypoint coordinates and discovers a graph representation of the system.
By learning to perform physical reasoning in the keypoint space, our model automatically generalizes to scenarios with a different number of objects.
arXiv Detail & Related papers (2022-02-18T03:32:08Z)
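A forward model in keypoint space maps current keypoint coordinates plus an action to the next coordinates, and sharing parameters across a fully connected keypoint graph is what lets it generalize over the number of objects. The block below is a hypothetical sketch of that idea, not KINet's code.

```python
import torch
import torch.nn as nn

class KeypointForwardModel(nn.Module):
    """Predicts next keypoint positions from current positions and an action via
    message passing on a fully connected keypoint graph (hypothetical sketch).
    Shared edge/node parameters let the same weights handle any keypoint count."""
    def __init__(self, action_dim=2, hidden=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(nn.Linear(2 + hidden + action_dim, hidden),
                                      nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, keypoints, action):                   # keypoints: (B, K, 2)
        B, K, _ = keypoints.shape
        src = keypoints[:, :, None, :].expand(B, K, K, 2)   # sender coordinates
        dst = keypoints[:, None, :, :].expand(B, K, K, 2)   # receiver coordinates
        messages = self.edge_mlp(torch.cat([src, dst], dim=-1)).sum(dim=1)  # (B, K, h)
        act = action[:, None, :].expand(B, K, -1)           # broadcast action to nodes
        delta = self.node_mlp(torch.cat([keypoints, messages, act], dim=-1))
        return keypoints + delta                            # predicted next positions

model = KeypointForwardModel()
kps = torch.randn(2, 6, 2)                # 6 keypoints; 9 would reuse the same weights
next_kps = model(kps, torch.randn(2, 2))  # planar push action (dx, dy)
```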
- Weak Multi-View Supervision for Surface Mapping Estimation [0.9367260794056769]
We propose a weakly-supervised multi-view learning approach to learn category-specific surface mapping without dense annotations.
We learn the underlying surface geometry of common categories, such as human faces, cars, and airplanes, given instances from those categories.
arXiv Detail & Related papers (2021-05-04T09:46:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.