TARDIS STRIDE: A Spatio-Temporal Road Image Dataset and World Model for Autonomy
- URL: http://arxiv.org/abs/2506.11302v3
- Date: Thu, 19 Jun 2025 15:10:29 GMT
- Title: TARDIS STRIDE: A Spatio-Temporal Road Image Dataset and World Model for Autonomy
- Authors: Héctor Carrión, Yutong Bai, Víctor A. Hernández Castro, Kishan Panaganti, Ayush Zenith, Matthew Trang, Tony Zhang, Pietro Perona, Jitendra Malik
- Abstract summary: We show how to permute 360-degree panoramic imagery into rich, interconnected observation, state, and action nodes. We benchmark this dataset via TARDIS, a transformer-based generative world model. We demonstrate robust performance across a range of agentic tasks such as controllable image synthesis, instruction following, autonomous self-control, and state-of-the-art georeferencing.
- Score: 44.85881816317044
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: World models aim to simulate environments and enable effective agent behavior. However, modeling real-world environments presents unique challenges as they dynamically change across both space and, crucially, time. To capture these composed dynamics, we introduce a Spatio-Temporal Road Image Dataset for Exploration (STRIDE) permuting 360-degree panoramic imagery into rich interconnected observation, state and action nodes. Leveraging this structure, we can simultaneously model the relationship between egocentric views, positional coordinates, and movement commands across both space and time. We benchmark this dataset via TARDIS, a transformer-based generative world model that integrates spatial and temporal dynamics through a unified autoregressive framework trained on STRIDE. We demonstrate robust performance across a range of agentic tasks such as controllable photorealistic image synthesis, instruction following, autonomous self-control, and state-of-the-art georeferencing. These results suggest a promising direction towards sophisticated generalist agents--capable of understanding and manipulating the spatial and temporal aspects of their material environments--with enhanced embodied reasoning capabilities. Training code, datasets, and model checkpoints are made available at https://huggingface.co/datasets/Tera-AI/STRIDE.
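To make the dataset structure concrete, the following minimal sketch shows how 360-degree observations, positional states, and movement actions could be linked as nodes and flattened into a single autoregressive token stream, mirroring the unified framework the abstract describes. This is illustrative only, not the authors' released code; all class names, fields, and token formats are assumptions.

```python
# Hypothetical sketch of a STRIDE-style spatio-temporal graph.
# Names (StateNode, ObservationNode, ActionNode, flatten_episode)
# are illustrative assumptions; the released dataset may differ.
from dataclasses import dataclass
from typing import List

@dataclass
class StateNode:
    """Positional coordinates and capture time of one panorama."""
    lat: float
    lon: float
    timestamp: float  # unix seconds

@dataclass
class ObservationNode:
    """A 360-degree panoramic image tied to a state."""
    image_path: str
    state: StateNode

@dataclass
class ActionNode:
    """Movement command connecting consecutive states."""
    heading_deg: float  # direction of travel
    distance_m: float   # step length

def flatten_episode(obs: List[ObservationNode],
                    acts: List[ActionNode]) -> List[str]:
    """Interleave observation, state, and action tokens so a single
    transformer can model all three with one autoregressive objective,
    as the abstract's unified framework suggests."""
    tokens: List[str] = []
    for i, o in enumerate(obs):
        tokens.append(f"<obs:{o.image_path}>")
        tokens.append(f"<state:{o.state.lat:.6f},{o.state.lon:.6f}>")
        if i < len(acts):  # the last observation has no outgoing action
            tokens.append(f"<act:{acts[i].heading_deg:.1f},"
                          f"{acts[i].distance_m:.1f}>")
    return tokens
```

Under this framing, the agentic tasks listed above would correspond to conditioning on different parts of the sequence: predicting the next observation token is controllable image synthesis, predicting a state token from an observation is georeferencing, and predicting an action token is autonomous self-control.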
Related papers
- ArmGS: Composite Gaussian Appearance Refinement for Modeling Dynamic Urban Environments [22.371417505012566]
This work focuses on modeling dynamic urban environments for autonomous driving simulation. We propose a new approach named ArmGS that exploits composite driving Gaussian splatting with multi-granularity appearance refinement. This not only models global scene appearance variations between frames and camera viewpoints, but also models local fine-grained photorealistic changes of background and objects.
arXiv Detail & Related papers (2025-07-05T03:54:40Z) - Dyn-O: Building Structured World Models with Object-Centric Representations [42.65409148846005]
We introduce Dyn-O, an enhanced structured world model built upon object-centric representations. Compared to prior work in object-centric representations, Dyn-O improves in both learning representations and modeling dynamics. We find that our method can learn object-centric world models directly from pixel observations, outperforming DreamerV3 in rollout prediction accuracy.
arXiv Detail & Related papers (2025-07-04T05:06:15Z) - GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control [50.67481583744243]
We introduce GeoDrive, which explicitly integrates robust 3D geometry conditions into driving world models. We propose a dynamic editing module during training to enhance the renderings by editing the positions of the vehicles. Our method significantly outperforms existing models in both action accuracy and 3D spatial awareness.
arXiv Detail & Related papers (2025-05-28T14:46:51Z) - TiMo: Spatiotemporal Foundation Model for Satellite Image Time Series [39.22426645737932]
TiMo is a novel hierarchical vision transformer foundation model tailored for SITS analysis. At its core, we introduce a spatiotemporal attention mechanism that dynamically captures multiscale patterns across both time and space. Extensive experiments across multiple spatiotemporal tasks, including deforestation monitoring, demonstrate TiMo's superiority over state-of-the-art methods.
arXiv Detail & Related papers (2025-05-13T16:35:11Z) - LaM-SLidE: Latent Space Modeling of Spatial Dynamical Systems via Linked Entities [11.76748620770499]
We present LaM-SLidE (Latent Space Modeling of Spatial Dynamical Systems via Linked Entities). LaM-SLidE bridges the gap between (1) keeping the traceability of individual entities in a latent system representation and (2) leveraging the efficiency and scalability of recent advances in image and video generation. We show that LaM-SLidE performs favorably in terms of speed, accuracy, and generalizability.
arXiv Detail & Related papers (2025-02-17T18:49:13Z) - TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
arXiv Detail & Related papers (2023-12-08T16:40:38Z) - Spatio-temporal Tendency Reasoning for Human Body Pose and Shape Estimation from Videos [10.50306784245168]
We present a spatio-temporal tendency reasoning (STR) network for recovering human body pose and shape from videos.
Our STR aims to learn accurate and spatially coherent motion sequences in an unconstrained environment.
Our STR remains competitive with the state-of-the-art on three datasets.
arXiv Detail & Related papers (2022-10-07T16:09:07Z) - A Spatio-Temporal Multilayer Perceptron for Gesture Recognition [70.34489104710366]
We propose a multilayer state-weighted perceptron for gesture recognition in the context of autonomous vehicles.
An evaluation on the TCG and Drive&Act datasets is provided to showcase the promising performance of our approach.
We deploy our model to our autonomous vehicle to show its real-time capability and stable execution.
arXiv Detail & Related papers (2022-04-25T08:42:47Z) - Tracking and Planning with Spatial World Models [17.698319441265223]
We introduce a method for real-time navigation and tracking with differentiably rendered world models.
We achieve up to 92% navigation success rate at a frequency of 15 Hz using only image and depth observations.
arXiv Detail & Related papers (2022-01-25T14:16:46Z) - TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z) - CLOUD: Contrastive Learning of Unsupervised Dynamics [19.091886595825947]
We propose to learn forward and inverse dynamics in a fully unsupervised manner via contrastive estimation.
We demonstrate the efficacy of our approach across a variety of tasks including goal-directed planning and imitation from observations (a minimal illustrative sketch of contrastive dynamics learning follows this list).
arXiv Detail & Related papers (2020-10-23T15:42:57Z) - A Spatial-Temporal Attentive Network with Spatial Continuity for Trajectory Prediction [74.00750936752418]
We propose a novel model named the spatial-temporal attentive network with spatial continuity (STAN-SC).
First, a spatial-temporal attention mechanism is presented to explore the most useful and important information.
Second, we construct a joint feature sequence based on the sequence and instant state information so that the generated trajectories maintain spatial continuity.
arXiv Detail & Related papers (2020-03-13T04:35:50Z)
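For the contrastive dynamics idea referenced in the CLOUD entry above, here is a minimal PyTorch sketch assuming an MLP encoder, an InfoNCE-style forward loss, and a regression-based inverse model. All module names, sizes, and loss choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of contrastive forward/inverse dynamics learning
# in the spirit of CLOUD; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveDynamics(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        # Forward model: predict the next latent from (latent, action).
        self.forward_model = nn.Sequential(
            nn.Linear(latent_dim + act_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim))
        # Inverse model: recover the action from consecutive latents.
        self.inverse_model = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim))

    def loss(self, obs, act, next_obs):
        z, z_next = self.encoder(obs), self.encoder(next_obs)
        z_pred = self.forward_model(torch.cat([z, act], dim=-1))
        # Contrastive estimation: each predicted latent should match its
        # own next state; other batch items serve as negatives.
        logits = F.normalize(z_pred, dim=-1) @ F.normalize(z_next, dim=-1).T
        targets = torch.arange(len(obs))
        forward_loss = F.cross_entropy(logits / 0.1, targets)
        # Inverse dynamics via simple regression (a simplification here).
        act_pred = self.inverse_model(torch.cat([z, z_next], dim=-1))
        inverse_loss = F.mse_loss(act_pred, act)
        return forward_loss + inverse_loss
```

Both objectives need no labels: the forward loss only contrasts predicted against observed next states within a batch, which is what makes the setup fully unsupervised.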