VehicleWorld: A Highly Integrated Multi-Device Environment for Intelligent Vehicle Interaction
- URL: http://arxiv.org/abs/2509.06736v1
- Date: Mon, 08 Sep 2025 14:28:25 GMT
- Title: VehicleWorld: A Highly Integrated Multi-Device Environment for Intelligent Vehicle Interaction
- Authors: Jie Yang, Jiajun Chen, Zhangyue Yin, Shuo Chen, Yuxin Wang, Yiran Guo, Yuan Li, Yining Zheng, Xuanjing Huang, Xipeng Qiu
- Abstract summary: VehicleWorld is the first comprehensive environment for the automotive domain, featuring 30 modules, 250 APIs, and 680 properties. We propose State-based Function Call (SFC), a novel approach that maintains explicit system state awareness and implements direct state transitions to achieve target conditions.
- Score: 78.34534983766973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent vehicle cockpits present unique challenges for API Agents, requiring coordination across tightly-coupled subsystems that exceed typical task environments' complexity. Traditional Function Calling (FC) approaches operate statelessly, requiring multiple exploratory calls to build environmental awareness before execution, leading to inefficiency and limited error recovery. We introduce VehicleWorld, the first comprehensive environment for the automotive domain, featuring 30 modules, 250 APIs, and 680 properties with fully executable implementations that provide real-time state information during agent execution. This environment enables precise evaluation of vehicle agent behaviors across diverse, challenging scenarios. Through systematic analysis, we discovered that direct state prediction outperforms function calling for environmental control. Building on this insight, we propose State-based Function Call (SFC), a novel approach that maintains explicit system state awareness and implements direct state transitions to achieve target conditions. Experimental results demonstrate that SFC significantly outperforms traditional FC approaches, achieving superior execution accuracy and reduced latency. We have made all implementation code publicly available on GitHub: https://github.com/OpenMOSS/VehicleWorld.
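The core idea behind SFC, as the abstract describes it, is to compare an explicit representation of the current system state against a target condition and execute only the transitions needed to close the gap, rather than issuing exploratory stateless calls. A minimal sketch of that pattern, assuming purely illustrative class and property names (this is not the actual VehicleWorld API):

```python
class VehicleState:
    """Explicit, inspectable system state (a tiny illustrative subset)."""

    def __init__(self):
        self.properties = {
            "ac.power": False,
            "ac.temperature": 26,
            "window.driver": "closed",
        }

    def diff(self, target):
        """Return only the properties that must change to reach `target`."""
        return {k: v for k, v in target.items() if self.properties.get(k) != v}

    def apply(self, changes):
        self.properties.update(changes)


def state_based_call(state, target):
    """State-based control: diff current state against the target
    condition and apply the minimal set of transitions, with no
    exploratory read calls needed."""
    changes = state.diff(target)
    state.apply(changes)
    return changes  # the transitions actually executed


state = VehicleState()
executed = state_based_call(
    state,
    {"ac.power": True, "ac.temperature": 22, "window.driver": "closed"},
)
print(executed)  # only the two properties that differed from the target
```

A stateless FC agent would instead have to call getters first to discover that the window is already closed; here that information is already present in the maintained state, which is the efficiency argument the abstract makes.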
Related papers
- Computer-Using World Model [58.59112582915026]
We introduce the Computer-Using World Model (CUWM), a world model for desktop software that predicts the next user interface (UI) state.
CUWM first predicts a textual description of agent-relevant state changes, and then realizes these changes visually to synthesize the next screenshot.
We evaluate CUWM via test-time action search, where a frozen agent uses the world model to simulate and compare candidate actions before execution.
arXiv Detail & Related papers (2026-02-19T13:48:29Z) - A Unified Experimental Architecture for Informative Path Planning: from Simulation to Deployment with GuadalPlanner [69.43049144653882]
This paper introduces a unified architecture that decouples high-level decision-making from vehicle-specific control.
The proposed architecture is realized through GuadalPlanner, which defines standardized interfaces between planning, sensing, and vehicle execution.
arXiv Detail & Related papers (2026-02-11T10:02:31Z) - SGDrive: Scene-to-Goal Hierarchical World Cognition for Autonomous Driving [52.02379432801349]
We propose SGDrive, a novel framework that structures the VLM's representation learning around driving-specific knowledge hierarchies.
Built upon a pre-trained VLM backbone, SGDrive decomposes driving understanding into a scene-agent-goal hierarchy that mirrors human driving cognition.
arXiv Detail & Related papers (2026-01-09T08:55:42Z) - Unlocking Smarter Device Control: Foresighted Planning with a World Model-Driven Code Execution Approach [82.27842884709378]
We propose a framework that prioritizes natural language understanding and structured reasoning to enhance the agent's global understanding of the environment.
Our method outperforms previous approaches, particularly achieving a 44.4% relative improvement in task success rate.
arXiv Detail & Related papers (2025-05-22T09:08:47Z) - DriveTransformer: Unified Transformer for Scalable End-to-End Autonomous Driving [62.62464518137153]
DriveTransformer is a simplified E2E-AD framework designed for ease of scaling up.
It is composed of three unified operations: task self-attention, sensor cross-attention, and temporal cross-attention.
It achieves state-of-the-art performance in both the simulated closed-loop benchmark Bench2Drive and the real-world open-loop benchmark nuScenes with high FPS.
arXiv Detail & Related papers (2025-03-07T11:41:18Z) - Automating a Complete Software Test Process Using LLMs: An Automotive Case Study [10.245216059506236]
Vehicle API testing verifies whether the interactions between a vehicle's internal systems and external applications meet expectations.
This paper presents a system designed for the automated testing of in-vehicle APIs.
arXiv Detail & Related papers (2025-02-06T12:10:01Z) - EI-Drive: A Platform for Cooperative Perception with Realistic Communication Models [16.023748778830562]
EI-Drive is an edge-AI based autonomous driving simulation platform.
It integrates advanced cooperative perception with more realistic communication models.
Experiments using EI-Drive demonstrate significant improvements in vehicle safety and performance.
arXiv Detail & Related papers (2024-12-13T01:37:44Z) - You Only Look at Screens: Multimodal Chain-of-Action Agents [37.118034745972956]
Auto-GUI is a multimodal solution that directly interacts with the interface.
We propose a chain-of-action technique to help the agent decide what action to execute.
We evaluate our approach on a new device-control benchmark, AITW, with 30K unique instructions.
arXiv Detail & Related papers (2023-09-20T16:12:32Z) - Context-Aware Timewise VAEs for Real-Time Vehicle Trajectory Prediction [4.640835690336652]
We present ContextVAE, a context-aware approach for multi-modal vehicle trajectory prediction.
Our approach takes into account both the social features exhibited by agents on the scene and the physical environment constraints.
In all tested datasets, ContextVAE models are fast to train and provide high-quality multi-modal predictions in real-time.
arXiv Detail & Related papers (2023-02-21T18:42:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.