ADriver-I: A General World Model for Autonomous Driving
- URL: http://arxiv.org/abs/2311.13549v1
- Date: Wed, 22 Nov 2023 17:44:29 GMT
- Title: ADriver-I: A General World Model for Autonomous Driving
- Authors: Fan Jia, Weixin Mao, Yingfei Liu, Yucheng Zhao, Yuqing Wen, Chi Zhang,
Xiangyu Zhang, Tiancai Wang
- Abstract summary: We introduce the concept of the interleaved vision-action pair, which unifies the format of visual features and control signals.
Based on these vision-action pairs, we construct a general world model for autonomous driving, built on an MLLM and a diffusion model and termed ADriver-I.
It takes the vision-action pairs as inputs and autoregressively predicts the control signal of the current frame.
- Score: 23.22507419707926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Typically, autonomous driving adopts a modular design, which divides the full
stack into perception, prediction, planning and control parts. Though
interpretable, such a modular design tends to introduce a substantial amount of
redundancy. Recently, multimodal large language models (MLLMs) and diffusion
techniques have demonstrated superior comprehension and generation abilities.
In this paper, we first introduce the concept of the
interleaved vision-action pair, which unifies the format of visual features and
control signals. Based on these vision-action pairs, we construct a general
world model for autonomous driving, built on an MLLM and a diffusion model and termed
ADriver-I. It takes the vision-action pairs as inputs and autoregressively
predicts the control signal of the current frame. The generated control signals,
together with the historical vision-action pairs, then serve as conditions to
predict the future frames. Given the predicted next frame, ADriver-I performs
further control-signal prediction. Since this process can be repeated
indefinitely, ADriver-I achieves autonomous driving in a world created by itself.
Extensive experiments are conducted on nuScenes and our large-scale private
datasets. ADriver-I shows impressive performance compared to several
constructed baselines. We hope our ADriver-I can provide some new insights for
future autonomous driving and embodied intelligence.
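To make the interleaved prediction loop concrete, below is a minimal Python sketch of the alternation described in the abstract: the MLLM predicts a control signal from the vision-action history, the diffusion model generates the next frame conditioned on that history plus the new control, and the loop repeats. The object and method names (`mllm.predict_control`, `video_diffusion.predict_next_frame`) are hypothetical placeholders for illustration, not the paper's actual interface.

```python
# Hypothetical sketch of ADriver-I's interleaved vision-action rollout.
# `mllm` and `video_diffusion` stand in for the paper's MLLM and diffusion
# model; their interfaces are assumptions made for illustration only.

def rollout(mllm, video_diffusion, init_frames, init_controls, num_steps):
    """Autoregressively alternate control prediction and frame generation.

    `history` is a list of (frame, control) vision-action pairs.
    """
    history = list(zip(init_frames, init_controls))
    frame = init_frames[-1]

    for _ in range(num_steps):
        # 1. The MLLM predicts the control signal for the current frame,
        #    conditioned on the interleaved vision-action history.
        control = mllm.predict_control(history, frame)

        # 2. The diffusion model generates the next frame, conditioned on
        #    the historical vision-action pairs and the new control signal.
        next_frame = video_diffusion.predict_next_frame(history, control)

        # 3. Append the new vision-action pair and continue the loop, so the
        #    model keeps driving in the world it generates itself.
        history.append((frame, control))
        frame = next_frame

    return history
```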
Related papers
- BEVWorld: A Multimodal World Model for Autonomous Driving via Unified BEV Latent Space [57.68134574076005]
We present BEVWorld, a novel approach that tokenizes multimodal sensor inputs into a unified and compact Bird's Eye View latent space for environment modeling.
Experiments demonstrate the effectiveness of BEVWorld in autonomous driving tasks, showcasing its capability in generating future scenes and benefiting downstream tasks such as perception and motion prediction.
arXiv Detail & Related papers (2024-07-08T07:26:08Z) - GenAD: Generalized Predictive Model for Autonomous Driving [75.39517472462089]
We introduce the first large-scale video prediction model in the autonomous driving discipline.
Our model, dubbed GenAD, handles the challenging dynamics in driving scenes with novel temporal reasoning blocks.
It can be adapted into an action-conditioned prediction model or a motion planner, holding great potential for real-world driving applications.
arXiv Detail & Related papers (2024-03-14T17:58:33Z) - GenAD: Generative End-to-End Autonomous Driving [13.332272121018285]
GenAD is a generative framework that casts autonomous driving into a generative modeling problem.
We propose an instance-centric scene tokenizer that first transforms the surrounding scenes into map-aware instance tokens.
We then employ a variational autoencoder to learn the future trajectory distribution in a structural latent space for trajectory prior modeling.
arXiv Detail & Related papers (2024-02-18T08:21:05Z) - DME-Driver: Integrating Human Decision Logic and 3D Scene Perception in
Autonomous Driving [65.04871316921327]
This paper introduces DME-Driver, a new autonomous driving system that enhances both the performance and the reliability of autonomous driving.
DME-Driver utilizes a powerful vision language model as the decision-maker and a planning-oriented perception model as the control signal generator.
By leveraging this dataset, our model achieves high-precision planning accuracy through a logical thinking process.
arXiv Detail & Related papers (2024-01-08T03:06:02Z) - DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral
Planning States for Autonomous Driving [69.82743399946371]
DriveMLM is a framework that can perform closed-loop autonomous driving in realistic simulators.
We employ a multi-modal LLM (MLLM) to model the behavior planning module of a modular AD system.
This model can be plugged into existing AD systems such as Apollo for closed-loop driving.
arXiv Detail & Related papers (2023-12-14T18:59:05Z) - Driving into the Future: Multiview Visual Forecasting and Planning with
World Model for Autonomous Driving [56.381918362410175]
Drive-WM is the first driving world model compatible with existing end-to-end planning models.
Our model generates high-fidelity multiview videos in driving scenes.
arXiv Detail & Related papers (2023-11-29T18:59:47Z) - Drive Anywhere: Generalizable End-to-end Autonomous Driving with
Multi-modal Foundation Models [114.69732301904419]
We present an approach to end-to-end, open-set (any environment/scene) autonomous driving that is capable of producing driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z)