Explanation for Trajectory Planning using Multi-modal Large Language Model for Autonomous Driving
- URL: http://arxiv.org/abs/2411.09971v1
- Date: Fri, 15 Nov 2024 06:05:33 GMT
- Title: Explanation for Trajectory Planning using Multi-modal Large Language Model for Autonomous Driving
- Authors: Shota Yamazaki, Chenyu Zhang, Takuya Nanri, Akio Shigekane, Siyuan Wang, Jo Nishiyama, Tao Chu, Kohei Yokosawa
- Abstract summary: We propose a reasoning model that takes future planning trajectories of the ego vehicle as inputs, together with a newly collected dataset, to address this limitation.
- Score: 6.873701251194593
- Abstract: End-to-end style autonomous driving models have been developed recently. These models lack interpretability in the decision-making process from perception to control of the ego vehicle, resulting in anxiety for passengers. To alleviate this, it is effective to build a model that outputs captions describing the future behaviors of the ego vehicle and the reasons for them. However, existing approaches generate reasoning text that inadequately reflects the future plans of the ego vehicle, because they train models to output captions using momentary control signals as inputs. In this study, we propose a reasoning model that takes future planning trajectories of the ego vehicle as inputs to address this limitation, together with a newly collected dataset.
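To make the proposed input concrete, here is a minimal sketch of how a planner's future trajectory could be serialized into a prompt for a multi-modal LLM, so the generated caption is conditioned on the plan rather than on momentary control signals. The waypoint format, horizon, and prompt wording are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): serialize a planned ego
# trajectory into text for a multi-modal LLM, so the explanation is
# conditioned on the future plan. Waypoint format and prompt wording
# are assumptions for illustration.
from typing import List, Tuple

def build_explanation_prompt(
    waypoints: List[Tuple[float, float]],  # future (x, y) in ego frame, metres
    horizon_s: float = 3.0,
) -> str:
    """Render the planner's future trajectory as text for the reasoning model."""
    points = ", ".join(f"({x:.1f}, {y:.1f})" for x, y in waypoints)
    return (
        f"The ego vehicle's planned trajectory for the next {horizon_s:.0f} s "
        f"is: [{points}].\n"
        "Describe the future behavior of the ego vehicle and explain the reason, "
        "based on this plan and the camera image."
    )

if __name__ == "__main__":
    # A gentle left turn: lateral offset grows along the planned path.
    plan = [(0.0, 2.0), (0.5, 4.1), (1.4, 6.0), (2.8, 7.6)]
    print(build_explanation_prompt(plan))
```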
Related papers
- DiFSD: Ego-Centric Fully Sparse Paradigm with Uncertainty Denoising and Iterative Refinement for Efficient End-to-End Self-Driving [55.53171248839489]
We propose an ego-centric fully sparse paradigm, named DiFSD, for end-to-end self-driving.
Specifically, DiFSD mainly consists of sparse perception, hierarchical interaction, and an iterative motion planner.
Experiments conducted on nuScenes and Bench2Drive datasets demonstrate the superior planning performance and great efficiency of DiFSD.
arXiv Detail & Related papers (2024-09-15T15:55:24Z)
- GenFollower: Enhancing Car-Following Prediction with Large Language Models [11.847589952558566]
We propose GenFollower, a novel zero-shot prompting approach that leverages large language models (LLMs) to address these challenges.
We reframe car-following behavior as a language modeling problem and integrate heterogeneous inputs into structured prompts for LLMs.
Experiments on open datasets demonstrate GenFollower's superior performance and its ability to provide interpretable insights.
arXiv Detail & Related papers (2024-07-08T04:54:42Z)
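As a rough illustration of GenFollower's zero-shot prompting idea, the sketch below packs a heterogeneous car-following state into a structured prompt for an LLM. The field names, units, and template are assumptions for exposition, not the paper's exact prompt.

```python
# Hedged sketch of zero-shot prompting for car-following, in the spirit
# of GenFollower: heterogeneous inputs become a structured prompt and
# the LLM is asked for the next-step behavior. Fields and wording are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FollowingState:
    ego_speed_mps: float    # current speed of the following vehicle
    lead_speed_mps: float   # current speed of the lead vehicle
    gap_m: float            # bumper-to-bumper spacing

def car_following_prompt(state: FollowingState, dt_s: float = 1.0) -> str:
    return (
        "You are a driver following another vehicle.\n"
        f"Your speed: {state.ego_speed_mps:.1f} m/s. "
        f"Lead vehicle speed: {state.lead_speed_mps:.1f} m/s. "
        f"Gap: {state.gap_m:.1f} m.\n"
        f"Predict your speed {dt_s:.0f} s from now and briefly explain why."
    )

if __name__ == "__main__":
    print(car_following_prompt(FollowingState(12.0, 10.5, 18.0)))
```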
- Tractable Joint Prediction and Planning over Discrete Behavior Modes for Urban Driving [15.671811785579118]
We show that we can parameterize autoregressive closed-loop models without retraining.
We propose fully reactive closed-loop planning over discrete latent modes.
Our approach also outperforms the previous state-of-the-art in CARLA on challenging dense traffic scenarios.
arXiv Detail & Related papers (2024-03-12T01:00:52Z)
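The following toy sketch illustrates planning over a small set of discrete behavior modes, in the spirit of the entry above: each candidate mode is rolled out and the best-scoring one is kept. The mode set, rollout dynamics, and cost are invented for clarity and are far simpler than the paper's learned latent modes.

```python
# Toy planner over discrete behavior modes: roll out each candidate and
# pick the one whose rollout best tracks the desired speed. The modes,
# dynamics, and cost are assumptions for exposition only.
from typing import List

MODES = {"brake": -2.0, "keep": 0.0, "accelerate": 1.5}  # toy accel commands (m/s^2)

def rollout(v0: float, accel: float, steps: int = 5, dt: float = 0.5) -> List[float]:
    """Roll ego speed forward under one discrete mode."""
    speeds, v = [], v0
    for _ in range(steps):
        v = max(0.0, v + accel * dt)
        speeds.append(v)
    return speeds

def plan(v0: float, v_desired: float) -> str:
    """Select the discrete mode with the lowest speed-tracking cost."""
    def cost(mode: str) -> float:
        return sum((v - v_desired) ** 2 for v in rollout(v0, MODES[mode]))
    return min(MODES, key=cost)

if __name__ == "__main__":
    print(plan(v0=8.0, v_desired=12.0))  # -> "accelerate"
```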
- GenAD: Generative End-to-End Autonomous Driving [13.332272121018285]
GenAD is a generative framework that casts autonomous driving into a generative modeling problem.
We propose an instance-centric scene tokenizer that first transforms the surrounding scenes into map-aware instance tokens.
We then employ a variational autoencoder to learn the future trajectory distribution in a structural latent space for trajectory prior modeling.
arXiv Detail & Related papers (2024-02-18T08:21:05Z)
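The snippet below is a minimal variational autoencoder over future trajectories, sketching the trajectory-prior idea in GenAD. It assumes PyTorch, a flat (x, y) waypoint encoding, and toy layer sizes, and does not reproduce the paper's instance-centric scene tokenizer.

```python
# Minimal VAE over future trajectories (a sketch of learning a
# trajectory prior in a latent space, not GenAD's actual architecture).
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    def __init__(self, horizon: int = 6, latent_dim: int = 16):
        super().__init__()
        d = horizon * 2  # (x, y) per future waypoint, flattened
        self.encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, traj: torch.Tensor):
        h = self.encoder(traj.flatten(1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(z).view_as(traj)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl

if __name__ == "__main__":
    model = TrajectoryVAE()
    batch = torch.randn(4, 6, 2)  # 4 trajectories, 6 future waypoints each
    recon, kl = model(batch)
    loss = nn.functional.mse_loss(recon, batch) + 0.1 * kl  # toy loss weighting
    print(loss.item())
```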
- DME-Driver: Integrating Human Decision Logic and 3D Scene Perception in Autonomous Driving [65.04871316921327]
This paper introduces a new autonomous driving system, DME-Driver, that enhances the performance and reliability of autonomous driving.
DME-Driver utilizes a powerful vision language model as the decision-maker and a planning-oriented perception model as the control signal generator.
By leveraging the accompanying dataset, our model achieves high-precision planning accuracy through a logical thinking process.
arXiv Detail & Related papers (2024-01-08T03:06:02Z)
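A schematic sketch of the two-model split described in the DME-Driver entry: a vision-language decision-maker produces a high-level decision, and a separate model turns it into control signals. Both components are stubs; the decision vocabulary and control mapping are invented for illustration.

```python
# Stub of a decision-maker / control-generator split. The real system
# uses a vision language model and a planning-oriented perception model;
# here both are toy functions with an invented decision vocabulary.
from typing import Tuple

def decision_maker(scene_description: str) -> str:
    """Stand-in for the VLM: map a scene description to a discrete decision."""
    if "pedestrian" in scene_description or "red light" in scene_description:
        return "stop"
    if "slow vehicle ahead" in scene_description:
        return "follow"
    return "cruise"

def control_generator(decision: str) -> Tuple[float, float]:
    """Stand-in for the control model: decision -> (acceleration, steering)."""
    table = {"stop": (-3.0, 0.0), "follow": (-0.5, 0.0), "cruise": (0.5, 0.0)}
    return table[decision]

if __name__ == "__main__":
    decision = decision_maker("red light at intersection")
    print(decision, control_generator(decision))
```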
- Is Ego Status All You Need for Open-Loop End-to-End Autonomous Driving? [84.17711168595311]
End-to-end autonomous driving has emerged as a promising research direction to target autonomy from a full-stack perspective.
The nuScenes dataset, characterized by relatively simple driving scenarios, leads to an under-utilization of perception information in end-to-end models.
We introduce a new metric to evaluate whether the predicted trajectories adhere to the road.
arXiv Detail & Related papers (2023-12-05T11:32:31Z)
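As a hedged illustration of a road-adherence style metric like the one introduced in the entry above, the function below scores the fraction of predicted waypoints that fall inside a drivable-area mask. The BEV grid resolution and the scoring rule are assumptions, not the paper's definition.

```python
# Sketch of a road-adherence metric: fraction of predicted waypoints
# lying inside a drivable-area mask. Grid layout and scoring rule are
# assumptions for illustration.
import numpy as np

def road_adherence(traj_xy: np.ndarray, drivable: np.ndarray, res_m: float = 0.5) -> float:
    """traj_xy: (T, 2) waypoints in metres; drivable: (H, W) boolean BEV mask."""
    ij = np.floor(traj_xy / res_m).astype(int)   # metres -> grid cells
    h, w = drivable.shape
    inside = (ij[:, 0] >= 0) & (ij[:, 0] < h) & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    on_road = np.zeros(len(traj_xy), dtype=bool)
    on_road[inside] = drivable[ij[inside, 0], ij[inside, 1]]
    return float(on_road.mean())

if __name__ == "__main__":
    mask = np.zeros((100, 100), dtype=bool)
    mask[:, 40:60] = True                        # a straight 10 m wide road
    traj = np.column_stack([np.linspace(0, 20, 8), np.full(8, 25.0)])
    print(road_adherence(traj, mask))            # 1.0: all waypoints on-road
```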
- LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review the research line on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z)
- Development and testing of an image transformer for explainable autonomous driving systems [0.7046417074932257]
Deep learning (DL) approaches have been used successfully in computer vision (CV) applications.
DL-based CV models are generally considered to be black boxes due to their lack of interpretability.
We propose an explainable end-to-end autonomous driving system based on "Transformer", a state-of-the-art (SOTA) self-attention based model.
arXiv Detail & Related papers (2021-10-11T19:01:41Z)
- Reason induced visual attention for explainable autonomous driving [2.090380922731455]
Deep learning (DL) based computer vision (CV) models are generally considered black boxes due to poor interpretability.
This study is motivated by the need to enhance the interpretability of DL models in autonomous driving.
The proposed framework imitates the learning process of human drivers by jointly modeling the visual input (images) and natural language.
arXiv Detail & Related papers (2021-10-11T18:50:41Z)
- Perceive, Predict, and Plan: Safe Motion Planning Through Interpretable Semantic Representations [81.05412704590707]
We propose a novel end-to-end learnable network that performs joint perception, prediction and motion planning for self-driving vehicles.
Our network is learned end-to-end from human demonstrations.
arXiv Detail & Related papers (2020-08-13T14:40:46Z)
- PiP: Planning-informed Trajectory Prediction for Autonomous Driving [69.41885900996589]
We propose planning-informed trajectory prediction (PiP) to tackle the prediction problem in the multi-agent setting.
By informing the prediction process with the planning of the ego vehicle, our method achieves state-of-the-art performance in multi-agent forecasting on highway datasets.
arXiv Detail & Related papers (2020-03-25T16:09:54Z)
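To illustrate the planning-informed idea in PiP, the toy sketch below conditions a neighboring vehicle's predicted longitudinal motion on the ego plan, slowing the neighbor when the plan merges closely ahead of it. The kinematics and the yielding rule are simplifications for exposition, not the paper's learned model.

```python
# Toy planning-informed prediction: the neighbor's forecast depends on
# the ego plan. The yielding rule and kinematics are invented for
# illustration only.
from typing import List

def predict_neighbor(
    neighbor_pos_s: float, neighbor_speed: float,
    ego_plan_s: List[float], merge_gap_m: float = 10.0,
    steps: int = 5, dt: float = 0.5,
) -> List[float]:
    """Longitudinal positions of a neighbor, conditioned on the ego plan."""
    out, s, v = [], neighbor_pos_s, neighbor_speed
    for k in range(steps):
        ego_s = ego_plan_s[min(k, len(ego_plan_s) - 1)]
        if 0.0 < ego_s - s < merge_gap_m:   # ego plan cuts in close ahead: yield
            v = max(0.0, v - 2.0 * dt)
        s += v * dt
        out.append(s)
    return out

if __name__ == "__main__":
    ego_plan = [5.0, 10.0, 15.0, 20.0, 25.0]  # ego merging ahead of the neighbor
    print(predict_neighbor(neighbor_pos_s=0.0, neighbor_speed=10.0, ego_plan_s=ego_plan))
```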