ADS-Edit: A Multimodal Knowledge Editing Dataset for Autonomous Driving Systems
- URL: http://arxiv.org/abs/2503.20756v1
- Date: Wed, 26 Mar 2025 17:45:29 GMT
- Title: ADS-Edit: A Multimodal Knowledge Editing Dataset for Autonomous Driving Systems
- Authors: Chenxi Wang, Jizhan Fang, Xiang Chen, Bozhong Tian, Ziwen Xu, Huajun Chen, Ningyu Zhang
- Abstract summary: Large Multimodal Models (LMMs) have shown promise in Autonomous Driving Systems (ADS). We propose the use of Knowledge Editing, which enables targeted modifications to a model's behavior without the need for full retraining. We introduce ADS-Edit, a multimodal knowledge editing dataset specifically designed for ADS.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in Large Multimodal Models (LMMs) have shown promise in Autonomous Driving Systems (ADS). However, their direct application to ADS is hindered by challenges such as misunderstanding of traffic knowledge, complex road conditions, and diverse vehicle states. To address these challenges, we propose the use of Knowledge Editing, which enables targeted modifications to a model's behavior without the need for full retraining. Meanwhile, we introduce ADS-Edit, a multimodal knowledge editing dataset specifically designed for ADS, which includes various real-world scenarios, multiple data types, and comprehensive evaluation metrics. We conduct comprehensive experiments and derive several interesting conclusions. We hope that our work will contribute to the further advancement of knowledge editing applications in the field of autonomous driving. Code and data are available at https://github.com/zjunlp/EasyEdit.
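The abstract's core idea, modifying a targeted piece of model behavior without full retraining, can be illustrated with a toy rank-one weight edit in the spirit of locate-then-edit methods such as ROME. This is a minimal sketch with illustrative names only; it is not the ADS-Edit pipeline or the EasyEdit API.

```python
import numpy as np

# Toy rank-one "knowledge edit": force a linear layer W to map a
# specific key vector k to a new value v_new, while directions
# orthogonal to k are left untouched. Purely illustrative of the
# editing idea; not the actual ADS-Edit/EasyEdit implementation.

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))   # pretrained weight matrix
k = rng.standard_normal(d)        # key encoding the fact to edit
v_new = rng.standard_normal(d)    # desired new output for that key

# Closed-form rank-one update: after the edit, W_edited @ k == v_new.
W_edited = W + np.outer(v_new - W @ k, k) / (k @ k)

assert np.allclose(W_edited @ k, v_new)

# A direction orthogonal to k is unaffected, so unrelated
# "knowledge" stored in W is preserved.
q = rng.standard_normal(d)
q -= (q @ k) / (k @ k) * k        # project out the k component
assert np.allclose(W_edited @ q, W @ q)
```

The update costs one outer product rather than a retraining run, which is why editing methods scale to repeated, targeted corrections of a deployed model.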
Related papers
- Foundation Models for Autonomous Driving System: An Initial Roadmap [17.198146951189635]
Recent advancements in Foundation Models (FMs) have significantly enhanced Autonomous Driving Systems (ADSs)
ADSs are highly complex cyber-physical systems that demand rigorous software engineering practices to ensure reliability and safety.
We present a structured roadmap for integrating FMs into autonomous driving, covering three key aspects: the infrastructure of FMs, their application in autonomous driving systems, and their current applications in practice.
arXiv Detail & Related papers (2025-04-01T15:45:31Z) - The Role of World Models in Shaping Autonomous Driving: A Comprehensive Survey [50.62538723793247]
Driving World Model (DWM) focuses on predicting scene evolution during the driving process. DWM methods enable autonomous driving systems to better perceive, understand, and interact with dynamic driving environments.
arXiv Detail & Related papers (2025-02-14T18:43:15Z) - V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multi-Modal Large Language Models [31.537045261401666]
We propose a novel problem setting that integrates a Multi-Modal Large Language Model into cooperative autonomous driving.
We also propose our baseline method Vehicle-to-Vehicle Multi-Modal Large Language Model (V2V-LLM)
Experimental results show that our proposed V2V-LLM can be a promising unified model architecture for performing various tasks in cooperative autonomous driving.
arXiv Detail & Related papers (2025-02-14T08:05:41Z) - Application of Multimodal Large Language Models in Autonomous Driving [1.8181868280594944]
We conduct an in-depth study on implementing Multi-modal Large Language Models. We address the poor performance of MLLMs on autonomous driving. We then break down the AD decision-making process into scene understanding, prediction, and decision-making.
arXiv Detail & Related papers (2024-12-21T00:09:52Z) - DriveMM: All-in-One Large Multimodal Model for Autonomous Driving [63.882827922267666]
DriveMM is a large multimodal model designed to process diverse data inputs, such as images and multi-view videos, while performing a broad spectrum of autonomous driving tasks. We conduct evaluations on six public benchmarks and undertake zero-shot transfer on an unseen dataset, where DriveMM achieves state-of-the-art performance across all tasks.
arXiv Detail & Related papers (2024-12-10T17:27:32Z) - DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral Planning States for Autonomous Driving [69.82743399946371]
DriveMLM is a framework that can perform closed-loop autonomous driving in realistic simulators.
We employ a multi-modal LLM (MLLM) to model the behavior planning module of a modular AD system.
This model can be plugged into existing AD systems, such as Apollo, for closed-loop driving.
arXiv Detail & Related papers (2023-12-14T18:59:05Z) - LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review the research line on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z) - Bayesian Optimization and Deep Learning for Steering Wheel Angle Prediction [58.720142291102135]
This work aims to obtain an accurate model for the prediction of the steering angle in an automated driving system.
BO was able to identify, within a limited number of trials, a model, namely BOST-LSTM, which proved the most accurate when compared to classical end-to-end driving models.
arXiv Detail & Related papers (2021-10-22T15:25:14Z) - Hidden Incentives for Auto-Induced Distributional Shift [11.295927026302573]
We introduce the term auto-induced distributional shift (ADS) to describe the phenomenon of an algorithm causing a change in the distribution of its own inputs.
Our goal is to ensure that machine learning systems do not leverage ADS to increase performance when doing so could be undesirable.
We demonstrate that changes to the learning algorithm, such as the introduction of meta-learning, can cause hidden incentives for auto-induced distributional shift (HI-ADS) to be revealed.
arXiv Detail & Related papers (2020-09-19T03:31:27Z)