Dolphins: Multimodal Language Model for Driving
- URL: http://arxiv.org/abs/2312.00438v1
- Date: Fri, 1 Dec 2023 09:10:33 GMT
- Title: Dolphins: Multimodal Language Model for Driving
- Authors: Yingzi Ma, Yulong Cao, Jiachen Sun, Marco Pavone, Chaowei Xiao
- Abstract summary: We introduce Dolphins, a novel vision-language model architected to imbibe human-like abilities as a conversational driving assistant.
Dolphins is adept at processing multimodal inputs comprising video (or image) data, text instructions, and historical control signals.
- Score: 42.14069594700448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quest for fully autonomous vehicles (AVs) capable of navigating complex real-world scenarios with human-like understanding and responsiveness remains an open challenge. In this
paper, we introduce Dolphins, a novel vision-language model architected to
imbibe human-like abilities as a conversational driving assistant. Dolphins is
adept at processing multimodal inputs comprising video (or image) data, text
instructions, and historical control signals to generate informed outputs
corresponding to the provided instructions. Building upon the open-sourced
pretrained Vision-Language Model, OpenFlamingo, we first enhance Dolphins's
reasoning capabilities through an innovative Grounded Chain of Thought (GCoT)
process. Then we tailor Dolphins to the driving domain by constructing driving-specific instruction data and conducting instruction tuning. Using the BDD-X dataset, we design and consolidate four distinct AV tasks into Dolphins to foster a holistic understanding of intricate
driving scenarios. As a result, the distinctive features of Dolphins are
characterized into two dimensions: (1) the ability to provide a comprehensive
understanding of complex and long-tailed open-world driving scenarios and solve
a spectrum of AV tasks, and (2) the emergence of human-like capabilities
including gradient-free instant adaptation via in-context learning and error
recovery via reflection.
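To make the in-context adaptation concrete, below is a minimal, hypothetical Python sketch of how video frames, historical control signals, and a text instruction might be interleaved into a few-shot prompt for an OpenFlamingo-style model. The field names, the `<image>`/`<|endofchunk|>` markers, and the helper functions are illustrative assumptions, not Dolphins' actual interface.

```python
# Hypothetical sketch: assembling an interleaved, in-context prompt for an
# OpenFlamingo-style VLM from driving video frames, past control signals,
# and a text instruction. Names and formatting are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class DrivingSample:
    frame_paths: List[str]   # sampled video frames for this clip
    speeds: List[float]      # historical speed signal (m/s)
    turn_angles: List[float] # historical steering signal (degrees)
    instruction: str         # e.g. "What is the ego vehicle doing and why?"
    answer: str = ""         # filled in only for in-context demonstrations


def format_sample(sample: DrivingSample) -> str:
    """Render one sample as an interleaved image/text chunk."""
    image_tokens = "<image>" * len(sample.frame_paths)
    control = (
        f"Speed: {[round(s, 1) for s in sample.speeds]} "
        f"Turn angle: {[round(a, 1) for a in sample.turn_angles]}"
    )
    chunk = f"{image_tokens}{control}\nInstruction: {sample.instruction}\nAnswer: {sample.answer}"
    return chunk + ("<|endofchunk|>" if sample.answer else "")


def build_prompt(demos: List[DrivingSample], query: DrivingSample) -> str:
    """Few-shot prompt: demonstrations first, then the unanswered query."""
    return "".join(format_sample(d) for d in demos) + format_sample(query)


if __name__ == "__main__":
    demo = DrivingSample(
        frame_paths=["demo_0.jpg", "demo_1.jpg"],
        speeds=[8.2, 7.9], turn_angles=[0.0, -1.5],
        instruction="What is the ego vehicle doing?",
        answer="The car slows down because the light ahead turns red.",
    )
    query = DrivingSample(
        frame_paths=["clip_0.jpg", "clip_1.jpg"],
        speeds=[5.1, 6.3], turn_angles=[2.0, 4.5],
        instruction="What is the ego vehicle doing and why?",
    )
    print(build_prompt([demo], query))
```

Swapping the demonstration examples changes the assistant's behaviour at inference time without any gradient updates, which is the sense in which such models adapt "instantly" via in-context learning.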
Related papers
- Human Insights Driven Latent Space for Different Driving Perspectives: A Unified Encoder for Efficient Multi-Task Inference [43.474068248379815]
We propose a shared encoder trained on multiple computer vision tasks critical for urban navigation.
We introduce a multi-scale feature network for pose estimation to improve depth learning.
Our findings demonstrate that a shared backbone trained on diverse visual tasks can provide overall perception capabilities.
arXiv Detail & Related papers (2024-09-16T08:54:03Z) - SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving [15.551625571158056]
We propose an e2eAD method called SimpleLLM4AD.
In our method, the e2eAD task is divided into four stages: perception, prediction, planning, and behavior.
Our experiments demonstrate that SimpleLLM4AD achieves competitive performance in complex driving scenarios.
arXiv Detail & Related papers (2024-07-31T02:35:33Z) - Delving into Multi-modal Multi-task Foundation Models for Road Scene Understanding: From Learning Paradigm Perspectives [56.2139730920855]
We present a systematic analysis of MM-VUFMs specifically designed for road scenes.
Our objective is to provide a comprehensive overview of common practices, referring to task-specific models, unified multi-modal models, unified multi-task models, and foundation model prompting techniques.
We provide insights into key challenges and future trends, such as closed-loop driving systems, interpretability, embodied driving agents, and world models.
arXiv Detail & Related papers (2024-02-05T12:47:09Z) - VLP: Vision Language Planning for Autonomous Driving [52.640371249017335]
This paper presents a novel Vision-Language-Planning framework that exploits language models to bridge the gap between linguistic understanding and autonomous driving.
It achieves state-of-the-art end-to-end planning performance on the NuScenes dataset, with 35.9% and 60.5% reductions in average L2 error and collision rate, respectively.
arXiv Detail & Related papers (2024-01-10T23:00:40Z) - Lana: A Language-Capable Navigator for Instruction Following and Generation [70.76686546473994]
LANA is a language-capable navigation agent that can execute human-written navigation commands and provide route descriptions to humans.
We empirically verify that, compared with recent advanced task-specific solutions, LANA attains better performance on both instruction following and route description.
In addition, endowed with language generation capability, LANA can explain its behavior to humans and assist with their wayfinding.
arXiv Detail & Related papers (2023-03-15T07:21:28Z) - Policy Pre-training for End-to-end Autonomous Driving via Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and fully self-supervised framework designed for policy pretraining in visuomotor driving.
We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes from large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only.
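As a rough illustration of the kind of objective such geometric pretraining optimizes, here is a generic PyTorch sketch of a photometric reconstruction loss: the source frame is warped into the target view using the predicted depth and relative pose, and the pixel-level error provides the self-supervised signal. Shapes, names, and the plain L1 error (no SSIM term or occlusion masking) are assumptions for illustration, not PPGeo's actual implementation.

```python
import torch
import torch.nn.functional as F


def photometric_loss(target, source, depth, pose, K):
    """Generic self-supervised reconstruction error.

    target, source: (B, 3, H, W) consecutive frames
    depth:          (B, 1, H, W) predicted depth of the target frame
    pose:           (B, 4, 4) predicted relative pose, target -> source
    K:              (B, 3, 3) camera intrinsics
    """
    B, _, H, W = target.shape
    device = target.device

    # Pixel grid of the target frame in homogeneous coordinates: (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D, move the points into the source view, re-project.
    cam_points = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)
    cam_points = torch.cat([cam_points, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_pix = K @ (pose @ cam_points)[:, :3]
    src_pix = src_pix[:, :2] / src_pix[:, 2:].clamp(min=1e-6)

    # Sample the source frame at the re-projected locations (grid in [-1, 1]).
    x = src_pix[:, 0].reshape(B, H, W)
    y = src_pix[:, 1].reshape(B, H, W)
    grid = torch.stack([2 * x / (W - 1) - 1, 2 * y / (H - 1) - 1], dim=-1)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)

    # L1 photometric error; real pipelines typically add SSIM and masking.
    return (target - warped).abs().mean()
```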
arXiv Detail & Related papers (2023-01-03T08:52:49Z) - DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and Interconnected Self-driving [19.66714697653504]
Vehicle-to-Everything (V2X) networks have enabled collaborative perception in autonomous driving.
However, the lack of suitable datasets has severely hindered the development of collaborative perception algorithms.
We release DOLPHINS: dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving.
arXiv Detail & Related papers (2022-07-15T17:07:07Z) - Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments [0.8122270502556374]
Generative Adversarial Imitation Learning (GAIL) can train policies without requiring an explicitly defined reward function.
We show that both trained models are capable of imitating the expert trajectory from start to end after training.
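For context, the following is the standard GAIL mechanism rather than this paper's specific code: a discriminator learns to tell expert (state, action) pairs from the policy's own rollouts, and its output is turned into a surrogate reward, so no hand-designed reward function is needed. The network sizes and helper names in this PyTorch sketch are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Scores (state, action) pairs; trained to separate expert data from
    the current policy's rollouts. Its output replaces a hand-designed reward."""

    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # logits


def discriminator_step(disc, opt, expert_obs, expert_act, policy_obs, policy_act):
    """One GAIL discriminator update: expert pairs labelled 1, policy pairs 0."""
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_obs, expert_act)
    policy_logits = disc(policy_obs, policy_act)
    loss = bce(expert_logits, torch.ones_like(expert_logits)) + \
           bce(policy_logits, torch.zeros_like(policy_logits))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


def surrogate_reward(disc, obs, act):
    """Reward handed to the RL optimizer (e.g. PPO): high when the
    discriminator mistakes the policy's pair for expert behaviour."""
    with torch.no_grad():
        return -torch.log(1.0 - torch.sigmoid(disc(obs, act)) + 1e-8)
```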
arXiv Detail & Related papers (2021-10-16T15:04:13Z) - Episodic Transformer for Vision-and-Language Navigation [142.6236659368177]
This paper focuses on addressing two challenges: handling long sequences of subtasks and understanding complex human instructions.
We propose Episodic Transformer (E.T.), a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions.
Our approach sets a new state of the art on the challenging ALFRED benchmark, achieving 38.4% and 8.5% task success rates on seen and unseen test splits.
arXiv Detail & Related papers (2021-05-13T17:51:46Z)
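As a toy illustration of the encoding scheme described in the entry above (not the authors' implementation), the following PyTorch sketch embeds instruction tokens, per-frame visual features, and past actions into a single sequence for a shared transformer encoder and predicts an action per frame; the dimensions, the modality embedding, and the omitted positional encodings and causal masking are simplifications.

```python
import torch
import torch.nn as nn


class EpisodicEncoder(nn.Module):
    """Toy multimodal encoder in the spirit of E.T.: language tokens, per-frame
    visual features, and past actions share one transformer, and the output at
    each frame position predicts the next action."""

    def __init__(self, vocab_size=1000, n_actions=12, vis_dim=512, d_model=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.vis_proj = nn.Linear(vis_dim, d_model)   # pre-extracted frame features
        self.act_emb = nn.Embedding(n_actions, d_model)
        self.modality_emb = nn.Embedding(3, d_model)  # 0=text, 1=vision, 2=action
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, instr_tokens, frame_feats, past_actions):
        # instr_tokens: (B, L) int, frame_feats: (B, T, vis_dim), past_actions: (B, T) int
        text = self.word_emb(instr_tokens) + self.modality_emb.weight[0]
        vision = self.vis_proj(frame_feats) + self.modality_emb.weight[1]
        actions = self.act_emb(past_actions) + self.modality_emb.weight[2]
        seq = torch.cat([text, vision, actions], dim=1)  # full episode context
        enc = self.encoder(seq)
        L, T = instr_tokens.shape[1], frame_feats.shape[1]
        return self.action_head(enc[:, L:L + T])         # (B, T, n_actions)


# usage: logits = EpisodicEncoder()(torch.randint(0, 1000, (2, 8)),
#                                   torch.randn(2, 5, 512),
#                                   torch.randint(0, 12, (2, 5)))
```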
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.