Towards Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations
- URL: http://arxiv.org/abs/2403.16908v1
- Date: Mon, 25 Mar 2024 16:19:33 GMT
- Title: Towards Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations
- Authors: Nassim Belmecheri, Arnaud Gotlieb, Nadjib Lazaar, Helge Spieker
- Abstract summary: The Qualitative Explainable Graph (QXG) is a unified symbolic and qualitative representation for scene understanding in urban mobility.
The QXG can be constructed incrementally in real-time, making it a versatile tool for in-vehicle explanations across various sensor types.
These explanations can serve diverse purposes, from informing passengers and alerting vulnerable road users to enabling post-hoc analysis of prior behaviors.
- Score: 15.836913530330786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding driving scenes and communicating automated vehicle decisions are key requirements for trustworthy automated driving. In this article, we introduce the Qualitative Explainable Graph (QXG), which is a unified symbolic and qualitative representation for scene understanding in urban mobility. The QXG enables interpreting an automated vehicle's environment using sensor data and machine learning models. It utilizes spatio-temporal graphs and qualitative constraints to extract scene semantics from raw sensor inputs, such as LiDAR and camera data, offering an interpretable scene model. A QXG can be incrementally constructed in real-time, making it a versatile tool for in-vehicle explanations across various sensor types. Our research showcases the potential of QXG, particularly in the context of automated driving, where it can rationalize decisions by linking the graph with observed actions. These explanations can serve diverse purposes, from informing passengers and alerting vulnerable road users to enabling post-hoc analysis of prior behaviors.
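To make this concrete, the following is a minimal sketch of an incrementally constructed qualitative scene graph (hypothetical names and a deliberately coarse relation calculus; not the authors' implementation):

```python
from collections import defaultdict

# Minimal sketch of a QXG-like structure (hypothetical API, not the
# paper's implementation): nodes are tracked objects, and each pair of
# objects is linked by a chain of per-frame qualitative spatial
# relations computed from object positions.

def qualitative_relation(pos_a, pos_b, near_threshold=5.0):
    """Map a pair of 2D positions to a coarse qualitative relation.
    A stand-in for a real qualitative calculus (distance + direction)."""
    dx = pos_b[0] - pos_a[0]
    dy = pos_b[1] - pos_a[1]
    dist = (dx * dx + dy * dy) ** 0.5
    distance = "near" if dist < near_threshold else "far"
    direction = "ahead" if dy > 0 else "behind"
    return (distance, direction)

class QualitativeSceneGraph:
    """Spatio-temporal graph over tracked objects, built frame by frame."""

    def __init__(self):
        # (obj_a, obj_b) -> list of (frame index, qualitative relation)
        self.edges = defaultdict(list)
        self.frames = 0

    def add_frame(self, objects):
        """objects: dict mapping object id -> (x, y) position in this frame."""
        ids = sorted(objects)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                rel = qualitative_relation(objects[a], objects[b])
                self.edges[(a, b)].append((self.frames, rel))
        self.frames += 1

# Usage: feed per-frame object tracks; relations accumulate per object
# pair, giving an interpretable trace that explanations can point to.
g = QualitativeSceneGraph()
g.add_frame({"ego": (0.0, 0.0), "pedestrian": (2.0, 3.0)})
g.add_frame({"ego": (0.0, 1.0), "pedestrian": (2.0, 3.0)})
print(g.edges[("ego", "pedestrian")])
# [(0, ('near', 'ahead')), (1, ('near', 'ahead'))]
```

Because each frame only appends relations for the objects currently tracked, this kind of structure can be maintained online, which is what makes real-time, in-vehicle explanation feasible.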
Related papers
- Automatic Odometry-Less OpenDRIVE Generation From Sparse Point Clouds [1.3351610617039973]
High-resolution road representations are a key factor for the success of automated driving functions.
This paper proposes a novel approach to generate realistic road representations based solely on point cloud information.
arXiv Detail & Related papers (2024-05-13T08:26:24Z)
- Hybrid Reasoning Based on Large Language Models for Autonomous Car Driving [14.64475022650084]
Large Language Models (LLMs) have garnered significant attention for their ability to understand text and images, generate human-like text, and perform complex reasoning tasks.
We investigate how well LLMs can adapt and apply a combination of arithmetic and common-sense reasoning, particularly in autonomous driving scenarios.
arXiv Detail & Related papers (2024-02-21T08:09:05Z)
- Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations [15.836913530330786]
We present the Qualitative Explainable Graph (QXG), a unified symbolic and qualitative representation for scene understanding in urban mobility.
QXG enables the interpretation of an automated vehicle's environment using sensor data and machine learning models.
It can be incrementally constructed in real-time, making it a versatile tool for in-vehicle explanations and real-time decision-making.
arXiv Detail & Related papers (2024-01-29T11:20:19Z)
- DriveLM: Driving with Graph Visual Question Answering [57.51930417790141]
We study how vision-language models (VLMs) trained on web-scale data can be integrated into end-to-end driving systems.
We propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving.
arXiv Detail & Related papers (2023-12-21T18:59:12Z)
- Acquiring Qualitative Explainable Graphs for Automated Driving Scene Interpretation [17.300690315775576]
The future of automated driving (AD) is rooted in the development of robust, fair, and explainable artificial intelligence methods.
This paper proposes a novel representation of AD scenes, called qualitative eXplainable Graph (QXG), dedicated to qualitative reasoning of long-term scenes.
Our experimental results on NuScenes, an open real-world multi-modal dataset, show that the qualitative eXplainable graph of an AD scene composed of 40 frames can be computed in real time and is lightweight in storage.
arXiv Detail & Related papers (2023-08-24T13:01:46Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- RSG-Net: Towards Rich Semantic Relationship Prediction for Intelligent Vehicle in Complex Environments [72.04891523115535]
We propose RSG-Net (Road Scene Graph Net): a graph convolutional network designed to predict potential semantic relationships from object proposals.
The experimental results indicate that this network, trained on the Road Scene Graph dataset, can efficiently predict potential semantic relationships among objects around the ego-vehicle.
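As a rough, self-contained illustration of this kind of relation predictor (hypothetical names, a single message-passing step, and random weights standing in for trained ones):

```python
import numpy as np

# Toy illustration of the RSG-Net idea (not the authors' code): object
# proposals become graph nodes; one graph-convolution step mixes each
# node's features with its neighbors'; a bilinear scorer then rates
# candidate semantic relations for every ordered pair of objects.

rng = np.random.default_rng(0)

n_objects, feat_dim, n_relations = 4, 8, 3   # e.g., {follows, yields_to, none}

features = rng.normal(size=(n_objects, feat_dim))   # proposal features
adjacency = np.ones((n_objects, n_objects))         # fully connected scene
adjacency /= adjacency.sum(axis=1, keepdims=True)   # row-normalize

W = rng.normal(size=(feat_dim, feat_dim))           # layer weights
hidden = np.maximum(adjacency @ features @ W, 0.0)  # GCN step + ReLU

# One bilinear matrix per relation type scores ordered object pairs.
R = rng.normal(size=(n_relations, feat_dim, feat_dim))
scores = np.einsum("if,rfg,jg->ijr", hidden, R, hidden)
predicted = scores.argmax(axis=-1)                  # (i, j) -> relation id
print(predicted)
```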
arXiv Detail & Related papers (2022-07-16T12:40:17Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
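The saving comes from computing a shared backbone once and reusing it across task heads; a minimal sketch of that pattern (hypothetical shapes and names, dense layers standing in for the real convolutional trunk over LiDAR voxels and map rasters):

```python
import numpy as np

# Toy multi-task setup in the spirit of IntentNet (not the actual
# model): a shared backbone is evaluated once per input, then cheap
# task-specific heads reuse it for detection and intention forecasting.

rng = np.random.default_rng(1)

def backbone(bev_input, W):
    """Shared feature extractor over a flattened bird's-eye-view raster."""
    return np.maximum(bev_input @ W, 0.0)           # one dense+ReLU stand-in

def detection_head(features, W_det):
    return features @ W_det                         # box/score logits

def intention_head(features, W_int):
    return features @ W_int                         # e.g., 8 intention logits

bev = rng.normal(size=(1, 64))                      # flattened BEV input
W, W_det, W_int = (rng.normal(size=s) for s in [(64, 32), (32, 7), (32, 8)])

shared = backbone(bev, W)                           # computed once
boxes = detection_head(shared, W_det)               # both heads reuse it,
intents = intention_head(shared, W_int)             # saving computation
print(boxes.shape, intents.shape)                   # (1, 7) (1, 8)
```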
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
- SceneGen: Learning to Generate Realistic Traffic Scenes [92.98412203941912]
We present SceneGen, a neural autoregressive model of traffic scenes that eschews the need for rules and heuristics.
We demonstrate SceneGen's ability to faithfully model distributions of real traffic scenes.
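Autoregressive here means actors enter the scene one at a time, each conditioned on those already placed; a minimal sketch of that factorization (hand-rolled rejection sampling standing in for the learned conditional):

```python
import numpy as np

# Toy autoregressive scene sampler in the spirit of SceneGen (not the
# learned model): actors are placed sequentially, and each new position
# is resampled until it respects the actors already in the scene.

rng = np.random.default_rng(2)

def sample_actor(scene, min_gap=4.0, road_extent=50.0):
    """Sample one actor position conditioned on previously placed actors."""
    while True:
        pos = rng.uniform(-road_extent, road_extent, size=2)
        if all(np.linalg.norm(pos - p) >= min_gap for p in scene):
            return pos

scene = []                       # actors placed so far (the context)
for _ in range(5):               # p(scene) = prod over actors of p(actor | prefix)
    scene.append(sample_actor(scene))
print(np.round(np.array(scene), 1))
```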
arXiv Detail & Related papers (2021-01-16T22:51:43Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Scenario-Transferable Semantic Graph Reasoning for Interaction-Aware Probabilistic Prediction [29.623692599892365]
Accurately predicting the possible behaviors of traffic participants is an essential capability for autonomous vehicles.
We propose a novel generic representation for various driving environments by taking advantage of semantics and domain knowledge.
arXiv Detail & Related papers (2020-04-07T00:34:36Z)