Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations
- URL: http://arxiv.org/abs/2403.09668v1
- Date: Mon, 29 Jan 2024 11:20:19 GMT
- Title: Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations
- Authors: Nassim Belmecheri, Arnaud Gotlieb, Nadjib Lazaar, Helge Spieker
- Abstract summary: We present the Qualitative Explainable Graph (QXG), a unified symbolic and qualitative representation for scene understanding in urban mobility.
QXG enables the interpretation of an automated vehicle's environment using sensor data and machine learning models.
It can be incrementally constructed in real-time, making it a versatile tool for in-vehicle explanations and real-time decision-making.
- Score: 15.836913530330786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the Qualitative Explainable Graph (QXG): a unified symbolic and qualitative representation for scene understanding in urban mobility. QXG enables the interpretation of an automated vehicle's environment using sensor data and machine learning models. It leverages spatio-temporal graphs and qualitative constraints to extract scene semantics from raw sensor inputs, such as LiDAR and camera data, offering an intelligible scene model. Crucially, QXG can be incrementally constructed in real-time, making it a versatile tool for in-vehicle explanations and real-time decision-making across various sensor types. Our research showcases the transformative potential of QXG, particularly in the context of automated driving, where it elucidates decision rationales by linking the graph with vehicle actions. These explanations serve diverse purposes, from informing passengers and alerting vulnerable road users (VRUs) to enabling post-analysis of prior behaviours.
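The abstract describes the QXG as a spatio-temporal graph that is built incrementally, frame by frame, with qualitative constraints between pairs of perceived objects. The following is a minimal Python sketch of that idea; the Detection fields, the toy distance/direction vocabulary in qualitative_relation, and the edge layout are illustrative assumptions, not the qualitative calculi or data structures used in the paper.
```python
from dataclasses import dataclass, field
from itertools import combinations
import math

@dataclass
class Detection:
    obj_id: str      # persistent track id from the perception stack
    frame: int       # frame index
    x: float         # position in a common reference frame (metres)
    y: float

def qualitative_relation(a: Detection, b: Detection) -> str:
    """Toy qualitative label: coarse distance band plus relative bearing."""
    dx, dy = b.x - a.x, b.y - a.y
    dist = math.hypot(dx, dy)
    band = "near" if dist < 5 else "mid" if dist < 20 else "far"
    bearing = ("ahead" if dy > abs(dx) else
               "behind" if -dy > abs(dx) else
               "left" if dx < 0 else "right")
    return f"{band}/{bearing}"

@dataclass
class QXG:
    """Spatio-temporal graph: nodes are per-frame object observations,
    edges carry qualitative relations between objects of the same frame."""
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (frame, id_a, id_b, relation)

    def add_frame(self, frame: int, detections: list) -> None:
        """Incremental construction: only the newest frame is processed."""
        self.nodes.extend(detections)
        for a, b in combinations(detections, 2):
            self.edges.append((frame, a.obj_id, b.obj_id, qualitative_relation(a, b)))

graph = QXG()
graph.add_frame(0, [Detection("ego", 0, 0.0, 0.0), Detection("ped_1", 0, 2.0, 10.0)])
graph.add_frame(1, [Detection("ego", 1, 0.0, 1.5), Detection("ped_1", 1, 2.0, 4.0)])
print(graph.edges)
```
Because add_frame only relates detections inside the newest frame, the per-frame cost stays bounded by the number of object pairs observed in that frame, which is what makes incremental, in-vehicle construction plausible.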
Related papers
- Automatic Odometry-Less OpenDRIVE Generation From Sparse Point Clouds [1.3351610617039973]
High-resolution road representations are a key factor for the success of automated driving functions.
This paper proposes a novel approach to generate realistic road representations based solely on point cloud information.
arXiv Detail & Related papers (2024-05-13T08:26:24Z)
- Towards Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations [15.836913530330786]
The qualitative explainable graph (QXG) is a unified symbolic and qualitative representation for scene understanding in urban mobility.
QXG can be constructed in real-time, making it a versatile tool for in-vehicle explanations across various sensor types.
These explanations can serve diverse purposes, from informing passengers and alerting vulnerable road users to enabling post-hoc analysis of prior behaviors.
arXiv Detail & Related papers (2024-03-25T16:19:33Z)
- Acquiring Qualitative Explainable Graphs for Automated Driving Scene Interpretation [17.300690315775576]
The future of automated driving (AD) is rooted in the development of robust, fair and explainable artificial intelligence methods.
This paper proposes a novel representation of AD scenes, called qualitative eXplainable Graph (QXG), dedicated to qualitative reasoning of long-term scenes.
Our experimental results on NuScenes, an open real-world multi-modal dataset, show that the qualitative eXplainable graph of an AD scene composed of 40 frames can be computed in real time and is light in storage space.
arXiv Detail & Related papers (2023-08-24T13:01:46Z)
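Qualitative reasoning over such a long scene then amounts to reading relation histories back out of the graph, for instance to find the frame at which an object pair became critical. The helper below is a hedged sketch of such a query over the toy edge layout used in the earlier sketch, i.e. (frame, id_a, id_b, relation) tuples; it is not the query interface of the paper.
```python
def relation_history(edges, obj_a, obj_b):
    """Collect the qualitative relation between two objects, frame by frame."""
    history = {}
    for frame, a, b, rel in edges:
        if {a, b} == {obj_a, obj_b}:
            history[frame] = rel
    return dict(sorted(history.items()))

def relation_changes(edges, obj_a, obj_b):
    """Frames where the pairwise relation changes: candidate explanation points."""
    changes, previous = [], None
    for frame, rel in relation_history(edges, obj_a, obj_b).items():
        if rel != previous:
            changes.append((frame, rel))
        previous = rel
    return changes

# e.g. explain a braking action by the ego-pedestrian relation turning "near/ahead"
edges = [(0, "ego", "ped_1", "mid/ahead"), (1, "ego", "ped_1", "near/ahead")]
print(relation_changes(edges, "ego", "ped_1"))
```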
- Context-Aware Timewise VAEs for Real-Time Vehicle Trajectory Prediction [4.640835690336652]
We present ContextVAE, a context-aware approach for multi-modal vehicle trajectory prediction.
Our approach takes into account both the social features exhibited by agents on the scene and the physical environment constraints.
In all tested datasets, ContextVAE models are fast to train and provide high-quality multi-modal predictions in real-time.
arXiv Detail & Related papers (2023-02-21T18:42:24Z)
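ContextVAE is described as a conditional VAE that conditions trajectory prediction on social features and physical environment constraints. The PyTorch sketch below shows that general pattern; the layer sizes, the single fused context vector ctx, and the deterministic decoder head are assumptions made for illustration rather than the authors' architecture.
```python
import torch
import torch.nn as nn

class TrajectoryCVAE(nn.Module):
    """Toy conditional VAE: condition = past trajectory + fused context features."""
    def __init__(self, fut_len=12, ctx_dim=32, z_dim=16):
        super().__init__()
        self.past_enc = nn.GRU(input_size=2, hidden_size=64, batch_first=True)
        self.fut_enc = nn.GRU(input_size=2, hidden_size=64, batch_first=True)
        self.to_stats = nn.Linear(64 + 64 + ctx_dim, 2 * z_dim)   # posterior mu, logvar
        self.decoder = nn.Sequential(
            nn.Linear(64 + ctx_dim + z_dim, 128), nn.ReLU(),
            nn.Linear(128, fut_len * 2),
        )
        self.fut_len = fut_len

    def forward(self, past, future, ctx):
        _, h_past = self.past_enc(past)              # hidden state: (1, B, 64)
        _, h_fut = self.fut_enc(future)
        h_past, h_fut = h_past[-1], h_fut[-1]        # (B, 64)
        mu, logvar = self.to_stats(torch.cat([h_past, h_fut, ctx], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        pred = self.decoder(torch.cat([h_past, ctx, z], -1)).view(-1, self.fut_len, 2)
        return pred, mu, logvar

model = TrajectoryCVAE()
past = torch.randn(4, 8, 2)      # 4 agents, 8 past steps, (x, y)
future = torch.randn(4, 12, 2)   # ground-truth future, used only during training
ctx = torch.randn(4, 32)         # fused social + environment features (assumed given)
pred, mu, logvar = model(past, future, ctx)
print(pred.shape)                # torch.Size([4, 12, 2])
```
At inference time the future encoder is dropped and z is sampled from the prior several times, which is what yields multi-modal predictions.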
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
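The summary sketches an architecture that fuses features from three cameras with a top-down bird's-eye-view semantic map before predicting the driving output. The code below is only a plausible layout of such a fusion; the shared camera encoder, feature sizes, and the waypoint head are assumptions, not the published model.
```python
import torch
import torch.nn as nn

class MultiViewBEVFusion(nn.Module):
    """Toy fusion: per-camera CNN features + BEV semantic-map features -> waypoints."""
    def __init__(self, num_cams=3, bev_classes=6, n_waypoints=4):
        super().__init__()
        self.cam_enc = nn.Sequential(           # shared across the three cameras
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.bev_enc = nn.Sequential(
            nn.Conv2d(bev_classes, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Linear(num_cams * 32 + 32, 128), nn.ReLU(),
            nn.Linear(128, n_waypoints * 2),    # (x, y) per predicted waypoint
        )
        self.num_cams, self.n_waypoints = num_cams, n_waypoints

    def forward(self, cams, bev):
        # cams: (B, num_cams, 3, H, W); bev: (B, bev_classes, H', W')
        feats = [self.cam_enc(cams[:, i]).flatten(1) for i in range(self.num_cams)]
        feats.append(self.bev_enc(bev).flatten(1))
        return self.head(torch.cat(feats, -1)).view(-1, self.n_waypoints, 2)

model = MultiViewBEVFusion()
out = model(torch.randn(2, 3, 3, 64, 64), torch.randn(2, 6, 32, 32))
print(out.shape)   # torch.Size([2, 4, 2])
```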
- RSG-Net: Towards Rich Sematic Relationship Prediction for Intelligent Vehicle in Complex Environments [72.04891523115535]
We propose RSG-Net (Road Scene Graph Net): a graph convolutional network designed to predict potential semantic relationships from object proposals.
The experimental results indicate that this network, trained on Road Scene Graph dataset, could efficiently predict potential semantic relationships among objects around the ego-vehicle.
arXiv Detail & Related papers (2022-07-16T12:40:17Z)
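RSG-Net is summarised as a graph convolutional network that predicts semantic relationships between object proposals around the ego-vehicle. The sketch below illustrates that input/output contract with a single mean-aggregation message-passing step and a pairwise relation classifier; the dimensions and relation vocabulary size are assumptions, not the published design.
```python
import torch
import torch.nn as nn

class RelationPredictor(nn.Module):
    """Toy relation head: one round of message passing over object proposals,
    then classify a relation for every ordered pair of proposals."""
    def __init__(self, in_dim=256, hid=128, num_relations=5):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.edge_cls = nn.Sequential(
            nn.Linear(2 * hid, hid), nn.ReLU(),
            nn.Linear(hid, num_relations),
        )

    def forward(self, proposals, adj):
        # proposals: (N, in_dim) features from a detector; adj: (N, N) 0/1 graph
        h = self.node_mlp(proposals)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = h + adj @ h / deg                      # mean-aggregate neighbour messages
        n = h.size(0)
        src = h.unsqueeze(1).expand(n, n, -1)      # features of the subject node
        dst = h.unsqueeze(0).expand(n, n, -1)      # features of the object node
        return self.edge_cls(torch.cat([src, dst], -1))   # (N, N, num_relations)

net = RelationPredictor()
props = torch.randn(7, 256)                 # 7 object proposals around the ego-vehicle
adj = torch.ones(7, 7) - torch.eye(7)       # fully connected proposal graph
print(net(props, adj).shape)                # torch.Size([7, 7, 5])
```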
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification to preserve information about small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
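The method combines CycleGAN-style translation with prior semantic classification so that small objects are not washed out during sim-to-real BEV translation. The loss sketch below captures that combination under simplifying assumptions: the adversarial terms are omitted, the generators and the frozen BEV classifier are stand-in modules, and the weights lam_cyc / lam_sem are placeholders rather than the authors' exact formulation.
```python
import torch
import torch.nn as nn

def adaptation_losses(G_s2r, G_r2s, seg, sim_bev, real_bev, lam_cyc=10.0, lam_sem=1.0):
    """Toy CycleGAN-style objective with a semantic-consistency term.
    G_s2r / G_r2s: translation generators; seg: frozen BEV semantic classifier."""
    fake_real = G_s2r(sim_bev)                  # simulated -> "real-looking" BEV
    fake_sim = G_r2s(real_bev)
    cyc = nn.functional.l1_loss(G_r2s(fake_real), sim_bev) + \
          nn.functional.l1_loss(G_s2r(fake_sim), real_bev)
    # semantic consistency: the translated BEV should keep the same class map,
    # which is what protects small objects (pedestrians, cyclists) from vanishing
    with torch.no_grad():
        target = seg(sim_bev).argmax(1)
    sem = nn.functional.cross_entropy(seg(fake_real), target)
    return lam_cyc * cyc + lam_sem * sem

# tiny stand-in networks, just to make the sketch runnable
G_s2r = nn.Conv2d(3, 3, 3, padding=1)
G_r2s = nn.Conv2d(3, 3, 3, padding=1)
seg = nn.Conv2d(3, 4, 1)                        # 4 semantic classes per BEV cell
sim = torch.rand(2, 3, 64, 64)
real = torch.rand(2, 3, 64, 64)
print(adaptation_losses(G_s2r, G_r2s, seg, sim, real))
```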
- SCOUT: Socially-COnsistent and UndersTandable Graph Attention Network for Trajectory Prediction of Vehicles and VRUs [0.0]
SCOUT is a novel Attention-based Graph Neural Network that uses a flexible and generic representation of the scene as a graph.
We explore three different attention mechanisms and test our scheme with both bird's-eye-view and on-vehicle urban data.
We evaluate our model's flexibility and transferability by testing it under completely new scenarios on the RounD dataset.
arXiv Detail & Related papers (2021-02-12T06:29:28Z)
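SCOUT is summarised as an attention-based graph neural network over a scene graph whose nodes are vehicles and VRUs. Below is a generic single-head graph-attention layer in that spirit; it is a standard GAT-style scoring function with assumed dimensions, not one of the three attention mechanisms the paper actually compares.
```python
import torch
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    """Single-head GAT-style layer over agent nodes (vehicles and VRUs)."""
    def __init__(self, in_dim=32, out_dim=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) agent features; adj: (N, N) 0/1 interaction graph
        h = self.proj(x)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], -1)
        scores = nn.functional.leaky_relu(self.attn(pairs).squeeze(-1))
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)          # attention over neighbours
        return torch.relu(alpha @ h)                   # (N, out_dim) updated features

layer = GraphAttentionLayer()
agents = torch.randn(5, 32)                            # ego, 2 vehicles, 2 pedestrians
adj = torch.ones(5, 5)                                 # fully connected scene graph
print(layer(agents, adj).shape)                        # torch.Size([5, 32])
```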
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
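IntentNet is described as a one-stage network that consumes LiDAR point clouds (rasterised into a bird's-eye-view grid) together with dynamic maps and shares computation between detection and intention forecasting. The sketch below shows that shared-backbone, multi-head pattern; the channel counts, anchor layout, and number of intention classes are assumptions for illustration.
```python
import torch
import torch.nn as nn

class IntentNetSketch(nn.Module):
    """Toy one-stage detector + intention forecaster on fused LiDAR/map rasters."""
    def __init__(self, lidar_ch=10, map_ch=5, num_anchors=2, num_intents=8):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared computation
            nn.Conv2d(lidar_ch + map_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(64, num_anchors * 7, 1)       # box params per anchor
        self.intent_head = nn.Conv2d(64, num_anchors * num_intents, 1)

    def forward(self, lidar_bev, map_raster):
        x = self.backbone(torch.cat([lidar_bev, map_raster], dim=1))
        return self.det_head(x), self.intent_head(x)

net = IntentNetSketch()
lidar = torch.rand(1, 10, 128, 128)     # voxelised LiDAR sweep in bird's-eye view
hdmap = torch.rand(1, 5, 128, 128)      # rasterised map layers (lanes, crossings, ...)
boxes, intents = net(lidar, hdmap)
print(boxes.shape, intents.shape)       # detection and intention logits per cell
```
Sharing the backbone between the two heads is the source of the computation savings, and hence of the reaction-time argument, made in the summary.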
- SceneGen: Learning to Generate Realistic Traffic Scenes [92.98412203941912]
We present SceneGen, a neural autoregressive model of traffic scenes that eschews the need for rules and distributions.
We demonstrate SceneGen's ability to faithfully model distributions of real traffic scenes.
arXiv Detail & Related papers (2021-01-16T22:51:43Z)
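SceneGen is an autoregressive model: actors are inserted into the scene one at a time, each conditioned on the partial scene generated so far. The loop below only illustrates that sampling pattern with a random stand-in for the learned conditional distribution; the stopping rule and the actor parameterisation are assumptions.
```python
import random

def propose_actor(scene, rng):
    """Stand-in for a learned conditional distribution p(actor_i | scene so far)."""
    return {"x": rng.uniform(-50, 50),
            "y": rng.uniform(-50, 50),
            "heading": rng.uniform(-3.14, 3.14)}

def sample_scene(max_actors=10, stop_prob=0.2, seed=0):
    """Autoregressive scene sampling: insert actors one by one, conditioning on
    the partial scene, until a (here: fixed-probability) stop signal fires."""
    rng = random.Random(seed)
    scene = []
    for _ in range(max_actors):
        if rng.random() < stop_prob:      # the real model predicts this stop signal
            break
        scene.append(propose_actor(scene, rng))
    return scene

print(len(sample_scene()), "actors sampled")
```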
- Implicit Latent Variable Model for Scene-Consistent Motion Forecasting [78.74510891099395]
In this paper, we aim to learn scene-consistent motion forecasts of complex urban traffic directly from sensor data.
We model the scene as an interaction graph and employ powerful graph neural networks to learn a distributed latent representation of the scene.
arXiv Detail & Related papers (2020-07-23T14:31:25Z)
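Here the scene is encoded as an interaction graph and the graph neural network produces a scene-level latent variable, so a single latent sample decodes into mutually consistent futures for every actor. The sketch below mirrors that structure; the mean-pooled encoder, the single message-passing round, and the deterministic decoder are simplifying assumptions rather than the paper's model.
```python
import torch
import torch.nn as nn

class SceneLatentForecaster(nn.Module):
    """Toy scene-level latent model: one latent sample -> all actors' futures."""
    def __init__(self, feat_dim=32, z_dim=16, horizon=10):
        super().__init__()
        self.node_enc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.to_stats = nn.Linear(64, 2 * z_dim)        # scene latent mu, logvar
        self.decoder = nn.Sequential(
            nn.Linear(64 + z_dim, 64), nn.ReLU(),
            nn.Linear(64, horizon * 2),
        )
        self.horizon = horizon

    def forward(self, actor_feats, adj):
        # actor_feats: (N, feat_dim); adj: (N, N) interaction graph
        h = self.node_enc(actor_feats)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = h + adj @ h / deg                           # one round of message passing
        mu, logvar = self.to_stats(h.mean(0)).chunk(2, -1)   # pool to a scene latent
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        z = z.expand(h.size(0), -1)                     # same sample for every actor
        return self.decoder(torch.cat([h, z], -1)).view(-1, self.horizon, 2)

model = SceneLatentForecaster()
futures = model(torch.randn(6, 32), torch.ones(6, 6))
print(futures.shape)    # torch.Size([6, 10, 2]) -- one consistent sample per actor
```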
This list is automatically generated from the titles and abstracts of the papers listed on this site.