Acquiring Qualitative Explainable Graphs for Automated Driving Scene
Interpretation
- URL: http://arxiv.org/abs/2308.12755v1
- Date: Thu, 24 Aug 2023 13:01:46 GMT
- Title: Acquiring Qualitative Explainable Graphs for Automated Driving Scene
Interpretation
- Authors: Nassim Belmecheri and Arnaud Gotlieb and Nadjib Lazaar and Helge
Spieker
- Abstract summary: The future of automated driving (AD) is rooted in the development of robust, fair and explainable artificial intelligence methods.
This paper proposes a novel representation of AD scenes, called qualitative eXplainable Graph (QXG), dedicated to qualitative reasoning of long-term scenes.
Our experimental results on NuScenes, an open real-world multi-modal dataset, show that the qualitative eXplainable graph of an AD scene composed of 40 frames can be computed in real time and is light in storage.
- Score: 17.300690315775576
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The future of automated driving (AD) is rooted in the development of robust,
fair and explainable artificial intelligence methods. Upon request, automated
vehicles must be able to explain their decisions to the driver and the car
passengers, to the pedestrians and other vulnerable road users and potentially
to external auditors in case of accidents. However, nowadays, most explainable
methods still rely on quantitative analysis of the AD scene representations
captured by multiple sensors. This paper proposes a novel representation of AD
scenes, called Qualitative eXplainable Graph (QXG), dedicated to qualitative
spatiotemporal reasoning of long-term scenes. The construction of this graph
exploits the recent Qualitative Constraint Acquisition paradigm. Our
experimental results on NuScenes, an open real-world multi-modal dataset, show
that the qualitative eXplainable graph of an AD scene composed of 40 frames can
be computed in real time and is light in storage, which makes it a
potentially interesting tool for improved and more trustworthy perception and
control processes in AD.
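The abstract describes building a graph whose nodes are scene objects and whose edges carry qualitative spatiotemporal relations across frames. As a rough illustration only (the paper's actual relation calculus and acquisition procedure are not given here), the following sketch uses hypothetical coarse left/right and ahead/behind relations derived from object positions:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Obj:
    oid: str
    x: float  # lateral position (m), hypothetical coordinate convention
    y: float  # longitudinal position (m)

def qualitative_relation(a: Obj, b: Obj) -> str:
    """Map metric positions to a coarse qualitative relation of b w.r.t. a.
    This is a stand-in for the paper's qualitative calculus."""
    lateral = "left" if b.x < a.x else "right"
    longitudinal = "behind" if b.y < a.y else "ahead"
    return f"{longitudinal}-{lateral}"

def build_qxg(frames):
    """frames: list of per-frame object lists.
    Returns {(oid_a, oid_b): [(frame_index, relation), ...]} -- one edge
    per object pair, annotated with its relation at each frame."""
    graph = {}
    for t, objs in enumerate(frames):
        for a, b in combinations(objs, 2):
            graph.setdefault((a.oid, b.oid), []).append((t, qualitative_relation(a, b)))
    return graph

# Two frames: the ego vehicle advances while another car stays ahead-left.
frames = [
    [Obj("ego", 0.0, 0.0), Obj("car1", -2.0, 5.0)],
    [Obj("ego", 0.0, 2.0), Obj("car1", -2.0, 5.0)],
]
qxg = build_qxg(frames)
print(qxg[("ego", "car1")])  # [(0, 'ahead-left'), (1, 'ahead-left')]
```

Because only symbolic relations (not raw sensor data) are stored per pair and frame, such a structure stays compact, which is consistent with the storage claim above.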
Related papers
- Knowledge Graphs of Driving Scenes to Empower the Emerging Capabilities of Neurosymbolic AI [1.6385815610837167]
Neurosymbolic AI is emerging as a powerful approach for tasks spanning from perception to cognition.
There is a lack of widely available real-world benchmark datasets tailored to Neurosymbolic AI tasks.
We introduce DSceneKG -- a suite of knowledge graphs of driving scenes built from real-world, high-quality scenes from open autonomous driving datasets.
arXiv Detail & Related papers (2024-11-05T16:15:33Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Towards Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations [15.836913530330786]
The qualitative explainable graph (QXG) is a unified symbolic and qualitative representation for scene understanding in urban mobility.
QXG can be constructed in real-time, making it a versatile tool for in-vehicle explanations across various sensor types.
These explanations can serve diverse purposes, from informing passengers and trustworthy users to enabling post-hoc analysis of prior behaviors.
arXiv Detail & Related papers (2024-03-25T16:19:33Z)
- Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations [15.836913530330786]
We present the Qualitative Explainable Graph (QXG), a unified symbolic and qualitative representation for scene understanding in urban mobility.
QXG enables the interpretation of an automated vehicle's environment using sensor data and machine learning models.
It can be incrementally constructed in real-time, making it a versatile tool for in-vehicle explanations and real-time decision-making.
arXiv Detail & Related papers (2024-01-29T11:20:19Z)
- Exploring the Potential of Multi-Modal AI for Driving Hazard Prediction [18.285227911703977]
We formulate it as a task of anticipating impending accidents using a single input image captured by car dashcams.
The problem requires predicting and reasoning about future events based on uncertain observations.
To enable research in this understudied area, a new dataset named the DHPR dataset is created.
arXiv Detail & Related papers (2023-10-07T03:16:30Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance comparable to, or even exceeding, fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Utilizing Background Knowledge for Robust Reasoning over Traffic Situations [63.45021731775964]
We focus on a complementary research aspect of Intelligent Transportation: traffic understanding.
We scope our study to text-based methods and datasets given the abundant commonsense knowledge.
We adopt three knowledge-driven approaches for zero-shot QA over traffic situations.
arXiv Detail & Related papers (2022-12-04T09:17:24Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, one frame is collected every ten seconds, across 32 different cities under varied weather conditions, periods and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- Diverse Complexity Measures for Dataset Curation in Self-driving [80.55417232642124]
We propose a new data selection method that exploits a diverse set of criteria to quantify the interestingness of traffic scenes.
Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
arXiv Detail & Related papers (2021-01-16T23:45:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.