Improving Explainable Object-induced Model through Uncertainty for Automated Vehicles
- URL: http://arxiv.org/abs/2402.15572v1
- Date: Fri, 23 Feb 2024 19:14:57 GMT
- Title: Improving Explainable Object-induced Model through Uncertainty for Automated Vehicles
- Authors: Shihong Ling, Yue Wan, Xiaowei Jia, Na Du
- Abstract summary: Recent explainable automated vehicle (AV) models neglect crucial information related to inherent uncertainties while providing explanations for actions.
This study builds upon the "object-induced" model approach that prioritizes the role of objects in scenes for decision-making.
We also explore several advanced training strategies guided by uncertainty, including uncertainty-guided data reweighting and augmentation.
- Score: 13.514721609660521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid evolution of automated vehicles (AVs) has the potential to provide
safer, more efficient, and more comfortable travel options. However, these systems
face challenges regarding reliability in complex driving scenarios. Recent
explainable AV architectures neglect crucial information related to inherent
uncertainties while providing explanations for actions. To overcome such
challenges, our study builds upon the "object-induced" model approach that
prioritizes the role of objects in scenes for decision-making and integrates
uncertainty assessment into the decision-making process using an evidential
deep learning paradigm with a Beta prior. Additionally, we explore several
advanced training strategies guided by uncertainty, including
uncertainty-guided data reweighting and augmentation. Leveraging the BDD-OIA
dataset, our findings underscore that the model, through these enhancements,
not only offers a clearer comprehension of AV decisions and their underlying
reasoning but also surpasses existing baselines across a broad range of
scenarios.
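The evidential approach described in the abstract can be illustrated with a short sketch. The function names, the softplus mapping from logits to evidence, and the linear reweighting scheme below are illustrative assumptions, not the authors' published implementation:

```python
import math

def beta_evidential_head(pos_logit, neg_logit):
    """Map raw evidence logits for one action to a Beta(alpha, beta) opinion."""
    alpha = math.log1p(math.exp(pos_logit)) + 1.0   # softplus keeps evidence >= 0
    beta = math.log1p(math.exp(neg_logit)) + 1.0    # +1 encodes the uniform Beta prior
    prob = alpha / (alpha + beta)                   # predictive mean of the Beta posterior
    uncertainty = 2.0 / (alpha + beta)              # vacuity in [0, 1]; 1 = no evidence
    return prob, uncertainty

def reweighted_loss(per_sample_losses, uncertainties, lam=1.0):
    """Uncertainty-guided reweighting: emphasize high-uncertainty samples."""
    weights = [1.0 + lam * u for u in uncertainties]
    total = sum(w * l for w, l in zip(weights, per_sample_losses))
    return total / len(per_sample_losses)
```

Under this sketch, a sample with little accumulated evidence (alpha + beta near 2) has vacuity near 1 and is weighted more heavily during training, which is one way to realize the uncertainty-guided reweighting the abstract mentions.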
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- An Explainable Ensemble-based Intrusion Detection System for Software-Defined Vehicle Ad-hoc Networks [0.0]
In this study, we explore the detection of cyber threats in vehicle networks through ensemble-based machine learning.
We propose a model that uses Random Forest and CatBoost as the main investigators, with Logistic Regression then reasoning over their outputs to make a final decision.
We observe that our approach improves classification accuracy, and results in fewer misclassifications compared to previous works.
arXiv Detail & Related papers (2023-12-08T10:39:18Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries and have contributed to over 50 million deaths as of 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- Safety-Critical Scenario Generation Via Reinforcement Learning Based Editing [20.99962858782196]
We propose a deep reinforcement learning approach that generates safety-critical scenarios by sequential editing.
Our framework employs a reward function consisting of both risk and plausibility objectives.
Our evaluation demonstrates that the proposed method generates safety-critical scenarios of higher quality compared with previous approaches.
arXiv Detail & Related papers (2023-06-25T05:15:25Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show equivalent or even more impressive performance compared to fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception [1.7616042687330642]
We present additional architectural patterns for handling uncertainty estimation.
We evaluate the four patterns qualitatively and quantitatively with respect to safety and performance gains.
We conclude that the consideration of context information of the driving situation makes it possible to accept more or less uncertainty depending on the inherent risk of the situation.
arXiv Detail & Related papers (2022-06-14T13:31:36Z)
- Important Object Identification with Semi-Supervised Learning for Autonomous Driving [37.654878298744855]
We propose a novel approach for important object identification in egocentric driving scenarios.
We present a semi-supervised learning pipeline to enable the model to learn from unlimited unlabeled data.
Our approach also outperforms rule-based baselines by a large margin.
arXiv Detail & Related papers (2022-03-05T01:23:13Z)
- UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning Leveraging Planning [1.1339580074756188]
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data.
Self-driving vehicles (SDVs) can learn a policy that potentially even outperforms the behavior recorded in the sub-optimal data set.
This motivates the use of model-based offline RL approaches, which leverage planning.
arXiv Detail & Related papers (2021-11-22T10:37:52Z)
- Generalizing Decision Making for Automated Driving with an Invariant Environment Representation using Deep Reinforcement Learning [55.41644538483948]
Current approaches either do not generalize well beyond the training data or are not capable of considering a variable number of traffic participants.
We propose an invariant environment representation from the perspective of the ego vehicle.
We show that the agents generalize successfully to unseen scenarios thanks to this abstraction.
arXiv Detail & Related papers (2021-02-12T20:37:29Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.