Robustness Verification for Knowledge-Based Logic of Risky Driving Scenes
- URL: http://arxiv.org/abs/2312.16364v1
- Date: Wed, 27 Dec 2023 00:13:51 GMT
- Title: Robustness Verification for Knowledge-Based Logic of Risky Driving Scenes
- Authors: Xia Wang, Anda Liang, Jonathan Sprinkle and Taylor T. Johnson
- Abstract summary: We collect 72 accident datasets from Data.gov and organize them by state.
We train Decision Tree and XGBoost models on each state's dataset, deriving accident judgment logic.
- Score: 8.388107085036571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many decision-making scenarios in modern life benefit from the decision
support of artificial intelligence algorithms, which follow a data-driven
philosophy and rely on automated programs or systems. However, crucial decisions
involving security, fairness, and privacy should incorporate more human knowledge
and principles to supervise such AI algorithms, steering them toward sounder
solutions that benefit society more effectively. In this work, we extract
knowledge-based logic that characterizes risky driving patterns from public
transportation accident datasets which, to the best of our knowledge, have not
previously been analyzed in detail. More importantly, this knowledge is critical
for recognizing traffic hazards and could supervise and improve AI models in
safety-critical systems. We then use automated verification methods to verify
the robustness of this logic. More specifically, we gather 72 accident datasets
from Data.gov and organize them by state. We then train Decision Tree and
XGBoost models on each state's dataset, deriving accident judgment logic.
Finally, we run robustness verification on these tree-based models under
multiple parameter combinations.
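To make the pipeline concrete, here is a minimal Python sketch of the per-state training step and of the local-robustness property being checked. All column names, file paths, and hyperparameters are illustrative assumptions, not the paper's actual schema, and the brute-force sampling probe below only illustrates the property that the paper checks with automated verification methods; it is not the authors' verification algorithm.

```python
import glob

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Assumed (hypothetical) schema: numeric features and a binary severity label.
FEATURES = ["speed_limit", "vehicle_count", "hour_of_day", "visibility"]
LABEL = "severe_accident"


def train_state_models(csv_path):
    """Train one Decision Tree and one XGBoost model on a single state's data."""
    df = pd.read_csv(csv_path)
    X = df[FEATURES].to_numpy(dtype=float)
    y = df[LABEL].to_numpy()
    dt = DecisionTreeClassifier(max_depth=5).fit(X, y)
    xgb = XGBClassifier(n_estimators=50, max_depth=4).fit(X, y)
    return dt, xgb


def is_locally_robust(model, x, epsilon, n_samples=1000, seed=0):
    """Brute-force probe of local robustness: sample points in the L-infinity
    ball of radius epsilon around x and check the prediction never flips.
    A formal verifier decides this exhaustively from the tree structure;
    random sampling can only falsify robustness, never prove it."""
    rng = np.random.default_rng(seed)
    base = model.predict(x.reshape(1, -1))[0]
    perturbed = x + rng.uniform(-epsilon, epsilon, size=(n_samples, x.size))
    return bool(np.all(model.predict(perturbed) == base))


if __name__ == "__main__":
    # Assumed layout: one CSV per state under data/by_state/.
    for path in sorted(glob.glob("data/by_state/*.csv")):
        dt, xgb = train_state_models(path)
        x0 = pd.read_csv(path)[FEATURES].to_numpy(dtype=float)[0]
        for eps in (0.01, 0.05, 0.1):  # "multiple parameter combinations"
            print(path, eps, is_locally_robust(dt, x0, eps),
                  is_locally_robust(xgb, x0, eps))
```

Note that robustness here is a property of a (model, input, epsilon) triple, which is why the paper sweeps multiple parameter combinations rather than reporting a single robustness number.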
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Improving Explainable Object-induced Model through Uncertainty for Automated Vehicles [13.514721609660521]
Recent explainable automated vehicle (AV) models neglect crucial information about inherent uncertainties when explaining their actions.
This study builds upon the "object-induced" model approach that prioritizes the role of objects in scenes for decision-making.
We also explore several advanced training strategies guided by uncertainty, including uncertainty-guided data reweighting and augmentation.
arXiv Detail & Related papers (2024-02-23T19:14:57Z)
- Inherent Diverse Redundant Safety Mechanisms for AI-based Software Elements in Automotive Applications [1.6495054381576084]
This paper explores the role and challenges of Artificial Intelligence (AI) algorithms in autonomous driving systems.
A primary concern relates to the ability (and necessity) of AI models to generalize beyond their initial training data.
This paper investigates the risk associated with overconfident AI models in safety-critical applications like autonomous driving.
arXiv Detail & Related papers (2024-02-13T04:15:26Z)
- From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions [1.1510009152620668]
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a well-established computational framework that assumes decisions emerge from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
arXiv Detail & Related papers (2023-08-29T11:27:22Z)
- A Study of Situational Reasoning for Traffic Understanding [63.45021731775964]
We devise three novel text-based tasks for situational reasoning in the traffic domain.
We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work.
We provide in-depth analyses of model performance on data partitions and examine model predictions categorically.
arXiv Detail & Related papers (2023-06-05T01:01:12Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance that matches or exceeds fully supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is broad consensus on the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- On Safety Assessment of Artificial Intelligence [0.0]
We show that many models of artificial intelligence, in particular machine learning, are statistical models.
Part of the dangerous-random-failure budget for the relevant safety integrity level must be allocated to the probabilistic faulty behavior of the AI system.
We propose a research challenge that may be decisive for the use of AI in safety-related systems.
arXiv Detail & Related papers (2020-02-29T14:05:28Z)