Reliability Analysis of Artificial Intelligence Systems Using Recurrent
Events Data from Autonomous Vehicles
- URL: http://arxiv.org/abs/2102.01740v1
- Date: Tue, 2 Feb 2021 20:25:23 GMT
- Title: Reliability Analysis of Artificial Intelligence Systems Using Recurrent
Events Data from Autonomous Vehicles
- Authors: Yili Hong and Jie Min and Caleb B. King and William Q. Meeker
- Abstract summary: We use recurrent disengagement events as a representation of the reliability of the AI system in autonomous vehicles.
We propose a new nonparametric model based on monotonic splines to describe the event process.
- Score: 2.7515565752659645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) systems have become increasingly common and the
trend will continue. Examples of AI systems include autonomous vehicles (AV),
computer vision, natural language processing, and AI medical experts. To allow
for safe and effective deployment of AI systems, the reliability of such
systems needs to be assessed. Traditionally, reliability assessment is based on
reliability test data and the subsequent statistical modeling and analysis. The
availability of reliability data for AI systems, however, is limited because
such data are typically sensitive and proprietary. The California Department of
Motor Vehicles (DMV) oversees and regulates an AV testing program, in which
many AV manufacturers are conducting AV road tests. Manufacturers participating
in the program are required to report recurrent disengagement events to
California DMV. This information is being made available to the public. In this
paper, we use recurrent disengagement events as a representation of the
reliability of the AI system in AV, and propose a statistical framework for
modeling and analyzing the recurrent events data from AV driving tests. We use
traditional parametric models in software reliability and propose a new
nonparametric model based on monotonic splines to describe the event process.
We develop inference procedures for selecting the best models, quantifying
uncertainty, and testing heterogeneity in the event process. We then analyze
the recurrent events data from four AV manufacturers, and make inferences on
the reliability of the AI systems in AV. We also describe how the proposed
analysis can be applied to assess the reliability of other AI systems.
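To make the modeling concrete, below is a minimal sketch (not the authors' code) of fitting one common parametric choice, a nonhomogeneous Poisson process with a power-law intensity, to recurrent disengagement times by maximum likelihood; the event times, observation window, and mileage units are made-up placeholders, and the paper itself considers several parametric software reliability models plus a monotonic-spline alternative.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical disengagement times (thousands of autonomous miles) for one
# manufacturer, observed over a window of tau thousand miles. Placeholder data.
event_times = np.array([1.2, 3.5, 4.1, 7.8, 9.0, 12.3, 15.6])
tau = 20.0

def neg_log_lik(params):
    """Negative log-likelihood of an NHPP with power-law intensity
    lambda(t) = (beta/eta) * (t/eta)**(beta - 1), so Lambda(t) = (t/eta)**beta."""
    eta, beta = np.exp(params)  # optimize on the log scale to keep parameters positive
    log_intensity = np.log(beta / eta) + (beta - 1.0) * np.log(event_times / eta)
    return -(log_intensity.sum() - (tau / eta) ** beta)

fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
eta_hat, beta_hat = np.exp(fit.x)
print(f"eta = {eta_hat:.2f}, beta = {beta_hat:.2f}")
# beta_hat < 1 would indicate a decreasing event rate, i.e., improving reliability.
```

The nonparametric model proposed in the paper replaces the parametric cumulative intensity with a monotonic spline; comparing such fits and quantifying their uncertainty is what the paper's inference procedures address.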
Related papers
- XAI-based Feature Ensemble for Enhanced Anomaly Detection in Autonomous Driving Systems [1.3022753212679383]
This paper proposes a novel feature ensemble framework that integrates multiple Explainable AI (XAI) methods.
By fusing top features identified by these XAI methods across six diverse AI models, the framework creates a robust and comprehensive set of features critical for detecting anomalies.
Our technique demonstrates improved accuracy, robustness, and transparency of AI models, contributing to safer and more trustworthy autonomous driving systems.
arXiv Detail & Related papers (2024-10-20T14:34:48Z)
- Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks [55.15079732226397]
Embodied AI is a rapidly advancing field that bridges the gap between cyberspace and physical space.
In vehicular embodied AI networks (VEANET), embodied AI twins act as in-vehicle AI assistants to perform diverse tasks supporting autonomous driving.
arXiv Detail & Related papers (2024-10-02T02:20:42Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Planning Reliability Assurance Tests for Autonomous Vehicles [5.590179847470922]
One important application of AI technology is the development of autonomous vehicles (AV).
To plan for an assurance test, one needs to determine how many AVs need to be tested for how many miles and the standard for passing the test.
This paper develops statistical methods for planning AV reliability assurance tests based on recurrent events data.
arXiv Detail & Related papers (2023-11-30T20:48:20Z)
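As a rough, generic illustration of the planning calculation that entry describes (not necessarily the method developed in that paper), the sketch below computes the total test mileage needed to demonstrate a target disengagement rate at a given confidence level under a homogeneous Poisson assumption; the target rate, confidence, and allowed event counts are made-up values.

```python
from scipy.stats import chi2

def required_test_miles(target_rate_per_mile, confidence=0.90, allowed_events=0):
    """Total mileage needed so that observing at most `allowed_events`
    disengagements demonstrates, at the given confidence, that the true rate
    is below `target_rate_per_mile`, assuming a homogeneous Poisson process."""
    return chi2.ppf(confidence, 2 * (allowed_events + 1)) / (2 * target_rate_per_mile)

# Hypothetical target: at most one disengagement per 10,000 miles, 90% confidence.
print(required_test_miles(1 / 10_000))                    # ~23,000 miles if zero events allowed
print(required_test_miles(1 / 10_000, allowed_events=2))  # ~53,000 miles if up to two events allowed
```

Allowing more events during the test lengthens it but makes the pass/fail decision less sensitive to a single unlucky disengagement.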
- PEM: Perception Error Model for Virtual Testing of Autonomous Vehicles [20.300846259643137]
We define Perception Error Models (PEM) in this article.
PEM is a virtual simulation component that can enable the analysis of the impact of perception errors on AV safety.
We demonstrate the usefulness of PEM-based virtual tests, by evaluating camera, LiDAR, and camera-LiDAR setups.
arXiv Detail & Related papers (2023-02-23T10:54:36Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Statistical Perspectives on Reliability of Artificial Intelligence Systems [6.284088451820049]
We provide statistical perspectives on the reliability of AI systems.
We introduce a so-called SMART statistical framework for AI reliability research.
We discuss recent developments in modeling and analysis of AI reliability.
arXiv Detail & Related papers (2021-11-09T20:00:14Z)
- Disengagement Cause-and-Effect Relationships Extraction Using an NLP Pipeline [14.708195642446716]
The California Department of Motor Vehicles (CA DMV) has launched the Autonomous Vehicle Tester Program.
The program collects and releases reports related to Autonomous Vehicle Disengagement (AVD) from autonomous driving.
This study serves as a successful practice of deep transfer learning using pre-trained models and generates a consolidated disengagement database.
arXiv Detail & Related papers (2021-11-05T14:00:59Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)