A taxonomic system for failure cause analysis of open source AI
incidents
- URL: http://arxiv.org/abs/2211.07280v1
- Date: Mon, 14 Nov 2022 11:21:30 GMT
- Title: A taxonomic system for failure cause analysis of open source AI
incidents
- Authors: Nikiforos Pittaras, Sean McGregor
- Abstract summary: This work demonstrates how to apply expert knowledge on the population of incidents in the AI Incident Database (AIID) to infer potential and likely technical causative factors that contribute to reported failures and harms.
We present early work on a taxonomic system that covers a cascade of interrelated incident factors, from system goals (nearly always known) to methods / technologies (knowable in many cases) and technical failure causes (subject to expert analysis) of the implicated systems.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While certain industrial sectors (e.g., aviation) have a long history of
mandatory incident reporting complete with analytical findings, the practice of
artificial intelligence (AI) safety benefits from no such mandate and thus
analyses must be performed on publicly known "open source" AI incidents.
Although the exact causes of AI incidents are seldom known by outsiders, this
work demonstrates how to apply expert knowledge on the population of incidents
in the AI Incident Database (AIID) to infer the potential and likely technical
causative factors that contribute to reported failures and harms. We present
early work on a taxonomic system that covers a cascade of interrelated incident
factors, from system goals (nearly always known) to methods / technologies
(knowable in many cases) and technical failure causes (subject to expert
analysis) of the implicated systems. We pair this ontology structure with a
comprehensive classification workflow that leverages expert knowledge and
community feedback, resulting in taxonomic annotations grounded by incident
data and human expertise.
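To make the described cascade concrete, below is a minimal sketch (in Python) of how such a classification record might be represented. The class and field names (Annotation, IncidentClassification, Confidence) are illustrative assumptions, not the paper's or the AIID's actual schema; they only mirror the three levels described in the abstract: system goals (nearly always known), methods/technologies (knowable in many cases), and technical failure causes (subject to expert analysis).

```python
# Illustrative sketch only: a hypothetical data model for the three-level
# cascade of incident factors described in the abstract. Names and fields
# are assumptions, not the AIID taxonomy's real schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Confidence(Enum):
    KNOWN = "known"              # system goals: usually public knowledge
    INFERRED = "inferred"        # methods/technologies: knowable in many cases
    EXPERT = "expert_judgment"   # technical failure causes: expert analysis


@dataclass
class Annotation:
    value: str
    confidence: Confidence
    rationale: Optional[str] = None  # grounding note from the annotating expert


@dataclass
class IncidentClassification:
    incident_id: int
    system_goal: Annotation                                          # nearly always known
    methods_technologies: list[Annotation] = field(default_factory=list)
    failure_causes: list[Annotation] = field(default_factory=list)   # expert analysis


def start_classification(incident_id: int, goal: str) -> IncidentClassification:
    """Open a classification record; expert review and community feedback
    would then fill in the lower levels of the cascade."""
    return IncidentClassification(
        incident_id=incident_id,
        system_goal=Annotation(goal, Confidence.KNOWN),
    )


# Example usage with a hypothetical incident:
record = start_classification(42, "content recommendation")
record.methods_technologies.append(
    Annotation("collaborative filtering", Confidence.INFERRED)
)
record.failure_causes.append(
    Annotation("distributional shift", Confidence.EXPERT,
               rationale="inferred by expert review of public reports")
)
```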
Related papers
- From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted STPA on three case studies.
We find that key concepts and steps of conducting STPA readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z) - Lessons for Editors of AI Incidents from the AI Incident Database [2.5165775267615205]
The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents.
This study reviews the AIID's dataset of 750+ AI incidents and two independent taxonomies applied to these incidents to identify common challenges to indexing and analyzing AI incidents.
We report mitigations to make incident processes more robust to uncertainty related to cause, extent of harm, severity, or technical details of implicated systems.
arXiv Detail & Related papers (2024-09-24T19:46:58Z) - Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z) - A Survey on Failure Analysis and Fault Injection in AI Systems [28.30817443151044]
The complexity of AI systems has exposed their vulnerabilities, necessitating robust methods for failure analysis (FA) and fault injection (FI) to ensure resilience and reliability.
This study addresses that need with a detailed survey of existing FA and FI approaches across six layers of AI systems.
Our findings reveal a taxonomy of AI system failures, assess the capabilities of existing FI tools, and highlight discrepancies between real-world and simulated failures.
arXiv Detail & Related papers (2024-06-28T00:32:03Z) - Progressing from Anomaly Detection to Automated Log Labeling and
Pioneering Root Cause Analysis [53.24804865821692]
This study introduces a taxonomy for log anomalies and explores automated data labeling to mitigate labeling challenges.
The study envisions a future where root cause analysis follows anomaly detection, unraveling the underlying triggers of anomalies.
arXiv Detail & Related papers (2023-12-22T15:04:20Z) - A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers [3.4568218861862556]
This paper presents the Consequence-Mechanism-Risk framework to identify risks to workers from AI-mediated enterprise knowledge access systems.
We have drawn on wide-ranging literature detailing risks to workers, and categorised risks as being to worker value, power, and wellbeing.
Future work could apply this framework to other technological systems to promote the protection of workers and other groups.
arXiv Detail & Related papers (2023-12-08T17:05:40Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Liability regimes in the age of AI: a use-case driven analysis of the
burden of proof [1.7510020208193926]
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better.
But there are growing concerns about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights.
This paper presents three case studies, as well as the methodology used to reach them, that illustrate the difficulties these characteristics create for establishing the burden of proof.
arXiv Detail & Related papers (2022-11-03T13:55:36Z) - Mining Root Cause Knowledge from Cloud Service Incident Investigations
for AIOps [71.12026848664753]
Root Cause Analysis (RCA) of any service-disrupting incident is one of the most critical as well as complex tasks in IT processes.
In this work, we present ICA and the downstream Incident Search and Retrieval based RCA pipeline, built at Salesforce.
arXiv Detail & Related papers (2022-04-21T02:33:34Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Overcoming Failures of Imagination in AI Infused System Development and
Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.