Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database
- URL: http://arxiv.org/abs/2011.08512v1
- Date: Tue, 17 Nov 2020 08:55:14 GMT
- Title: Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database
- Authors: Sean McGregor
- Abstract summary: The AI Incident Database is an incident collection initiated by an industrial/non-profit cooperative to enable AI incident avoidance and mitigation.
The database supports a variety of research and development use cases with faceted and full text search on more than 1,000 incident reports archived to date.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Mature industrial sectors (e.g., aviation) collect their real world failures
in incident databases to inform safety improvements. Intelligent systems
currently cause real world harms without a collective memory of their failings.
As a result, companies repeatedly make the same mistakes in the design,
development, and deployment of intelligent systems. A collection of intelligent
system failures experienced in the real world (i.e., incidents) is needed to
ensure intelligent systems benefit people and society. The AI Incident Database
is an incident collection initiated by an industrial/non-profit cooperative to
enable AI incident avoidance and mitigation. The database supports a variety of
research and development use cases with faceted and full text search on more
than 1,000 incident reports archived to date.
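To make the abstract's "faceted and full text search" concrete, here is a minimal sketch of how facet filters can be combined with keyword matching over incident reports; the records, field names, and query below are illustrative assumptions, not the AI Incident Database's actual schema or API.

```python
# Minimal sketch of combining full-text keyword search with facet filters.
# All records, field names, and values here are hypothetical illustrations,
# not the AI Incident Database's actual schema or API.

INCIDENTS = [
    {"id": 1, "sector": "transport", "harm": "physical",
     "text": "Autonomous vehicle failed to detect a pedestrian."},
    {"id": 2, "sector": "finance", "harm": "economic",
     "text": "Credit scoring model denied loans to qualified applicants."},
    {"id": 3, "sector": "transport", "harm": "economic",
     "text": "Routing system sent drivers onto a flooded road."},
]

def search(keywords, **facets):
    """Return incidents containing every keyword (full text)
    and matching every facet constraint (exact field match)."""
    hits = []
    for incident in INCIDENTS:
        text = incident["text"].lower()
        if all(kw.lower() in text for kw in keywords) and \
           all(incident.get(field) == value for field, value in facets.items()):
            hits.append(incident)
    return hits

# A keyword query narrowed by one facet returns only incident 1:
print(search(["pedestrian"], sector="transport"))
```

A production system would back this with an inverted index and precomputed facet counts, but the query semantics (keyword matches intersected with facet filters) are the same.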
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Lessons for Editors of AI Incidents from the AI Incident Database [2.5165775267615205]
The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents.
This study reviews the AIID's dataset of 750+ AI incidents and two independent taxonomies applied to these incidents to identify common challenges to indexing and analyzing AI incidents.
We report mitigations to make incident processes more robust to uncertainty related to cause, extent of harm, severity, or technical details of implicated systems.
arXiv Detail & Related papers (2024-09-24T19:46:58Z)
- Concrete Problems in AI Safety, Revisited [1.4089652912597792]
As AI systems proliferate in society, the AI community is increasingly preoccupied with the concept of AI Safety.
Through an analysis of real-world cases of such incidents, we demonstrate that although the current vocabulary captures a range of the issues encountered in AI deployment, an expanded socio-technical framing will be required.
arXiv Detail & Related papers (2023-12-18T23:38:05Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Indexing AI Risks with Incidents, Issues, and Variants [5.8010446129208155]
A backlog of "issues" that do not meet the database's incident ingestion criteria has accumulated.
Similar to databases in aviation and computer security, the AIID proposes to adopt a two-tiered system for indexing AI incidents (a minimal data-model sketch follows this list).
arXiv Detail & Related papers (2022-11-18T17:32:19Z)
- A taxonomic system for failure cause analysis of open source AI incidents [6.85316573653194]
This work demonstrates how to apply expert knowledge on the population of incidents in the AI Incident Database (AIID) to infer potential and likely technical causative factors that contribute to reported failures and harms.
We present early work on a taxonomic system that covers a cascade of interrelated incident factors, from system goals (nearly always known) to methods / technologies (knowable in many cases) and technical failure causes (subject to expert analysis) of the implicated systems.
arXiv Detail & Related papers (2022-11-14T11:21:30Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Understanding and Avoiding AI Failures: A Practical Guide [0.6526824510982799]
We create a framework for understanding the risks associated with AI applications.
We also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI.
arXiv Detail & Related papers (2021-04-22T17:05:27Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
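As a rough illustration of the two-tiered indexing proposed in "Indexing AI Risks with Incidents, Issues, and Variants" above, the sketch below routes reports of realized harm to an incident tier and everything else to an issue backlog; the class names, fields, and ingestion rule are assumptions for illustration, not the AIID's actual implementation.

```python
# Hypothetical two-tier index: "incidents" (realized harms, possibly with
# "variants", i.e. recurrences of the same failure) versus "issues"
# (reports held in a backlog until ingestion criteria are met).
# Class names, fields, and the ingestion rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Report:
    title: str
    harmed_parties: list = field(default_factory=list)

@dataclass
class Incident(Report):
    variants: list = field(default_factory=list)  # recurrences of this incident

@dataclass
class Issue(Report):
    pass  # a risk report that has not (yet) met incident ingestion criteria

def ingest(report: Report, index: dict) -> None:
    """Route a report to the incident tier only if harm was realized."""
    tier = "incidents" if report.harmed_parties else "issues"
    index.setdefault(tier, []).append(report)

index = {}
ingest(Incident("Chatbot defamed a named individual", harmed_parties=["person X"]), index)
ingest(Issue("Deployed model lacks a documented safety evaluation"), index)
print({tier: [r.title for r in reports] for tier, reports in index.items()})
# -> {'incidents': ['Chatbot defamed a named individual'],
#     'issues': ['Deployed model lacks a documented safety evaluation']}
```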
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.