Standardised schema and taxonomy for AI incident databases in critical digital infrastructure
- URL: http://arxiv.org/abs/2501.17037v1
- Date: Tue, 28 Jan 2025 15:59:01 GMT
- Title: Standardised schema and taxonomy for AI incident databases in critical digital infrastructure
- Authors: Avinash Agarwal, Manisha J. Nene
- Abstract summary: The rapid deployment of Artificial Intelligence in critical digital infrastructure introduces significant risks. Existing databases lack the granularity as well as the standardized structure required for consistent data collection and analysis. This work proposes a standardized schema and taxonomy for AI incident databases, enabling detailed and structured documentation of AI incidents across sectors.
- Score: 2.209921757303168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid deployment of Artificial Intelligence (AI) in critical digital infrastructure introduces significant risks, necessitating a robust framework for systematically collecting AI incident data to prevent future incidents. Existing databases lack the granularity as well as the standardized structure required for consistent data collection and analysis, impeding effective incident management. This work proposes a standardized schema and taxonomy for AI incident databases, addressing these challenges by enabling detailed and structured documentation of AI incidents across sectors. Key contributions include developing a unified schema, introducing new fields such as incident severity, causes, and harms caused, and proposing a taxonomy for classifying AI incidents in critical digital infrastructure. The proposed solution facilitates more effective incident data collection and analysis, thus supporting evidence-based policymaking, enhancing industry safety measures, and promoting transparency. This work lays the foundation for a coordinated global response to AI incidents, ensuring trust, safety, and accountability in using AI across regions.
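To make the kind of structured record the abstract envisions concrete, the sketch below shows one way the newly proposed fields (incident severity, causes, and harms caused) could be expressed as a typed record; all field names and example values here are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an AI incident record based on the fields the
# abstract mentions (severity, causes, harms caused); the paper's actual
# schema and field names may differ.
@dataclass
class AIIncidentRecord:
    incident_id: str                                        # unique identifier
    sector: str                                             # affected critical-infrastructure sector
    description: str                                        # free-text account of what happened
    severity: str                                           # e.g. "low" | "medium" | "high" | "critical"
    causes: List[str] = field(default_factory=list)         # contributing causes, e.g. "data drift"
    harms_caused: List[str] = field(default_factory=list)   # realized harms, e.g. "service outage"

# Example record with invented values.
incident = AIIncidentRecord(
    incident_id="CDI-2025-0001",
    sector="telecom",
    description="Traffic-routing model misclassified load, degrading service.",
    severity="high",
    causes=["data drift"],
    harms_caused=["service outage"],
)
```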
Related papers
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
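As a rough illustration of what such a standardized AI flaw report might contain, the sketch below models a report as a typed record; every field here is an assumption inferred from the summary above, not the format the paper proposes.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of a standardized AI flaw report; all fields are
# assumptions inspired by the summary above, not the paper's proposal.
@dataclass
class FlawReport:
    reporter: str                      # who found the flaw (may be a third party)
    system: str                        # affected GPAI system under test
    flaw_description: str              # what the flaw is and why it matters
    reproduction_steps: List[str] = field(default_factory=list)  # how to trigger it
    severity: str = "unrated"          # reporter's severity estimate
    disclosure_deadline_days: Optional[int] = 90  # rules-of-engagement window
```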
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
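A benchmark of this shape could be driven by a loop like the following sketch, which sends hazard-category prompts to a system under test and reports the fraction of safe responses per category; the callables and categories are placeholders, not AILuminate's actual methodology.

```python
# Toy harness in the spirit of a hazard benchmark: prompts are grouped by
# hazard category, and the fraction of safe responses is reported per category.
def evaluate(ai_system, prompts_by_category, is_safe_response):
    scores = {}
    for category, prompts in prompts_by_category.items():
        safe = sum(is_safe_response(ai_system(p)) for p in prompts)
        scores[category] = safe / len(prompts)
    return scores

# Example with stand-in callables; a real run would query the system under test.
scores = evaluate(
    ai_system=lambda prompt: "I can't help with that.",
    prompts_by_category={"weapons": ["example hazard prompt A", "example hazard prompt B"],
                         "fraud": ["example hazard prompt C"]},
    is_safe_response=lambda reply: "can't help" in reply,
)
```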
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts carrying jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
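The sketch below illustrates the flavor of sensitivity analysis for prompt screening on a toy safety classifier: it estimates how sharply a safety score changes under small input perturbations and flags high-sensitivity inputs. The classifier, threshold, and finite-difference scheme are stand-ins, not the paper's method.

```python
import numpy as np

# Toy sensitivity analysis: perturb an input embedding coordinate-wise and
# measure how strongly a safety score responds. Real systems would use model
# gradients; the score function here is an invented stand-in.
def sensitivity(score_fn, embedding, eps=1e-3):
    grad = np.zeros_like(embedding)
    for i in range(embedding.size):
        bumped = embedding.copy()
        bumped[i] += eps
        grad[i] = (score_fn(bumped) - score_fn(embedding)) / eps
    return np.linalg.norm(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=16)
score_fn = lambda e: float(1 / (1 + np.exp(-w @ e)))  # toy safety classifier
emb = rng.normal(size=16)
flagged = sensitivity(score_fn, emb) > 0.5  # threshold is an arbitrary example
```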
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- A Survey on Vulnerability Prioritization: Taxonomy, Metrics, and Research Challenges [20.407534993667607]
Resource constraints necessitate effective vulnerability prioritization strategies.
This paper introduces a novel taxonomy that categorizes metrics into severity, exploitability, contextual factors, predictive indicators, and aggregation methods.
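As a minimal sketch of how metrics from these categories might be aggregated into a single priority score, assuming invented metric names and weights rather than anything taken from the survey:

```python
# Illustrative weighted aggregation across the metric families named above;
# the metric names, values, and weights are invented for the example.
def priority_score(metrics, weights):
    return sum(weights[k] * metrics[k] for k in weights)

metrics = {"severity": 0.9, "exploitability": 0.6,
           "context": 0.4, "prediction": 0.7}
weights = {"severity": 0.4, "exploitability": 0.3,
           "context": 0.2, "prediction": 0.1}
score = priority_score(metrics, weights)  # 0.36 + 0.18 + 0.08 + 0.07 = 0.69
```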
arXiv Detail & Related papers (2025-02-16T10:33:37Z)
- Position: A taxonomy for reporting and describing AI security incidents [57.98317583163334]
We argue that specific taxonomies are required to describe and report security incidents of AI systems.
Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Establishing Minimum Elements for Effective Vulnerability Management in AI Software [4.067778725390327]
This paper discusses the minimum elements for AI vulnerability management and the establishment of an Artificial Intelligence Vulnerability Database (AIVD).
It presents standardized formats and protocols for disclosing, analyzing, cataloging, and documenting AI vulnerabilities.
arXiv Detail & Related papers (2024-11-18T06:22:20Z)
- Lessons for Editors of AI Incidents from the AI Incident Database [2.5165775267615205]
The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents.
This study reviews the AIID's dataset of 750+ AI incidents and two independent taxonomies applied to these incidents to identify common challenges to indexing and analyzing AI incidents.
We report mitigations to make incident processes more robust to uncertainty related to cause, extent of harm, severity, or technical details of implicated systems.
arXiv Detail & Related papers (2024-09-24T19:46:58Z)
- Integrative Approaches in Cybersecurity and AI [0.0]
We identify key trends, challenges, and future directions that hold the potential to revolutionize the way organizations protect, analyze, and leverage their data.
Our findings highlight the necessity of cross-disciplinary strategies that incorporate AI-driven automation, real-time threat detection, and advanced data analytics to build more resilient and adaptive security architectures.
arXiv Detail & Related papers (2024-08-12T01:37:06Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities [1.3225694028747144]
We propose a Coordinated Flaw Disclosure framework tailored to the complexities of machine learning (ML) issues.
Our framework introduces innovations such as extended model cards, dynamic scope expansion, an independent adjudication panel, and an automated verification process.
We argue that CFD could significantly enhance public trust in AI systems.
arXiv Detail & Related papers (2024-02-10T20:39:04Z)
- RANK: AI-assisted End-to-End Architecture for Detecting Persistent Attacks in Enterprise Networks [2.294014185517203]
We present an end-to-end AI-assisted architecture for detecting Advanced Persistent Threats (APTs).
The architecture is composed of four consecutive steps: 1) alert templating and merging, 2) alert graph construction, 3) alert graph partitioning into incidents, and 4) incident scoring and ordering.
Extensive results show a three-order-of-magnitude reduction in the amount of data an analyst must review, along with effective extraction of incidents and security-aware scoring of the extracted incidents.
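The four-stage pipeline could be wired together roughly as in the skeleton below; each stage is a stub standing in for the techniques the paper describes, not the actual implementation.

```python
# Skeleton of the four consecutive steps summarized above; every function
# body is a placeholder for illustration only.
def template_and_merge(raw_alerts):
    return raw_alerts  # 1) normalize alerts into templates, merge duplicates

def build_alert_graph(alerts):
    return {"nodes": alerts, "edges": []}  # 2) link related alerts into a graph

def partition_into_incidents(graph):
    return [graph["nodes"]]  # 3) split the graph into candidate incidents

def score_and_order(incidents):
    return sorted(incidents, key=len, reverse=True)  # 4) rank incidents for the analyst

def rank_pipeline(raw_alerts):
    graph = build_alert_graph(template_and_merge(raw_alerts))
    return score_and_order(partition_into_incidents(graph))
```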
arXiv Detail & Related papers (2021-01-06T15:59:51Z)
- Predicting Themes within Complex Unstructured Texts: A Case Study on Safeguarding Reports [66.39150945184683]
We focus on the problem of automatically identifying the main themes in a safeguarding report using supervised classification approaches.
Our results show the potential of deep learning models to simulate subject-expert behaviour even for complex tasks with limited labelled data.
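A minimal supervised baseline for this kind of theme classification might look like the sketch below, using TF-IDF features and logistic regression; the training examples and theme labels are invented for illustration, and the paper's deep learning models would replace this simple classifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy supervised theme classifier; the reports and labels are invented.
reports = ["Concern raised about unsupervised contact",
           "Report of missed medication"]
themes = ["contact", "neglect"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reports, themes)
print(clf.predict(["Medication was not administered on time"]))
```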
arXiv Detail & Related papers (2020-10-27T19:48:23Z)
- Data Mining with Big Data in Intrusion Detection Systems: A Systematic Literature Review [68.15472610671748]
Cloud computing has become a powerful and indispensable technology for complex, high-performance, and scalable computation.
The rapid rate and volume of data creation has begun to pose significant challenges for data management and security.
The design and deployment of intrusion detection systems (IDS) in the big data setting has, therefore, become a topic of importance.
arXiv Detail & Related papers (2020-05-23T20:57:12Z)