Standardised schema and taxonomy for AI incident databases in critical digital infrastructure
- URL: http://arxiv.org/abs/2501.17037v1
- Date: Tue, 28 Jan 2025 15:59:01 GMT
- Title: Standardised schema and taxonomy for AI incident databases in critical digital infrastructure
- Authors: Avinash Agarwal, Manisha J. Nene
- Abstract summary: The rapid deployment of Artificial Intelligence in critical digital infrastructure introduces significant risks.
Existing databases lack the granularity as well as the standardized structure required for consistent data collection and analysis.
This work proposes a standardized schema and taxonomy for AI incident databases, enabling detailed and structured documentation of AI incidents across sectors.
- Score: 2.209921757303168
- Abstract: The rapid deployment of Artificial Intelligence (AI) in critical digital infrastructure introduces significant risks, necessitating a robust framework for systematically collecting AI incident data to prevent future incidents. Existing databases lack the granularity as well as the standardized structure required for consistent data collection and analysis, impeding effective incident management. This work proposes a standardized schema and taxonomy for AI incident databases, addressing these challenges by enabling detailed and structured documentation of AI incidents across sectors. Key contributions include developing a unified schema, introducing new fields such as incident severity, causes, and harms caused, and proposing a taxonomy for classifying AI incidents in critical digital infrastructure. The proposed solution facilitates more effective incident data collection and analysis, thus supporting evidence-based policymaking, enhancing industry safety measures, and promoting transparency. This work lays the foundation for a coordinated global response to AI incidents, ensuring trust, safety, and accountability in using AI across regions.
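To make the proposed structure concrete, below is a minimal sketch of what one standardized incident record could look like. The field names, enumerations, and example values are assumptions inferred from the abstract (severity, causes, harms caused, critical-infrastructure sector); the paper's actual schema and taxonomy may differ.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

# Hypothetical value sets inferred from the abstract; the paper's
# actual taxonomy may define different categories.
class Severity(Enum):
    NEGLIGIBLE = "negligible"
    MODERATE = "moderate"
    CRITICAL = "critical"

class Sector(Enum):
    TELECOM = "telecom"
    POWER_GRID = "power_grid"
    FINANCE = "finance"
    TRANSPORT = "transport"

@dataclass
class AIIncident:
    """One record in a standardized AI incident database."""
    incident_id: str
    title: str
    date: str                                          # ISO 8601, e.g. "2025-01-28"
    sector: Sector
    severity: Severity
    causes: List[str] = field(default_factory=list)    # e.g. ["distribution shift"]
    harms: List[str] = field(default_factory=list)     # e.g. ["service outage"]
    description: Optional[str] = None

# Example record with invented values:
incident = AIIncident(
    incident_id="INC-0001",
    title="Traffic-routing model misclassifies peak load",
    date="2025-01-28",
    sector=Sector.TELECOM,
    severity=Severity.MODERATE,
    causes=["distribution shift"],
    harms=["degraded quality of service"],
)
print(incident)
```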
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
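As a rough, hypothetical illustration of sensitivity-based detection (not the paper's method), the sketch below measures how strongly a stand-in safety score reacts to small random perturbations of a prompt embedding; inputs near the decision boundary react most strongly and could be flagged by a calibrated threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's scalar safety score on a prompt embedding.
# Any smooth scalar function works for this illustration.
W = rng.normal(size=64)

def safety_score(x: np.ndarray) -> float:
    return float(np.tanh(W @ x))

def sensitivity(x: np.ndarray, eps: float = 1e-2, n: int = 32) -> float:
    """Mean absolute change of the score under small random perturbations."""
    base = safety_score(x)
    deltas = rng.normal(scale=eps, size=(n, x.size))
    return float(np.mean([abs(safety_score(x + d) - base) for d in deltas]))

# A confidently safe prompt sits deep in the saturated region of the score;
# a borderline (jailbreak-like) prompt sits near the decision boundary.
benign = 3.0 * W / np.linalg.norm(W)
x = rng.normal(size=64)
borderline = x - (W @ x) / (W @ W) * W  # projected onto the boundary W @ x = 0

print(f"benign sensitivity:     {sensitivity(benign):.4f}")
print(f"borderline sensitivity: {sensitivity(borderline):.4f}")
# A detector could flag prompts whose sensitivity exceeds a calibrated threshold.
```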
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- A Survey on Vulnerability Prioritization: Taxonomy, Metrics, and Research Challenges [20.407534993667607]
Resource constraints necessitate effective vulnerability prioritization strategies.
This paper introduces a novel taxonomy that categorizes metrics into severity, exploitability, contextual factors, predictive indicators, and aggregation methods.
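As a purely illustrative sketch (the weights and field names below are assumptions, not values from the survey), aggregating these metric categories into a single priority score might look like this:

```python
from dataclasses import dataclass

@dataclass
class VulnMetrics:
    severity: float        # e.g. normalized CVSS base score in [0, 1]
    exploitability: float  # likelihood of active exploitation in [0, 1]
    context: float         # asset criticality in this deployment in [0, 1]
    prediction: float      # predictive indicator (e.g. EPSS-style) in [0, 1]

def priority(m: VulnMetrics, w=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Weighted-sum aggregation; the weights are illustrative only."""
    return (w[0] * m.severity + w[1] * m.exploitability
            + w[2] * m.context + w[3] * m.prediction)

print(priority(VulnMetrics(severity=0.9, exploitability=0.7,
                           context=0.8, prediction=0.5)))  # 0.78
```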
arXiv Detail & Related papers (2025-02-16T10:33:37Z)
- Safety at Scale: A Comprehensive Survey of Large Model Safety [299.801463557549]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z)
- Lessons for Editors of AI Incidents from the AI Incident Database [2.5165775267615205]
The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents.
This study reviews the AIID's dataset of 750+ AI incidents and two independent taxonomies applied to these incidents to identify common challenges in indexing and analyzing AI incidents.
We report mitigations to make incident processes more robust to uncertainty related to cause, extent of harm, severity, or technical details of implicated systems.
arXiv Detail & Related papers (2024-09-24T19:46:58Z)
- Integrative Approaches in Cybersecurity and AI [0.0]
We identify key trends, challenges, and future directions that hold the potential to revolutionize the way organizations protect, analyze, and leverage their data.
Our findings highlight the necessity of cross-disciplinary strategies that incorporate AI-driven automation, real-time threat detection, and advanced data analytics to build more resilient and adaptive security architectures.
arXiv Detail & Related papers (2024-08-12T01:37:06Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
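As a toy flavor of automated physical-risk screening (EARBench's actual pipeline is more elaborate; the hazard rules and plan format below are invented assumptions), a rule-based check over an agent's task plan might look like this:

```python
# Invented hazard rules for illustration only.
HAZARD_RULES = {
    "knife": "sharp-object handling near humans",
    "stove": "fire/heat source left unattended",
    "bleach": "toxic chemical exposure",
}

def assess_plan(steps: list[str]) -> list[tuple[int, str]]:
    """Return (step index, hazard description) for each risky plan step."""
    findings = []
    for i, step in enumerate(steps):
        for keyword, hazard in HAZARD_RULES.items():
            if keyword in step.lower():
                findings.append((i, hazard))
    return findings

plan = ["pick up the knife", "slice the bread", "turn on the stove", "serve"]
for idx, hazard in assess_plan(plan):
    print(f"step {idx}: potential risk - {hazard}")
```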
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities [1.3225694028747144]
We propose a Coordinated Flaw Disclosure (CFD) framework tailored to the complexities of machine learning (ML) issues.
Our framework introduces innovations such as extended model cards, dynamic scope expansion, an independent adjudication panel, and an automated verification process.
We argue that CFD could significantly enhance public trust in AI systems.
arXiv Detail & Related papers (2024-02-10T20:39:04Z)
- RANK: AI-assisted End-to-End Architecture for Detecting Persistent Attacks in Enterprise Networks [2.294014185517203]
We present an end-to-end AI-assisted architecture for detecting Advanced Persistent Threats (APTs).
The architecture is composed of four consecutive steps: 1) alert templating and merging, 2) alert graph construction, 3) alert graph partitioning into incidents, and 4) incident scoring and ordering.
Extensive results are provided, showing a three-order-of-magnitude reduction in the amount of data the analyst must review, along with novel extraction of incidents and security-oriented scoring of the extracted incidents.
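A compressed, hypothetical sketch of that four-stage flow (not the authors' implementation) could look like the following, with host-sharing as an assumed correlation rule and connected components as a naive stand-in for the paper's graph partitioning:

```python
import networkx as nx  # assumed dependency for the graph steps

# Invented, pre-merged alerts standing in for stage 1 (templating and merging).
alerts = [
    {"id": 1, "host": "srv-1", "type": "beaconing"},
    {"id": 2, "host": "srv-1", "type": "priv-escalation"},
    {"id": 3, "host": "wks-7", "type": "phishing"},
]

# Stage 2: alert graph; here, alerts sharing a host are assumed correlated.
g = nx.Graph()
g.add_nodes_from(a["id"] for a in alerts)
for a in alerts:
    for b in alerts:
        if a["id"] < b["id"] and a["host"] == b["host"]:
            g.add_edge(a["id"], b["id"])

# Stage 3: partition the graph into incidents (connected components).
incidents = [sorted(c) for c in nx.connected_components(g)]

# Stage 4: score and order; larger multi-stage incidents rank first here.
incidents.sort(key=len, reverse=True)
print(incidents)  # [[1, 2], [3]]
```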
arXiv Detail & Related papers (2021-01-06T15:59:51Z)
- Predicting Themes within Complex Unstructured Texts: A Case Study on Safeguarding Reports [66.39150945184683]
We focus on the problem of automatically identifying the main themes in a safeguarding report using supervised classification approaches.
Our results show the potential of deep learning models to simulate subject-expert behaviour even for complex tasks with limited labelled data.
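As a minimal supervised baseline for this task (the paper itself evaluates deep models; the texts and theme labels below are invented placeholders), a theme classifier sketch might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder report snippets and theme labels; real safeguarding data
# is sensitive and far larger.
texts = [
    "concerns raised about unsupervised contact",
    "inadequate record keeping in the case file",
    "delays in escalating the referral",
    "missing signatures on consent records",
]
themes = ["contact", "records", "escalation", "records"]

# TF-IDF features plus a linear classifier as a simple supervised baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, themes)

# Likely 'escalation' here, given the overlapping vocabulary.
print(clf.predict(["the referral was not escalated in time"]))
```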
arXiv Detail & Related papers (2020-10-27T19:48:23Z)
- Data Mining with Big Data in Intrusion Detection Systems: A Systematic Literature Review [68.15472610671748]
Cloud computing has become a powerful and indispensable technology for complex, high-performance, and scalable computation.
The rapid rate and volume of data creation has begun to pose significant challenges for data management and security.
The design and deployment of intrusion detection systems (IDS) in the big data setting has, therefore, become a topic of importance.
arXiv Detail & Related papers (2020-05-23T20:57:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.