Position: A taxonomy for reporting and describing AI security incidents
- URL: http://arxiv.org/abs/2412.14855v2
- Date: Wed, 26 Feb 2025 07:59:51 GMT
- Title: Position: A taxonomy for reporting and describing AI security incidents
- Authors: Lukas Bieringer, Kevin Paeth, Jochen Stängler, Andreas Wespi, Alexandre Alahi, Kathrin Grosse
- Abstract summary: We argue that specific taxonomies are required to describe and report security incidents of AI systems. Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
- Score: 57.98317583163334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI usage becomes more ubiquitous, AI incident reporting is both increasingly practiced in industry and mandated by regulatory requirements. At the same time, it is established that AI systems are exploited in practice by a growing number of security threats. Yet organizations and practitioners lack the necessary guidance for describing AI security incidents. In this position paper, we argue that specific taxonomies are required to describe and report security incidents of AI systems. In other words, existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security. To demonstrate our position, we offer an AI security incident taxonomy and highlight relevant properties, such as machine readability and integration with existing frameworks. We derived this proposal from interviews with experts, aiming for standardized reporting of AI security incidents that meets the requirements of affected stakeholder groups. We hope that this taxonomy sparks discussion and eventually allows the sharing of AI security incidents across organizations, enabling more secure AI.
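The abstract highlights machine readability as a key property of the proposed taxonomy. The listing does not reproduce the paper's actual schema, but a minimal sketch of what a machine-readable AI security incident record could look like is shown below; all field names and example values are illustrative assumptions, not the taxonomy proposed by the authors.

```python
import json
from dataclasses import dataclass, field, asdict

# Illustrative sketch only: the field names below are assumptions for
# demonstration, not the incident taxonomy defined in the paper.
@dataclass
class AISecurityIncident:
    incident_id: str
    date: str                       # ISO 8601 date of discovery
    attack_type: str                # e.g. "evasion", "poisoning", "model extraction"
    lifecycle_stage: str            # e.g. "training", "deployment"
    affected_component: str         # e.g. "model", "training data", "pipeline"
    impact: str                     # free-text description of the observed effect
    mitigations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the record so it can be shared across organizations."""
        return json.dumps(asdict(self), indent=2)

incident = AISecurityIncident(
    incident_id="INC-2025-001",
    date="2025-02-26",
    attack_type="evasion",
    lifecycle_stage="deployment",
    affected_component="model",
    impact="misclassification of adversarially perturbed inputs",
    mitigations=["adversarial training"],
)
print(incident.to_json())
```

Serializing to a common structured format is one way such a taxonomy could integrate with existing incident-reporting frameworks, since the same record can be validated, aggregated, and exchanged programmatically.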
Related papers
- AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems.
Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'.
The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z) - In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z) - The BIG Argument for AI Safety Cases [4.0675753909100445]
The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality.
It is balanced by addressing safety alongside other critical ethical issues such as privacy and equity.
It is integrated by bringing together the social, ethical and technical aspects of safety assurance in a way that is traceable and accountable.
arXiv Detail & Related papers (2025-03-12T11:33:28Z) - Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems [0.0]
AI has emerged as a key technology, driving advancements across a range of applications.
The challenge of assuring safety in systems that incorporate AI components is substantial.
We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
arXiv Detail & Related papers (2024-12-18T16:38:16Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act [2.1897070577406734]
Despite its importance, there is a lack of standards and guidelines to assist with drawing up AI and risk documentation aligned with the AI Act.
We propose AI Cards as a novel holistic framework for representing a given intended use of an AI system.
arXiv Detail & Related papers (2024-06-26T09:51:49Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Safety Cases: How to Justify the Safety of Advanced AI Systems [5.097102520834254]
As AI systems become more advanced, companies and regulators will make difficult decisions about whether it is safe to train and deploy them.
We propose a framework for organizing a safety case and discuss four categories of arguments to justify safety.
We evaluate concrete examples of arguments in each category and outline how arguments could be combined to justify that AI systems are safe to deploy.
arXiv Detail & Related papers (2024-03-15T16:53:13Z) - Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications [0.4665186371356556]
In July 2022, the Center for Security and Emerging Technology at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities.
Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation.
arXiv Detail & Related papers (2023-05-23T22:27:53Z) - Adversarial AI in Insurance: Pervasiveness and Resilience [0.0]
We study Adversarial Attacks, which consist of the creation of modified input data to deceive an AI system and produce false outputs.
We discuss defence methods and precautionary systems, considering that they can involve few-shot and zero-shot multilabelling.
A related topic, with growing interest, is the validation and verification of systems incorporating AI and ML components.
arXiv Detail & Related papers (2023-01-17T08:49:54Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - Vulnerabilities of Connectionist AI Applications: Evaluation and Defence [0.0]
This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity.
A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature.
The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains.
arXiv Detail & Related papers (2020-03-18T12:33:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.