Interplay of ISMS and AIMS in context of the EU AI Act
- URL: http://arxiv.org/abs/2412.18670v1
- Date: Tue, 24 Dec 2024 20:13:19 GMT
- Title: Interplay of ISMS and AIMS in context of the EU AI Act
- Authors: Jordan Pötsch
- Abstract summary: The EU AI Act (AIA) mandates the implementation of a risk management system (RMS) and a quality management system (QMS) for high-risk AI systems. This paper examines the interfaces between an information security management system (ISMS) and an AI management system (AIMS). Four new AI modules are introduced, proposed for inclusion in the BSI IT Grundschutz framework to comprehensively ensure the security of AI systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The EU AI Act (AIA) mandates the implementation of a risk management system (RMS) and a quality management system (QMS) for high-risk AI systems. The ISO/IEC 42001 standard provides a foundation for fulfilling these requirements but does not cover all EU-specific regulatory stipulations. To enhance the implementation of the AIA in Germany, the Federal Office for Information Security (BSI) could introduce the national standard BSI 200-5, which specifies AIA requirements and integrates existing ISMS standards, such as ISO/IEC 27001. This paper examines the interfaces between an information security management system (ISMS) and an AI management system (AIMS), demonstrating that incorporating existing ISMS controls with specific AI extensions presents an effective strategy for complying with Article 15 of the AIA. Four new AI modules are introduced, proposed for inclusion in the BSI IT Grundschutz framework to comprehensively ensure the security of AI systems. Additionally, an approach for adapting BSI's qualification and certification systems is outlined to ensure that expertise in secure AI handling is continuously developed. Finally, the paper discusses how the BSI could bridge international standards and the specific requirements of the AIA through the nationalization of ISO/IEC 42001, creating synergies and bolstering the competitiveness of the German AI landscape.
Related papers
- Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts
ISO standards aim to foster responsible development by embedding fairness, transparency, and risk management into AI systems.
Their effectiveness varies across diverse regulatory landscapes, from the EU's risk-based AI Act to China's stability-focused measures.
This paper introduces a novel Comparative Risk-Impact Assessment Framework to evaluate how well ISO standards address ethical risks.
arXiv Detail & Related papers (2025-04-22T00:44:20Z)
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Position: A taxonomy for reporting and describing AI security incidents
We argue that a specific taxonomy is required to describe and report security incidents of AI systems.
Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Design of a Quality Management System based on the EU Artificial Intelligence Act
The EU AI Act mandates that providers and deployers of high-risk AI systems establish a quality management system (QMS).
This paper introduces a new design concept and prototype for a QMS as a microservice Software as a Service web application.
arXiv Detail & Related papers (2024-08-08T12:14:02Z)
- AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies
AIR-Bench 2024 is the first AI safety benchmark aligned with emerging government regulations and company policies.
It decomposes 8 government regulations and 16 company policies into a four-tiered safety taxonomy with granular risk categories in the lowest tier.
We evaluate leading language models on AIR-Bench 2024, uncovering insights into their alignment with specified safety concerns.
arXiv Detail & Related papers (2024-07-11T21:16:48Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components of GS AI (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Navigating the EU AI Act: A Methodological Approach to Compliance for Safety-critical Products
This paper presents a methodology for interpreting the EU AI Act requirements for high-risk AI systems.
We first propose an extended product quality model for AI systems, incorporating attributes relevant to the Act not covered by current quality models.
We then propose a contract-based approach to derive technical requirements at the stakeholder level.
arXiv Detail & Related papers (2024-03-25T14:32:18Z)
- Towards an AI Accountability Policy
We examine how high-risk technologies have been successfully regulated at the national level.
We propose a tiered system of explainability and benchmarking requirements for commercial AI systems.
arXiv Detail & Related papers (2023-07-25T17:09:28Z)
- Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation
We describe and discuss the two primary enforcement mechanisms proposed in the European Artificial Intelligence Act.
We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing.
arXiv Detail & Related papers (2021-11-09T11:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.