Interplay of ISMS and AIMS in context of the EU AI Act
- URL: http://arxiv.org/abs/2412.18670v1
- Date: Tue, 24 Dec 2024 20:13:19 GMT
- Title: Interplay of ISMS and AIMS in context of the EU AI Act
- Authors: Jordan Pötsch
- Abstract summary: The EU AI Act (AIA) mandates the implementation of a risk management system (RMS) and a quality management system (QMS) for high-risk AI systems.
This paper examines the interfaces between an information security management system (ISMS) and an AI management system (AIMS).
Four new AI modules are introduced and proposed for inclusion in the BSI IT-Grundschutz framework to comprehensively ensure the security of AI systems.
- Abstract: The EU AI Act (AIA) mandates the implementation of a risk management system (RMS) and a quality management system (QMS) for high-risk AI systems. The ISO/IEC 42001 standard provides a foundation for fulfilling these requirements but does not cover all EU-specific regulatory stipulations. To enhance the implementation of the AIA in Germany, the Federal Office for Information Security (BSI) could introduce the national standard BSI 200-5, which specifies AIA requirements and integrates existing ISMS standards, such as ISO/IEC 27001. This paper examines the interfaces between an information security management system (ISMS) and an AI management system (AIMS), demonstrating that incorporating existing ISMS controls with specific AI extensions presents an effective strategy for complying with Article 15 of the AIA. Four new AI modules are introduced and proposed for inclusion in the BSI IT-Grundschutz framework to comprehensively ensure the security of AI systems. Additionally, an approach for adapting BSI's qualification and certification systems is outlined to ensure that expertise in secure AI handling is continuously developed. Finally, the paper discusses how the BSI could bridge international standards and the specific requirements of the AIA through the nationalization of ISO/IEC 42001, creating synergies and bolstering the competitiveness of the German AI landscape.
Related papers
- GPAI Evaluations Standards Taskforce: Towards Effective AI Governance
General-purpose AI evaluations have been proposed as a promising way of identifying and mitigating systemic risks posed by AI development and deployment.
No standards exist to date to promote their quality or legitimacy.
We propose an EU GPAI Evaluation Standards Taskforce to be housed within the bodies established by the EU AI Act.
arXiv Detail & Related papers (2024-11-21T03:14:31Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Enhancing Productivity with AI During the Development of an ISMS: Case Kempower
This paper discusses how Kempower, a Finnish company, has effectively used generative AI to create and implement an ISMS.
This research studies how the use of generative AI can enhance the process of creating an ISMS.
arXiv Detail & Related papers (2024-09-26T20:37:31Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Design of a Quality Management System based on the EU Artificial Intelligence Act
The EU AI Act mandates that providers and deployers of high-risk AI systems establish a quality management system (QMS)
This paper introduces a new design concept and prototype for a QMS as a microservice Software as a Service web application.
arXiv Detail & Related papers (2024-08-08T12:14:02Z)
- AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies
AIR-Bench 2024 is the first AI safety benchmark aligned with emerging government regulations and company policies.
It decomposes 8 government regulations and 16 company policies into a four-tiered safety taxonomy with granular risk categories in the lowest tier.
We evaluate leading language models on AIR-Bench 2024, uncovering insights into their alignment with specified safety concerns.
arXiv Detail & Related papers (2024-07-11T21:16:48Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components of GS AI (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Navigating the EU AI Act: A Methodological Approach to Compliance for Safety-critical Products
This paper presents a methodology for interpreting the EU AI Act requirements for high-risk AI systems.
We first propose an extended product quality model for AI systems, incorporating attributes relevant to the Act that are not covered by current quality models.
We then propose a contract-based approach to derive technical requirements at the stakeholder level.
arXiv Detail & Related papers (2024-03-25T14:32:18Z)
- International Institutions for Advanced AI
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z)
- Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation
We describe and discuss the two primary enforcement mechanisms proposed in the European Artificial Intelligence Act.
We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing.
arXiv Detail & Related papers (2021-11-09T11:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.