Practical Application and Limitations of AI Certification Catalogues in the Light of the AI Act
- URL: http://arxiv.org/abs/2502.10398v2
- Date: Tue, 18 Feb 2025 07:59:54 GMT
- Title: Practical Application and Limitations of AI Certification Catalogues in the Light of the AI Act
- Authors: Gregor Autischer, Kerstin Waxnegger, Dominik Kowald
- Abstract summary: This work focuses on the practical application and limitations of existing certification catalogues in the light of the AI Act. We use the AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model's compliance with certification standards. We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation.
- Score: 0.1433758865948252
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work-in-progress, we investigate the certification of AI systems, focusing on the practical application and limitations of existing certification catalogues in the light of the AI Act by attempting to certify a publicly available AI system. We aim to evaluate how well current approaches work to effectively certify an AI system, and how publicly accessible AI systems that might not be actively maintained or initially intended for certification can be selected and used for a sample certification process. Our methodology involves leveraging the Fraunhofer AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model's compliance with certification standards. We find that while the catalogue effectively structures the evaluation process, it can also be cumbersome and time-consuming to use. We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation. Finally, we identify some limitations of the certification catalogues used and propose ideas on how to streamline the certification process.
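To make the catalogue-driven methodology concrete, here is a minimal Python sketch of how such an assessment could be structured: each catalogue criterion becomes a named check against the system's artefacts, and the report records pass/fail per question. The criteria and field names are hypothetical illustrations, not the Fraunhofer AI Assessment Catalogue's actual contents.

```python
# Illustrative sketch of a catalogue-driven assessment: each criterion is a
# named check applied to the system's artefacts (model, docs, data). The
# criterion names below are hypothetical, not the Fraunhofer catalogue's.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    dimension: str                 # e.g. "transparency", "maintenance"
    question: str                  # the catalogue question being answered
    check: Callable[[dict], bool]  # returns True if the evidence satisfies it

def assess(artifacts: dict, catalogue: list[Criterion]) -> dict:
    """Walk the catalogue and record pass/fail per criterion."""
    return {c.question: c.check(artifacts) for c in catalogue}

catalogue = [
    Criterion("transparency", "Is complete system documentation available?",
              lambda a: a.get("documentation") is not None),
    Criterion("maintenance", "Is there an active development team?",
              lambda a: a.get("maintainers", 0) > 0),
]

# The paper's key failure modes: missing documentation, no active team.
report = assess({"documentation": None, "maintainers": 0}, catalogue)
print(report)  # both criteria evaluate to False
```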
Related papers
- Using Formal Models, Safety Shields and Certified Control to Validate AI-Based Train Systems [0.5249805590164903]
The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains.
We pursue a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker.
This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker.
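A minimal sketch of the second layer's idea, assuming hypothetical thresholds and field names: a runtime checker validates each perception output against certificate conditions before the control layer acts on it, falling back to a safe action when the certificate fails.

```python
# Hedged sketch of a runtime certificate checker for perception outputs.
# The certificate conditions, thresholds, and actions are illustrative,
# not the KI-LOK project's actual checker.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    distance_m: float

def certificate_holds(det: Detection) -> bool:
    """Conditions the perception output must satisfy to be trusted."""
    return det.confidence >= 0.9 and det.distance_m >= 0.0

def control_step(det: Detection) -> str:
    if not certificate_holds(det):
        return "BRAKE"  # certificate failed: fall back to the safe action
    return "SLOW" if det.label == "obstacle" else "PROCEED"

print(control_step(Detection("signal", 0.95, 120.0)))  # PROCEED
print(control_step(Detection("signal", 0.40, 120.0)))  # BRAKE
```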
arXiv Detail & Related papers (2024-11-21T18:09:04Z) - Towards Certified Unlearning for Deep Neural Networks [50.816473152067104]
Certified unlearning has been extensively studied in convex machine learning models. We propose several techniques to bridge the gap between certified unlearning and deep neural networks (DNNs).
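For context, here is a minimal numpy sketch of the classical convex certified-removal update (a single Newton step on the leave-one-out objective) that this line of work extends toward DNNs. This is the convex baseline, not the paper's DNN techniques, and the certified guarantee additionally requires objective perturbation/noise, omitted here.

```python
# Convex certified-removal sketch for L2-regularised logistic regression:
# w_minus = w + H_{-i}^{-1} * grad_loss(i; w), one Newton step on the
# leave-one-out objective. Data and the "trained" w are placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_one(w, X, y, idx, lam):
    """Approximately remove sample idx from the minimiser w."""
    keep = np.arange(len(y)) != idx
    Xk, x, yi = X[keep], X[idx], y[idx]
    # Hessian of the remaining objective at w (sigma(z)(1-sigma(z)) weights)
    p = sigmoid(Xk @ w)
    H = (Xk * (p * (1 - p))[:, None]).T @ Xk + lam * np.eye(len(w))
    # Gradient of the removed point's loss at w
    g = -yi * sigmoid(-yi * (x @ w)) * x
    return w + np.linalg.solve(H, g)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(rng.normal(size=100))
w = np.zeros(5)  # stand-in for the trained minimiser
w_minus = unlearn_one(w, X, y, idx=0, lam=1.0)
```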
arXiv Detail & Related papers (2024-08-01T21:22:10Z) - The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis [4.119574613934122]
The black-box nature of machine learning models limits the use of conventional approaches to certifying complex technical systems.
As a potential solution, methods that provide insight into this black box could be used.
We find that XAI methods can be a helpful asset for safe AI development, but since certification relies on comprehensive and correct information about technical systems, their impact is expected to be limited.
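As a concrete example of the kind of evidence such methods can contribute, here is a short sketch using permutation importance, one common XAI technique; the model and dataset are placeholders, not the paper's expert-based analysis.

```python
# Sketch: permutation importance as documentation evidence about which
# inputs drive a black-box model. Model and data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```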
arXiv Detail & Related papers (2024-07-22T16:08:21Z) - The Foundations of Computational Management: A Systematic Approach to Task Automation for the Integration of Artificial Intelligence into Existing Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three step-by-step procedures for beginning to implement AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z) - A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
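A minimal sketch of the certificate idea for the special case of a linear system, where the classical quadratic Lyapunov certificate can be synthesised in closed form; neural certificates generalise this to nonlinear dynamics. The example system is hypothetical.

```python
# Certificate sketch for dx/dt = A x: solve A^T P + P A = -Q for P, then
# V(x) = x^T P x certifies stability if P is positive definite.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])  # a stable example system
Q = np.eye(2)
# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so passing a = A.T yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

eigs = np.linalg.eigvalsh(P)
print("certificate holds:", bool(np.all(eigs > 0)))  # True for this A
```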
arXiv Detail & Related papers (2023-09-12T09:37:26Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
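A hedged sketch of the Random Forest variant's core idea: classify which enrolled driver produced a window of driving features. The features and data below are synthetic placeholders, not the paper's CAN-bus dataset.

```python
# Driver authentication as multi-class classification over driving windows.
# Feature semantics (speed, throttle, braking, steering) are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Per-window features, shifted per driver so the classes are separable.
X = rng.normal(size=(600, 4)) + np.repeat(np.arange(3), 200)[:, None] * 0.5
y = np.repeat([0, 1, 2], 200)  # three enrolled drivers

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("authentication accuracy:", clf.score(X_te, y_te))
```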
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - Rethinking Certification for Trustworthy Machine Learning-Based Applications [3.886429361348165]
Machine Learning (ML) is increasingly used to implement advanced applications with non-deterministic behavior.
Existing certification schemes are not immediately applicable to non-deterministic applications built on ML models.
This article analyzes the challenges and deficiencies of current certification schemes, discusses open research issues, and proposes a first certification scheme for ML-based applications.
arXiv Detail & Related papers (2023-05-26T11:06:28Z) - Developing an AI-enabled IIoT platform -- Lessons learned from early use case validation [47.37985501848305]
We introduce the design of this platform and discuss an early evaluation in terms of a demonstrator for AI-enabled visual quality inspection.
This is complemented by insights and lessons learned during this early evaluation activity.
arXiv Detail & Related papers (2022-07-10T18:51:12Z) - Certifiable Artificial Intelligence Through Data Fusion [7.103626867766158]
This paper reviews and raises concerns about adopting, fielding, and maintaining artificial intelligence (AI) systems.
A notional use case is presented with image data fusion to support AI object recognition certifiability considering precision versus distance.
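A small illustrative sketch of the precision-versus-distance analysis on synthetic detection records: bin detections by range and compute precision per bin. The data and the distance-dependent error rate are invented for illustration.

```python
# Precision as a function of object distance, over synthetic detections.
import numpy as np

rng = np.random.default_rng(1)
distance = rng.uniform(0, 100, size=1000)  # metres to detected object
# Assume true positives become rarer with distance (illustrative only).
is_tp = rng.random(1000) < np.clip(1.0 - distance / 120, 0.1, 1.0)

bins = np.arange(0, 101, 20)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (distance >= lo) & (distance < hi)
    prec = is_tp[mask].mean() if mask.any() else float("nan")
    print(f"{lo:3d}-{hi:3d} m: precision {prec:.2f} over {mask.sum()} detections")
```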
arXiv Detail & Related papers (2021-11-03T03:34:19Z) - AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries [0.0]
This paper draws from management literature on certification and reviews current AI certification programs and proposals.
The review indicates that the field currently focuses on self-certification and third-party certification of systems, individuals, and organizations.
arXiv Detail & Related papers (2021-05-20T08:27:29Z) - Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
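An illustrative sketch of what such a template might capture for an AI-enabled component; all field names are hypothetical, not the paper's actual template.

```python
# Hypothetical interface description for an AI-enabled component, capturing
# the information a reuse decision would need. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIComponentInterface:
    name: str
    version: str
    input_spec: dict       # e.g. {"image": "grayscale, 512x512, uint8"}
    output_spec: dict      # e.g. {"defect_prob": "float in [0, 1]"}
    training_domain: str   # data distribution the model was trained on
    metrics: dict = field(default_factory=dict)      # reported performance
    assumptions: list = field(default_factory=list)  # operating conditions

card = AIComponentInterface(
    name="defect-detector", version="1.2.0",
    input_spec={"image": "grayscale, 512x512"},
    output_spec={"defect_prob": "float in [0, 1]"},
    training_domain="factory line A, 2019-2021 imagery",
    metrics={"AUROC": 0.94},
    assumptions=["fixed camera pose", "controlled lighting"],
)
print(card.name, "->", card.assumptions)
```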
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.