AI Certification: Advancing Ethical Practice by Reducing Information
Asymmetries
- URL: http://arxiv.org/abs/2105.10356v1
- Date: Thu, 20 May 2021 08:27:29 GMT
- Title: AI Certification: Advancing Ethical Practice by Reducing Information
Asymmetries
- Authors: Peter Cihon, Moritz J. Kleinaltenkamp, Jonas Schuett, Seth D. Baum
- Abstract summary: This paper draws from management literature on certification and reviews current AI certification programs and proposals.
The review indicates that the field currently focuses on self-certification and third-party certification of systems, individuals, and organizations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence (AI) systems are increasingly deployed, principles
for ethical AI are also proliferating. Certification offers a method to both
incentivize adoption of these principles and substantiate that they have been
implemented in practice. This paper draws from management literature on
certification and reviews current AI certification programs and proposals.
Successful programs rely on both emerging technical methods and specific design
considerations. In order to avoid two common failures of certification, program
designs should ensure that the symbol of the certification is substantially
implemented in practice and that the program achieves its stated goals. The
review indicates that the field currently focuses on self-certification and
third-party certification of systems, individuals, and organizations - to the
exclusion of process management certifications. Additionally, the paper
considers prospects for future AI certification programs. Ongoing changes in AI
technology suggest that AI certification regimes should be designed to
emphasize governance criteria of enduring value, such as ethics training for AI
developers, and to adjust technical criteria as the technology changes.
Overall, certification can play a valuable role in the portfolio of AI
governance tools.
Related papers
- Practical Application and Limitations of AI Certification Catalogues in the Light of the AI Act
This work focuses on the practical application and limitations of existing certification catalogues in the light of the AI Act.
We use the AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model's compliance with certification standards.
We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation.
arXiv Detail & Related papers (2025-01-20T15:54:57Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization (a standard formulation of the ERM objective is sketched after this list).
We hope to provide actionable guidance for building AI systems that meet emerging standards for AI trustworthiness.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Catalog of General Ethical Requirements for AI Certification
We present overarching ethical requirements and six ethical principles, with value-specific recommendations for tools that implement these principles in technology.
Our work is aimed at stakeholders who can take it as a potential blueprint to fulfill minimum ethical requirements for trustworthy AI and AI Certification.
arXiv Detail & Related papers (2024-08-22T10:58:41Z)
- The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis
The black-box nature of machine learning models limits the use of conventional approaches to certifying complex technical systems.
As a potential solution, methods that provide insight into this black box could be used.
We find that XAI methods can be a helpful asset for safe AI development, but since certification relies on comprehensive and correct information about technical systems, their impact is expected to be limited.
arXiv Detail & Related papers (2024-07-22T16:08:21Z)
- Open Problems in Technical AI Governance
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- No Trust without regulation!
The explosion in performance of Machine Learning (ML) and the potential of its applications are encouraging us to consider its use in industrial systems.
However, the issue of safety, and its corollary of regulation and standards, is still too often left to one side.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
arXiv Detail & Related papers (2023-09-27T09:08:41Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog
It is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high quality standards.
The issue of the trustworthiness of AI applications is crucial and is the subject of numerous major publications.
This AI assessment catalog addresses exactly this point and is intended for two target groups.
arXiv Detail & Related papers (2023-06-20T08:07:18Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
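For reference, the following is a minimal sketch of the standard empirical risk minimization (ERM) objective mentioned in the "Engineering Trustworthy AI" entry above. This is the textbook formulation, not one taken from that paper, and the symbols (hypothesis class $\mathcal{F}$, loss $L$, regularizer $\Omega$, weight $\lambda$) are illustrative assumptions:

\[
\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} L\big(f(x_i), y_i\big) \;+\; \lambda\, \Omega(f)
\]

Each component is a design choice: the training data $(x_i, y_i)$, the hypothesis class $\mathcal{F}$, the loss $L$, and the regularizer $\Omega$. Trustworthiness requirements such as robustness or fairness can, under this reading, be encoded as constraints or penalty terms on these components.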
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.