Model Reporting for Certifiable AI: A Proposal from Merging EU
Regulation into AI Development
- URL: http://arxiv.org/abs/2307.11525v1
- Date: Fri, 21 Jul 2023 12:13:54 GMT
- Authors: Danilo Brajovic, Niclas Renner, Vincent Philipp Goebels, Philipp
Wagner, Benjamin Fresz, Martin Biller, Mara Klaeb, Janika Kutz, Jens
Neuhuettler, Marco F. Huber
- Abstract summary: Despite large progress in Explainable and Safe AI, practitioners suffer from a lack of regulation and standards for AI safety.
We propose the use of standardized cards to document AI applications throughout the development process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite large progress in Explainable and Safe AI, practitioners suffer from
a lack of regulation and standards for AI safety. In this work we merge recent
regulation efforts by the European Union and first proposals for AI guidelines
with recent trends in research: data and model cards. We propose the use of
standardized cards to document AI applications throughout the development
process. Our main contribution is the introduction of use-case and operation
cards, along with updates for data and model cards to cope with regulatory
requirements. We reference both recent research as well as the source of the
regulation in our cards and provide references to additional support material
and toolboxes whenever possible. The goal is to design cards that help
practitioners develop safe AI systems throughout the development process, while
enabling efficient third-party auditing of AI applications, being easy to
understand, and building trust in the system. Our work incorporates insights
from interviews with certification experts as well as developers and
individuals working with the developed AI applications.
Related papers
- AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act [2.1897070577406734]
Despite its importance, there is a lack of standards and guidelines to assist with drawing up AI and risk documentation aligned with the AI Act.
We propose AI Cards as a novel holistic framework for representing a given intended use of an AI system.
arXiv Detail & Related papers (2024-06-26T09:51:49Z)
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment, and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Use case cards: a use case reporting framework inspired by the European AI Act [0.0]
We propose a new framework for the documentation of use cases, which we call "use case cards".
Unlike other documentation methodologies, we focus on the purpose and operational use of an AI system.
The proposed framework is the result of a co-design process involving a team of EU policy experts and scientists.
arXiv Detail & Related papers (2023-06-23T15:47:19Z)
- AI Usage Cards: Responsibly Reporting AI-generated Content [25.848910414962337]
Given that AI systems like ChatGPT can generate content that is indistinguishable from human-made work, the responsible use of this technology is a growing concern.
We propose a three-dimensional model consisting of transparency, integrity, and accountability to define the responsible use of AI.
We also introduce "AI Usage Cards", a standardized way to report the use of AI in scientific research.
arXiv Detail & Related papers (2023-02-16T08:41:31Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Exploring the Assessment List for Trustworthy AI in the Context of Advanced Driver-Assistance Systems [5.386962356892352]
The European Commission appointed experts to a High-Level Expert Group on AI (AI-HLEG).
AI-HLEG defined Trustworthy AI as 1) lawful, 2) ethical, and 3) robust, and specified seven corresponding key requirements.
We present an illustrative case study from applying ALTAI to an ongoing development project of an Advanced Driver-Assistance System.
arXiv Detail & Related papers (2021-03-04T21:48:11Z)
- Behavioral Use Licensing for Responsible AI [11.821476868900506]
We advocate the use of licensing to enable legally enforceable behavioral use conditions on software and code.
We envision how licensing may be implemented in accordance with existing responsible AI guidelines.
arXiv Detail & Related papers (2020-11-04T09:23:28Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.