Trusted Artificial Intelligence: Towards Certification of Machine
Learning Applications
- URL: http://arxiv.org/abs/2103.16910v1
- Date: Wed, 31 Mar 2021 08:59:55 GMT
- Title: Trusted Artificial Intelligence: Towards Certification of Machine
Learning Applications
- Authors: Philip Matthias Winter, Sebastian Eder, Johannes Weissenböck,
Christoph Schwald, Thomas Doms, Tom Vogt, Sepp Hochreiter, Bernhard Nessler
- Abstract summary: The TÜV AUSTRIA Group in cooperation with the Institute for Machine Learning at the Johannes Kepler University Linz proposes a certification process.
The holistic approach attempts to evaluate and verify the aspects of secure software development, functional requirements, data quality, data protection, and ethics.
The audit catalog can be applied to low-risk applications within the scope of supervised learning.
- Score: 5.7576910363986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence is one of the fastest growing technologies of the
21st century and accompanies us in our daily lives when interacting with
technical applications. However, reliance on such technical systems is crucial
for their widespread applicability and acceptance. The societal tools to
express reliance are usually formalized by lawful regulations, i.e., standards,
norms, accreditations, and certificates. Therefore, the TÜV AUSTRIA Group in
cooperation with the Institute for Machine Learning at the Johannes Kepler
University Linz, proposes a certification process and an audit catalog for
Machine Learning applications. We are convinced that our approach can serve as
the foundation for the certification of applications that use Machine Learning
and Deep Learning, the techniques that drive the current revolution in
Artificial Intelligence. While certain high-risk areas, such as fully
autonomous robots in workspaces shared with humans, are still some time away
from certification, we aim to cover low-risk applications with our
certification procedure. Our holistic approach attempts to analyze Machine
Learning applications from multiple perspectives to evaluate and verify the
aspects of secure software development, functional requirements, data quality,
data protection, and ethics. Inspired by existing work, we introduce four
criticality levels to map the criticality of a Machine Learning application
regarding the impact of its decisions on people, environment, and
organizations. Currently, the audit catalog can be applied to low-risk
applications within the scope of supervised learning as commonly encountered in
industry. Guided by field experience, scientific developments, and market
demands, the audit catalog will be extended and modified accordingly.
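The abstract describes mapping an application's decision impact on people, environment, and organizations onto four criticality levels. The sketch below illustrates one way such a mapping could work; the level names, impact scale, and the conservative max-over-dimensions rule are assumptions for illustration, not the definitions from the TÜV AUSTRIA audit catalog.

```python
from enum import IntEnum

class CriticalityLevel(IntEnum):
    """Four criticality levels; names and semantics here are assumed,
    not taken from the audit catalog itself."""
    LEVEL_1 = 1  # negligible impact of decisions
    LEVEL_2 = 2  # limited, reversible impact
    LEVEL_3 = 3  # significant impact on people, environment, or organizations
    LEVEL_4 = 4  # severe or irreversible impact (e.g., autonomous robots
                 # sharing workspaces with humans)

def assess_criticality(impact_on_people: int,
                       impact_on_environment: int,
                       impact_on_organizations: int) -> CriticalityLevel:
    """Map per-dimension impact scores (1-4) to an overall level.

    Taking the maximum across dimensions is a conservative choice:
    the application is as critical as its worst-affected dimension.
    """
    worst = max(impact_on_people, impact_on_environment, impact_on_organizations)
    return CriticalityLevel(worst)

# Example: a typical low-risk industrial supervised-learning application
level = assess_criticality(1, 1, 2)
print(level.name)  # LEVEL_2
```

Under this sketch, only applications landing in the lower levels would currently fall within the scope of the certification procedure, matching the abstract's focus on low-risk supervised learning.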
Related papers
- A Systematic Literature Review on the Use of Machine Learning in Software Engineering (arXiv, 2024-06-19).
  Guided by its objective and research questions, the study explores the current state of the art in applying machine learning techniques to software engineering processes. The review identifies the key areas within software engineering where ML has been applied, including software quality assurance, software maintenance, software comprehension, and software documentation.
- Enabling Automated Machine Learning for Model-Driven AI Engineering (arXiv, 2022-03-06).
  We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering. In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
- From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence (arXiv, 2021-10-28).
  The article argues that embodied intelligence is a key driver for the advancement of machine learning technology. We highlight challenges and opportunities specific to embodied intelligence and propose research directions that may significantly advance the state of the art in robot learning.
- Towards Fairness Certification in Artificial Intelligence (arXiv, 2021-06-04).
  We propose a first joint effort to define the operational steps needed for AI fairness certification. We overview the criteria an AI system should meet before coming into official service, as well as the conformity assessment procedures useful for monitoring its functioning for fair decisions.
- A Review of Formal Methods applied to Machine Learning (arXiv, 2021-04-06).
  We review state-of-the-art formal methods applied to the emerging field of verification of machine learning systems. We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software. We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations.
- Technology Readiness Levels for Machine Learning Systems (arXiv, 2021-01-11).
  Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end. We have developed a proven systems engineering approach for machine learning development and deployment. Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
- Technology Readiness Levels for AI & ML (arXiv, 2020-06-21).
  Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end. Engineering systems follow well-defined processes and testing standards to streamline development toward high-quality, reliable results. We propose a proven systems engineering approach for machine learning development and deployment.
- Quality Management of Machine Learning Systems (arXiv, 2020-06-16).
  Artificial Intelligence (AI) has become a part of our daily lives due to major advances in Machine Learning (ML) techniques. For business- and mission-critical systems, serious concerns about the reliability and maintainability of AI applications remain. This paper presents a view of a holistic quality management framework for ML applications based on current advances.
- Machine Learning for Software Engineering: A Systematic Mapping (arXiv, 2020-05-27).
  The software development industry is rapidly adopting machine learning to transition modern software systems toward highly intelligent, self-learning systems. However, no comprehensive study exists that explores the current state of the art in the adoption of machine learning across software engineering life cycle stages. This study introduces a machine learning for software engineering (MLSE) taxonomy that classifies state-of-the-art machine learning techniques according to their applicability to the various software engineering life cycle stages.
- Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology (arXiv, 2020-03-11).
  We propose a process model for the development of machine learning applications. The first phase combines business and data understanding, as data availability often affects the feasibility of the project. The sixth phase covers state-of-the-art approaches for the monitoring and maintenance of machine learning applications.