No Trust without regulation!
- URL: http://arxiv.org/abs/2311.06263v1
- Date: Wed, 27 Sep 2023 09:08:41 GMT
- Title: No Trust without regulation!
- Authors: François Terrier (CEA List)
- Abstract summary: The explosion in performance of Machine Learning (ML) and the potential of its applications are encouraging us to consider its use in industrial systems.
The AI community, however, still largely sets aside the issue of safety and its corollary, regulation and standards.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The explosion in the performance of Machine Learning (ML) and the potential
of its applications are strongly encouraging us to consider its use in
industrial systems, including for critical functions such as decision-making in
autonomous systems. While the AI community is well aware of the need to ensure
the trustworthiness of AI-based applications, it still largely sets aside the
issue of safety and its corollary, regulation and standards, without which no
level of safety can be certified, whether the systems are mildly or highly
critical. The process of developing and qualifying safety-critical software and
systems in regulated industries such as aerospace, nuclear power, railways, and
the automotive industry has long been rationalized and mastered. These
industries rely on well-defined standards, regulatory frameworks, and
processes, as well as formal techniques, to assess and demonstrate the quality
and safety of the systems and software they develop. However, the low level of
formalization of specifications and the uncertainty and opacity of machine
learning-based components make them difficult to validate and verify with most
traditional critical-systems engineering methods.
This raises the question of qualification standards, and therefore of
regulations adapted to AI. With the AI Act, the European Commission has laid
the foundations for moving forward and building solid approaches to the
integration of AI-based applications that are safe, trustworthy and respect
European ethical values. The question then becomes "How can we rise to the
challenge of certification and propose methods and tools for trusted artificial
intelligence?"
Related papers
- Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems [6.120331132009475]
We offer a framework that integrates established reliability and resilience engineering principles into AI systems.
We propose an integrated framework to manage AI system performance and to prevent or efficiently recover from failures.
We apply our framework to a real-world AI system, using system status data from platforms such as OpenAI, to demonstrate its practical applicability.
arXiv Detail & Related papers (2024-11-13T19:16:44Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for AI trustworthiness (a minimal statement of the ERM objective is given after this list).
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- The Necessity of AI Audit Standards Boards [0.0]
We argue that creating auditing standards is not just insufficient, but actively harmful by proliferating unheeded and inconsistent standards.
Instead, the paper proposes the establishment of an AI Audit Standards Board, responsible for developing and updating auditing methods and standards.
arXiv Detail & Related papers (2024-04-11T15:08:24Z)
- Assurance Cases as Foundation Stone for Auditing AI-enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project [2.741266294612776]
We investigate the way safety standards define safety measures to be implemented against software faults.
Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented.
We propose the use of assurance cases to argue that the individually selected and applied measures are sufficient.
arXiv Detail & Related papers (2022-08-17T10:05:07Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild [7.225523345649149]
Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing'.
There is a broader context that is often not considered within AI and HRI: that the problem of trustworthiness is inherently socio-technical.
This paper emphasizes the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment.
arXiv Detail & Related papers (2020-10-06T18:32:31Z)
- Opening the Software Engineering Toolbox for the Assessment of Trustworthy AI [17.910325223647362]
We argue for the application of software engineering and testing practices for the assessment of trustworthy AI.
We connect the seven key requirements defined by the European Commission's AI high-level expert group to established software engineering practices.
arXiv Detail & Related papers (2020-07-14T08:16:15Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
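As context for the Engineering Trustworthy AI entry above: empirical risk minimization (ERM) is the standard supervised-learning objective, and its components (training data, hypothesis class, loss function, regularizer, optimizer) are the kind of design choices onto which trustworthiness requirements can be mapped. The decomposition shown here is the textbook one, not necessarily that paper's exact breakdown. A minimal statement of the regularized ERM objective:

$$
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \;\; \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i),\, y_i\big) \;+\; \lambda\, \Omega(f)
$$

Here $\mathcal{F}$ is the hypothesis class, $\ell$ the loss on training pairs $(x_i, y_i)$, and $\Omega$ a regularizer weighted by $\lambda$; requirements such as robustness or fairness constrain the admissible choices of $\mathcal{F}$, $\ell$, and $\Omega$.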
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.