Human Oversight of Artificial Intelligence and Technical Standardisation
- URL: http://arxiv.org/abs/2407.17481v1
- Date: Tue, 2 Jul 2024 07:43:46 GMT
- Title: Human Oversight of Artificial Intelligence and Technical Standardisation
- Authors: Marion Ho-Dac, Baptiste Martinez
- Abstract summary: Within the global governance of AI, the requirement for human oversight is embodied in several regulatory formats.
The EU legislator is therefore going much further than in the past in "spelling out" the legal requirement for human oversight.
The question of the place of humans in the AI decision-making process should be given particular attention.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adoption of human oversight measures makes it possible to regulate, to varying degrees and in different ways, the decision-making process of Artificial Intelligence (AI) systems, for example by placing a human being in charge of supervising the system and, upstream, by developing the AI system to enable such supervision. Within the global governance of AI, the requirement for human oversight is embodied in several regulatory formats, within a diversity of normative sources. On the one hand, it reinforces the accountability of AI systems' users (for example, by requiring them to carry out certain checks) and, on the other hand, it better protects the individuals affected by the AI-based decision (for example, by allowing them to request a review of the decision). In the European context, the AI Act imposes obligations on providers of high-risk AI systems (and to some extent also on professional users of these systems, known as deployers), including the introduction of human oversight tools throughout the life cycle of AI systems, including by design (and their implementation by deployers). The EU legislator is therefore going much further than in the past in "spelling out" the legal requirement for human oversight. But it does not intend to provide for all implementation details; it calls on standardisation to technically flesh out this requirement (and more broadly all the requirements of section 2 of chapter III) on the basis of article 40 of the AI Act. In this multi-level regulatory context, the question of the place of humans in the AI decision-making process should be given particular attention. Indeed, depending on whether it is the law or the technical standard that sets the contours of human oversight, the "regulatory governance" of AI is not the same: its nature, content and scope are different. 
This analysis is at the heart of the contribution made (or to be made) by legal experts to the central reflection on the most appropriate regulatory governance -- in terms of both its institutional format and its substance -- to ensure the effectiveness of human oversight and AI trustworthiness.
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success [4.202570851109354]
The EU has enacted the AI Act, regulating market access for AI-based systems.
The Act focuses regulation on transparency, explainability, and the human ability to understand and control AI systems.
The EU issues a democratic call for human-centered AI systems and, in turn, an interdisciplinary research agenda for human-centered innovation in AI development.
arXiv Detail & Related papers (2024-02-22T17:35:29Z)
- Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence [0.0]
We discuss the connection between the emerging management-based regulatory frameworks governing AI and the need for human oversight during training.
We argue that the kinds of high-stakes use cases for AI that appear of most concern to regulators should lean more on human-guided training than on data-only training.
arXiv Detail & Related papers (2024-02-13T13:48:54Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias in the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation [22.921683578188645]
We argue that attaining truly trustworthy AI concerns the trustworthiness of all processes and actors that are part of the system's life cycle.
A more holistic vision contemplates four essential axes, among them the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, and a risk-based approach to AI regulation.
Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI.
arXiv Detail & Related papers (2023-05-02T09:49:53Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Meaningful human control over AI systems: beyond talking the talk [8.351027101823705]
We identify four properties which AI-based systems must have to be under meaningful human control.
First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations.
Second, humans and AI agents within the system should have appropriate and mutually compatible representations.
Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system.
arXiv Detail & Related papers (2021-11-25T11:05:37Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized bodies that can oversee compliance with such rules.
This work proposes the creation, at universities, of Ethics Committees or Commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.