Automation Bias in the AI Act: On the Legal Implications of Attempting to De-Bias Human Oversight of AI
- URL: http://arxiv.org/abs/2502.10036v1
- Date: Fri, 14 Feb 2025 09:26:59 GMT
- Title: Automation Bias in the AI Act: On the Legal Implications of Attempting to De-Bias Human Oversight of AI
- Authors: Johann Laux, Hannah Ruschemeier
- Abstract summary: This paper examines the legal implications of the explicit mention of automation bias (AB) in the Artificial Intelligence Act (AIA).
The AIA mandates human oversight for high-risk AI systems and requires providers to enable awareness of AB.
- Abstract: This paper examines the legal implications of the explicit mention of automation bias (AB) in the Artificial Intelligence Act (AIA). The AIA mandates human oversight for high-risk AI systems and requires providers to enable awareness of AB, i.e., the tendency to over-rely on AI outputs. The paper analyses how this extra-juridical concept is embedded in the AIA, the division of responsibility between AI providers and deployers, and the challenges of legally enforcing this novel awareness requirement. The analysis shows that the AIA's focus on providers does not adequately address design and context as causes of AB, and questions whether the AIA should directly regulate the risk of AB rather than just mandating awareness. As the AIA's approach requires a balance between legal mandates and behavioural science, the paper proposes that harmonised standards should reference the state of research on AB and human-AI interaction. Ultimately, further empirical research will be essential for effective safeguards.
Related papers
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
The article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA).
This article outlines the main building blocks of a model template for the FRIA.
It can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception [4.075971633195745]
Deep Neural Networks (DNNs) have become central to the perception functions of autonomous vehicles.
The European Union (EU) Artificial Intelligence (AI) Act aims to address these challenges by establishing stringent norms and standards for AI systems.
This review paper summarizes the requirements arising from the EU AI Act regarding DNN-based perception systems and systematically categorizes existing generative AI applications in automated driving (AD).
arXiv Detail & Related papers (2024-08-30T12:01:06Z) - Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Advancing Legal Reasoning: The Integration of AI to Navigate Complexities and Biases in Global Jurisprudence with Semi-Automated Arbitration Processes (SAAPs) [0.0]
This study focuses on the analysis of court judgments spanning five countries, including the United States, the United Kingdom, Rwanda, Sweden and Hong Kong.
By incorporating Advanced Language Models (ALMs) and a newly introduced human-AI collaborative framework, this paper seeks to combine a Grounded Theory-based research design with AI.
arXiv Detail & Related papers (2024-02-06T16:47:34Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Statutory Professions in AI governance and their consequences for explainable AI [2.363388546004777]
Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals.
We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework.
arXiv Detail & Related papers (2023-06-15T08:51:28Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation [0.0]
We describe and discuss the two primary enforcement mechanisms proposed in the European Artificial Intelligence Act.
We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing.
arXiv Detail & Related papers (2021-11-09T11:59:47Z)