On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives
- URL: http://arxiv.org/abs/2404.04059v2
- Date: Tue, 7 May 2024 14:36:54 GMT
- Title: On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives
- Authors: Sarah Sterz, Kevin Baum, Sebastian Biewer, Holger Hermanns, Anne Lauber-Rönsberg, Philip Meinel, Markus Langer
- Abstract summary: Human oversight is currently discussed as a potential safeguard to counter some of the negative aspects of high-risk AI applications.
This paper investigates effective human oversight by synthesizing insights from psychological, legal, philosophical, and technical domains.
- Score: 1.29622145730471
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human oversight is currently discussed as a potential safeguard to counter some of the negative aspects of high-risk AI applications. This prompts a critical examination of the role and conditions necessary for what is prominently termed effective or meaningful human oversight of these systems. This paper investigates effective human oversight by synthesizing insights from psychological, legal, philosophical, and technical domains. Based on the claim that the main objective of human oversight is risk mitigation, we propose a viable understanding of effectiveness in human oversight: for human oversight to be effective, the oversight person has to have (a) sufficient causal power with regard to the system and its effects, (b) suitable epistemic access to relevant aspects of the situation, (c) self-control, and (d) fitting intentions for their role. Furthermore, we argue that this is equivalent to saying that an oversight person is effective if and only if they are morally responsible and have fitting intentions. Against this backdrop, we suggest facilitators and inhibitors of effectiveness in human oversight when striving for practical applicability. We discuss factors in three domains, namely, the technical design of the system, individual factors of oversight persons, and the environmental circumstances in which they operate. Finally, this paper scrutinizes the upcoming AI Act of the European Union -- in particular Article 14 on Human Oversight -- as an exemplary regulatory framework in which we study the practicality of our understanding of effective human oversight. By analyzing the provisions and implications of the European AI Act proposal, we pinpoint how far that proposal aligns with our analyses regarding effective human oversight as well as how it might get enriched by our conceptual understanding of effectiveness in human oversight.
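The abstract's definition has a compact logical form. As a schematic rendering only (the predicate names below are illustrative shorthand chosen here, not notation from the paper):

```latex
% Conditions (a)-(d) for an oversight person p; names are illustrative.
\mathrm{Effective}(p) \;\iff\;
  \underbrace{\mathrm{CausalPower}(p)}_{(a)}
  \,\land\, \underbrace{\mathrm{EpistemicAccess}(p)}_{(b)}
  \,\land\, \underbrace{\mathrm{SelfControl}(p)}_{(c)}
  \,\land\, \underbrace{\mathrm{FittingIntentions}(p)}_{(d)}

% The abstract's equivalence claim:
\mathrm{Effective}(p) \;\iff\;
  \mathrm{MorallyResponsible}(p) \,\land\, \mathrm{FittingIntentions}(p)
```

On this reading, the equivalence holds because, on the paper's account, conditions (a)-(c) are what moral responsibility for the system and its effects consists in, while (d) adds the motivational component.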
Related papers
- Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
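For orientation, "empowerment" in this line of work is classically defined as the channel capacity between an agent's actions and the states they bring about. A standard formulation from the empowerment literature (not necessarily the exact estimator used in the cited paper, which learns representations approximating a related quantity) is:

```latex
% Empowerment of the (human) agent in state s: the maximal mutual
% information between the action A and the resulting next state S',
% maximized over action distributions pi(a|s).
\mathcal{E}(s) \;=\; \max_{\pi(a \mid s)} \; I\!\left(A;\, S' \,\middle|\, S = s\right)
```

An assistive agent that maximizes the human's empowerment thus increases how much the human's own actions determine future states, without needing to infer the human's reward.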
arXiv Detail & Related papers (2024-11-04T21:31:04Z)
- To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems [11.690126756498223]
The vision of optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems.
In practice, the performance disparity of machine learning models on out-of-distribution data makes dataset-specific performance feedback unreliable.
arXiv Detail & Related papers (2024-09-22T09:43:27Z)
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested in concrete case studies to demonstrate its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- Human Oversight of Artificial Intelligence and Technical Standardisation [0.0]
Within the global governance of AI, the requirement for human oversight is embodied in several regulatory formats.
The EU legislator is therefore going much further than in the past in "spelling out" the legal requirement for human oversight.
The question of the place of humans in the AI decision-making process should be given particular attention.
arXiv Detail & Related papers (2024-07-02T07:43:46Z)
- ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluency with truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert scales.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z)
- Towards Human-centered Proactive Conversational Agents [60.57226361075793]
The distinction between a proactive and a reactive system lies in the proactive system's initiative-taking nature.
We establish a new taxonomy concerning three key dimensions of human-centered PCAs, namely Intelligence, Adaptivity, and Civility.
arXiv Detail & Related papers (2024-04-19T07:14:31Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Planning for Proactive Assistance in Environments with Partial Observability [26.895668587111757]
This paper addresses the problem of synthesizing the behavior of an AI agent that provides proactive task assistance to a human.
It is crucial for the agent to ensure that the human is aware of how the assistance affects her task.
arXiv Detail & Related papers (2021-05-02T18:12:06Z)
- Avoiding Improper Treatment of Persons with Dementia by Care Robots [1.5156879440024376]
We focus in particular on exploring some potential dangers affecting persons with dementia (PWD).
We describe a proposed solution involving rich causal models and accountability measures.
Our aim is that the considerations raised could help inform the design of care robots intended to support well-being in PWD.
arXiv Detail & Related papers (2020-05-08T14:34:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.