Monitoring Human Dependence On AI Systems With Reliance Drills
- URL: http://arxiv.org/abs/2409.14055v2
- Date: Thu, 26 Sep 2024 10:54:20 GMT
- Title: Monitoring Human Dependence On AI Systems With Reliance Drills
- Authors: Rosco Hunter, Richard Moulange, Jamie Bernardi, Merlin Stein
- Abstract summary: Humans could be over-reliant on AI systems if they trust AI-generated advice even when they would make a better decision on their own.
This paper proposes the reliance drill: an exercise that tests whether a human can recognise mistakes in AI-generated advice.
We argue that reliance drills could become a key tool for ensuring humans remain appropriately involved in AI-assisted decisions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI systems are assisting humans with an increasingly broad range of intellectual tasks. Humans could be over-reliant on this assistance if they trust AI-generated advice even when they would make a better decision on their own. To identify real-world instances of over-reliance, this paper proposes the reliance drill: an exercise that tests whether a human can recognise mistakes in AI-generated advice. We introduce a pipeline that organisations could use to implement these drills. As an example, we explain how this approach could be used to limit over-reliance on AI in a medical setting. We conclude by arguing that reliance drills could become a key tool for ensuring humans remain appropriately involved in AI-assisted decisions.
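The drill pipeline described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`get_ai_advice`, `inject_mistake`, `human_review`) and the drill rate are assumptions chosen for the example. The core idea is that on a random subset of cases a known mistake is injected into the AI advice, and over-reliance shows up as injected mistakes the human reviewer fails to flag.

```python
import random

def run_reliance_drill(cases, get_ai_advice, inject_mistake, human_review,
                       drill_rate=0.2, seed=0):
    """Hypothetical sketch of a reliance-drill pipeline.

    On roughly `drill_rate` of cases, a known mistake is injected into
    the AI-generated advice. The reviewer's detection rate on those
    drill cases is a proxy for (in)appropriate reliance.
    """
    rng = random.Random(seed)
    caught, missed = 0, 0
    for case in cases:
        advice = get_ai_advice(case)
        is_drill = rng.random() < drill_rate
        if is_drill:
            advice = inject_mistake(advice)  # known, deliberate error
        flagged = human_review(case, advice)  # True if reviewer rejects advice
        if is_drill:
            if flagged:
                caught += 1
            else:
                missed += 1
    drills = caught + missed
    return {"drills": drills,
            "detection_rate": caught / drills if drills else None}
```

A low detection rate on drill cases would indicate that reviewers are accepting AI advice without adequate scrutiny, which is the over-reliance signal the paper aims to surface.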
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge [22.21959942886099]
We introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model.
We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening.
We find that, by providing participants with accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers.
arXiv Detail & Related papers (2023-08-30T01:54:31Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias whereby less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z) - Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare [8.351355707564153]
We examine the problem of trustworthy AI and explore what delivering it means in practice.
We argue that this framing overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings.
arXiv Detail & Related papers (2022-11-29T18:22:23Z) - Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making [0.0]
We aim to enable humans not to rely blindly on AI advice but to assess its quality and act on that assessment to make better decisions.
Current research lacks a metric for appropriate reliance (AR) on AI advice on a case-by-case basis.
We propose to view AR as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly.
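The two-dimensional view of appropriate reliance could be operationalised along these lines. This is a hedged sketch, not the paper's exact metric: the record schema and the two rates (how often the human switches to correct AI advice after a wrong initial decision, and how often the human keeps a correct initial decision despite wrong advice) are illustrative assumptions.

```python
def appropriate_reliance(records):
    """Sketch of a two-dimensional appropriate-reliance (AR) measure.

    Each record holds the human's initial decision, the AI advice, the
    final decision, and the ground truth. The first dimension captures
    beneficial switching to correct advice; the second captures
    beneficial resistance to incorrect advice.
    """
    switch_good = keep_good = could_switch = could_keep = 0
    for r in records:
        ai_right = r["advice"] == r["truth"]
        init_right = r["initial"] == r["truth"]
        if ai_right and not init_right:        # switching would help
            could_switch += 1
            if r["final"] == r["advice"]:
                switch_good += 1
        elif init_right and not ai_right:      # keeping own answer would help
            could_keep += 1
            if r["final"] == r["initial"]:
                keep_good += 1
    switch_rate = switch_good / could_switch if could_switch else None
    keep_rate = keep_good / could_keep if could_keep else None
    return switch_rate, keep_rate
```

Scoring only the cases where the human and the AI disagree isolates the discrimination ability the construct describes: a high score on both dimensions means the human follows good advice and rejects bad advice, rather than uniformly trusting or distrusting the AI.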
arXiv Detail & Related papers (2022-04-14T12:18:51Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.