On the Interdependence of Reliance Behavior and Accuracy in AI-Assisted
Decision-Making
- URL: http://arxiv.org/abs/2304.08804v1
- Date: Tue, 18 Apr 2023 08:08:05 GMT
- Title: On the Interdependence of Reliance Behavior and Accuracy in AI-Assisted
Decision-Making
- Authors: Jakob Schoeffer, Johannes Jakubik, Michael Voessing, Niklas Kuehl,
Gerhard Satzger
- Abstract summary: We analyze the interdependence between reliance behavior and accuracy in AI-assisted decision-making.
We propose a visual framework to make this interdependence more tangible.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In AI-assisted decision-making, a central promise of putting a human in the
loop is that they should be able to complement the AI system by adhering to its
correct and overriding its mistaken recommendations. In practice, however, we
often see that humans tend to over- or under-rely on AI recommendations,
meaning that they either adhere to wrong or override correct recommendations.
Such reliance behavior is detrimental to decision-making accuracy. In this
work, we articulate and analyze the interdependence between reliance behavior
and accuracy in AI-assisted decision-making, which has been largely neglected
in prior work. We also propose a visual framework to make this interdependence
more tangible. This framework helps us interpret and compare empirical
findings, as well as obtain a nuanced understanding of the effects of
interventions (e.g., explanations) in AI-assisted decision-making. Finally, we
infer several interesting properties from the framework: (i) when humans
under-rely on AI recommendations, there may be no possibility for them to
complement the AI in terms of decision-making accuracy; (ii) when humans cannot
discern correct from wrong AI recommendations, no such improvement can be
expected either; (iii) interventions may lead to an increase in decision-making
accuracy that is solely driven by an increase in humans' adherence to AI
recommendations, without any ability to discern correct from wrong. Our work
emphasizes the importance of measuring and reporting both effects on accuracy
and reliance behavior when empirically assessing interventions.
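To make this interdependence concrete, here is a minimal numerical sketch (not taken from the paper; the function, probabilities, and numbers are illustrative assumptions) of how team accuracy on a binary task follows from reliance behavior, assuming that overriding a wrong recommendation yields a correct decision:

```python
# Illustrative sketch (not from the paper): how human-AI team accuracy follows
# from reliance behavior on a binary task. All numbers are hypothetical, and we
# simplify by assuming an override of a wrong recommendation is always correct.

def team_accuracy(ai_accuracy: float,
                  adherence_correct: float,
                  adherence_wrong: float) -> float:
    """Accuracy of the AI-assisted (team) decision.

    ai_accuracy       : P(AI recommendation is correct)
    adherence_correct : P(human adheres | AI is correct)
    adherence_wrong   : P(human adheres | AI is wrong)
    """
    # Team is correct when adhering to a correct recommendation
    # or overriding a wrong one (under the simplifying assumption above).
    return (ai_accuracy * adherence_correct
            + (1 - ai_accuracy) * (1 - adherence_wrong))

# (i) Under-reliance: overriding even correct recommendations caps accuracy.
print(team_accuracy(0.8, adherence_correct=0.5, adherence_wrong=0.1))  # 0.58 < 0.8

# (ii) No discernment: adhering at the same rate regardless of correctness
# cannot raise accuracy above the AI's own accuracy (here 0.8).
print(team_accuracy(0.8, adherence_correct=0.7, adherence_wrong=0.7))  # 0.62

# (iii) An intervention that only raises blanket adherence can increase accuracy
# without any gain in the ability to discern correct from wrong recommendations.
print(team_accuracy(0.8, adherence_correct=0.9, adherence_wrong=0.9))  # 0.74
```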
Related papers
- Human-Alignment Influences the Utility of AI-assisted Decision Making [16.732483972136418]
We investigate to what extent the degree of alignment actually influences the utility of AI-assisted decision making.
Our results show a positive association between the degree of alignment and the utility of AI-assisted decision making.
arXiv Detail & Related papers (2025-01-23T19:01:47Z)
- How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior.
Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- A Decision Theoretic Framework for Measuring AI Reliance [23.353778024330165]
Humans frequently make decisions with the aid of artificially intelligent (AI) systems.
Researchers have identified ensuring that a human has appropriate reliance on an AI as a critical component of achieving complementary performance.
We propose a formal definition of reliance, based on statistical decision theory, which characterizes reliance as the probability that the decision-maker follows the AI's recommendation (a minimal sketch of such a measure follows after this list).
arXiv Detail & Related papers (2024-01-27T09:13:09Z)
- Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making [45.20615051119694]
We explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making.
We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI.
If decision-makers have the control to decide when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI.
arXiv Detail & Related papers (2024-01-13T12:19:01Z)
- In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making [25.18203172421461]
We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
arXiv Detail & Related papers (2023-05-12T18:28:04Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback from false AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that is perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
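The main abstract's closing point, that interventions should be assessed on both accuracy and reliance behavior, together with the decision-theoretic definition of reliance above, suggests what a study would need to log. The following is a hypothetical sketch (the Trial fields, function, and data are assumptions, not from any of the listed papers) of estimating these quantities from per-trial logs:

```python
# Illustrative sketch (not from the listed papers): estimating reliance
# behavior and team accuracy from logged study trials. Fields and data
# are hypothetical.

from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool      # was the AI recommendation correct?
    human_adhered: bool   # did the human follow the recommendation?
    human_correct: bool   # was the final human decision correct?

def reliance_metrics(trials: list[Trial]) -> dict[str, float]:
    """Reliance as the probability of following the AI (cf. the
    decision-theoretic definition above), split by AI correctness."""
    correct = [t for t in trials if t.ai_correct]
    wrong   = [t for t in trials if not t.ai_correct]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return {
        "overall_reliance":  rate([t.human_adhered for t in trials]),
        "adherence_correct": rate([t.human_adhered for t in correct]),  # complement is under-reliance
        "adherence_wrong":   rate([t.human_adhered for t in wrong]),    # over-reliance
        "team_accuracy":     rate([t.human_correct for t in trials]),
    }

# Usage with made-up data:
log = [Trial(True, True, True), Trial(True, False, False),
       Trial(False, True, False), Trial(False, False, True)]
print(reliance_metrics(log))
```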
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.