Explanations Can Reduce Overreliance on AI Systems During
Decision-Making
- URL: http://arxiv.org/abs/2212.06823v1
- Date: Tue, 13 Dec 2022 18:59:31 GMT
- Title: Explanations Can Reduce Overreliance on AI Systems During
Decision-Making
- Authors: Helena Vasconcelos, Matthew Jörke, Madeleine Grunde-McLaughlin,
Tobias Gerstenberg, Michael Bernstein, and Ranjay Krishna
- Abstract summary: We show that people strategically choose whether or not to engage with an AI explanation, demonstrating that there are scenarios where AI explanations reduce overreliance.
We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze.
Our results suggest that some of the null effects found in literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
- Score: 12.652229245306671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior work has identified a resilient phenomenon that threatens the
performance of human-AI decision-making teams: overreliance, when people agree
with an AI, even when it is incorrect. Surprisingly, overreliance does not
reduce when the AI produces explanations for its predictions, compared to only
providing predictions. Some have argued that overreliance results from
cognitive biases or uncalibrated trust, attributing overreliance to an
inevitability of human cognition. By contrast, our paper argues that people
strategically choose whether or not to engage with an AI explanation,
demonstrating empirically that there are scenarios where AI explanations reduce
overreliance. To achieve this, we formalize this strategic choice in a
cost-benefit framework, where the costs and benefits of engaging with the task
are weighed against the costs and benefits of relying on the AI. We manipulate
the costs and benefits in a maze task, where participants collaborate with a
simulated AI to find the exit of a maze. Through 5 studies (N = 731), we find
that costs such as task difficulty (Study 1), explanation difficulty (Study 2,
3), and benefits such as monetary compensation (Study 4) affect overreliance.
Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify
the utility of different explanations, providing further support for our
framework. Our results suggest that some of the null effects found in
literature could be due in part to the explanation not sufficiently reducing
the costs of verifying the AI's prediction.
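As a rough illustration of the cost-benefit framing described in the abstract (a minimal sketch, not the authors' implementation), the Python snippet below models a decision-maker who picks among solving the task alone, verifying the AI through its explanation, or relying on the AI outright, by comparing expected benefit against cognitive cost. All accuracy, cost, and reward values are hypothetical placeholders chosen only to show how changing costs (task or explanation difficulty) and benefits (compensation) can flip the preferred strategy.

# Illustrative sketch of the cost-benefit framework (hypothetical values).
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    p_correct: float    # probability the strategy yields the correct answer
    effort_cost: float  # cognitive/time cost of executing the strategy

def expected_utility(s: Strategy, reward: float) -> float:
    """Expected benefit of being correct minus the effort spent."""
    return s.p_correct * reward - s.effort_cost

def choose_strategy(strategies, reward):
    # Pick the strategy with the highest expected utility.
    return max(strategies, key=lambda s: expected_utility(s, reward))

if __name__ == "__main__":
    strategies = [
        Strategy("solve the task yourself", p_correct=0.95, effort_cost=5.0),
        Strategy("verify AI via explanation", p_correct=0.90, effort_cost=2.0),
        Strategy("rely on AI prediction", p_correct=0.80, effort_cost=0.1),
    ]
    # With a low reward, relying on the AI dominates; raising the reward
    # (cf. Study 4) or lowering the explanation's effort cost (cf. Studies 2-3)
    # shifts the choice toward engaging with the explanation or the task.
    for reward in (1.0, 10.0, 100.0):
        best = choose_strategy(strategies, reward)
        print(f"reward={reward:>6}: choose '{best.name}'")

Under this toy model, an explanation reduces overreliance only when it lowers the effort cost of verification enough to make engaging with it the highest-utility option, which mirrors the paper's interpretation of prior null effects.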
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making [5.381004207943597]
We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion.
We analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations.
Our work discusses the potential of counterfactual explanations to better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs.
arXiv Detail & Related papers (2023-08-08T16:23:46Z)
- In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making [25.18203172421461]
We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
arXiv Detail & Related papers (2023-05-12T18:28:04Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making [4.877174544937129]
People supported by AI-powered decision support tools frequently overrely on the AI.
Adding explanations to the AI decisions does not appear to reduce the overreliance.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
arXiv Detail & Related papers (2021-02-19T00:38:53Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.