Understanding the Role of Human Intuition on Reliance in Human-AI
Decision-Making with Explanations
- URL: http://arxiv.org/abs/2301.07255v3
- Date: Wed, 14 Jun 2023 14:50:39 GMT
- Title: Understanding the Role of Human Intuition on Reliance in Human-AI
Decision-Making with Explanations
- Authors: Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal
- Abstract summary: We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why the feature-based explanations failed to improve participants' decision outcomes and instead increased their overreliance on AI.
- Score: 44.01143305912054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI explanations are often mentioned as a way to improve human-AI
decision-making, but empirical studies have not found consistent evidence of
explanations' effectiveness and, on the contrary, suggest that they can
increase overreliance when the AI system is wrong. While many factors may
affect reliance on AI support, one important factor is how decision-makers
reconcile their own intuition -- beliefs or heuristics, based on prior
knowledge, experience, or pattern recognition, used to make judgments -- with
the information provided by the AI system to determine when to override AI
predictions. We conduct a think-aloud, mixed-methods study with two explanation
types (feature- and example-based) for two prediction tasks to explore how
decision-makers' intuition affects their use of AI predictions and
explanations, and ultimately their choice of when to rely on AI. Our results
identify three types of intuition involved in reasoning about AI predictions
and explanations: intuition about the task outcome, features, and AI
limitations. Building on these, we summarize three observed pathways for
decision-makers to apply their own intuition and override AI predictions. We
use these pathways to explain why (1) the feature-based explanations we used
did not improve participants' decision outcomes and increased their
overreliance on AI, and (2) the example-based explanations we used improved
decision-makers' performance over feature-based explanations and helped achieve
complementary human-AI performance. Overall, our work identifies directions for
further development of AI decision-support systems and explanation methods that
help decision-makers effectively apply their intuition to achieve appropriate
reliance on AI.
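To make the paper's central contrast concrete, below is a minimal, hypothetical sketch of the two explanation types it studies: a feature-based explanation rendered as per-feature contributions of a linear model, and an example-based explanation rendered as the nearest labeled training instances. The dataset, model, and attribution scheme here are illustrative assumptions, not the study's actual tasks or stimuli.

```python
# Illustrative sketch of the two explanation types compared in the paper.
# Assumptions: a scikit-learn dataset and a logistic-regression model stand
# in for the study's tasks; coefficient * value serves as a simple
# feature-based attribution, nearest neighbors as example-based evidence.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

model = LogisticRegression(max_iter=1000).fit(X, y)
x = X[0]  # the instance to explain

# Feature-based explanation: each feature's contribution to the logit.
contributions = model.coef_[0] * x
print("AI prediction:", data.target_names[model.predict([x])[0]])
for i in np.argsort(-np.abs(contributions))[:5]:
    print(f"  {data.feature_names[i]}: {contributions[i]:+.2f}")

# Example-based explanation: the most similar training cases and labels.
nn = NearestNeighbors().fit(X)
_, idx = nn.kneighbors([x], n_neighbors=4)  # the first hit is x itself
print("Similar cases:", [data.target_names[y[j]] for j in idx[0][1:]])
```

One reading of the paper's results is visible even in this toy: the feature view asks the decision-maker to weigh abstract contributions against their own feature intuition, while the example view lets them check the prediction against concrete, labeled cases.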
Related papers
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice.
arXiv Detail & Related papers (2024-10-05T18:21:04Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Understanding the Effect of Counterfactual Explanations on Trust and
Reliance on AI for Human-AI Collaborative Clinical Decision Making [5.381004207943597]
We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion.
We analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations.
Our work discusses the potential of counterfactual explanations to better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs (a minimal counterfactual sketch follows this list).
arXiv Detail & Related papers (2023-08-08T16:23:46Z) - In Search of Verifiability: Explanations Rarely Enable Complementary
Performance in AI-Advised Decision Making [25.18203172421461]
We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
arXiv Detail & Related papers (2023-05-12T18:28:04Z) - Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
arXiv Detail & Related papers (2023-01-23T19:00:02Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow users to examine and test AI system predictions, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - To Trust or to Think: Cognitive Forcing Functions Can Reduce
Overreliance on AI in AI-assisted Decision-making [4.877174544937129]
People supported by AI-powered decision support tools frequently overrely on the AI.
Adding explanations to the AI decisions does not appear to reduce the overreliance.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
arXiv Detail & Related papers (2021-02-19T00:38:53Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making (a sketch of surfacing such scores follows this list).
arXiv Detail & Related papers (2020-01-07T15:33:48Z)