In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
- URL: http://arxiv.org/abs/2305.07722v4
- Date: Thu, 1 Feb 2024 23:05:51 GMT
- Title: In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
- Authors: Raymond Fok, Daniel S. Weld
- Abstract summary: We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
- Score: 25.18203172421461
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current literature on AI-advised decision making -- involving explainable AI systems advising human decision makers -- presents a series of inconclusive and confounding results. To synthesize these findings, we propose a simple theory that elucidates the frequent failure of AI explanations to engender appropriate reliance and complementary decision making performance. We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction, in contrast to other desiderata, e.g., interpretability or spelling out the AI's reasoning process. Prior studies find that, in many decision making contexts, AI explanations do not facilitate such verification. Moreover, most tasks fundamentally do not allow easy verification, regardless of explanation method, limiting the potential benefit of any type of explanation. We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
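Since the outcome-graded / strategy-graded distinction is the paper's key conceptual move, a minimal sketch may help make it concrete. The trial fields and the aggregation below are illustrative assumptions about how one might operationalize the two gradings from logged study data, not the authors' implementation:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    ai_correct: bool           # was the AI's prediction correct on this trial?
    followed_ai: bool          # did the human adopt the AI's prediction?
    human_alone_correct: bool  # would the unaided human have been correct?

def outcome_graded(trials):
    """Outcome-graded reliance: a trial counts as appropriate iff the
    human's choice matched that trial's realized outcome -- follow a
    correct AI, override an incorrect one. Knowable only in hindsight."""
    return mean(t.followed_ai == t.ai_correct for t in trials)

def strategy_graded(trials):
    """Strategy-graded reliance: a trial counts as appropriate iff the
    human followed the policy that is better in expectation, regardless
    of how that particular trial happened to turn out."""
    rely_is_better = (mean(t.ai_correct for t in trials)
                      >= mean(t.human_alone_correct for t in trials))
    return mean(t.followed_ai == rely_is_better for t in trials)
```

The difference matters: against an AI that is more accurate than the human on average, always deferring scores perfectly under the strategy-graded notion, yet is penalized by the outcome-graded notion on every trial the AI happens to get wrong.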
Related papers
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice.
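As a rough illustration of the idea (not the authors' method), the sketch below contrasts an AI model's choice with a predicted likely human choice using two hypothetical linear models; `W_ai`, `W_human`, and the evidence-gap ranking are assumptions for exposition only:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def contrastive_explanation(W_ai, W_human, x, feature_names, k=3):
    """W_ai and W_human are hypothetical (n_classes, n_features) weight
    matrices for the AI model and for a model of likely human choices.
    Explain the AI's class *in contrast to* the predicted human class by
    ranking features by how much evidence they shift between the two."""
    ai_choice = int(np.argmax(softmax(W_ai @ x)))
    human_choice = int(np.argmax(softmax(W_human @ x)))
    if ai_choice == human_choice:
        return f"AI agrees with the predicted human choice (class {ai_choice})."
    # Per-feature evidence gap between the two classes, under the AI model.
    gap = (W_ai[ai_choice] - W_ai[human_choice]) * x
    top = np.argsort(-np.abs(gap))[:k]
    parts = [f"{feature_names[i]} ({gap[i]:+.2f})" for i in top]
    return (f"AI chose class {ai_choice} rather than the likely human "
            f"choice {human_choice}; largest contrasts: " + ", ".join(parts))
```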
arXiv Detail & Related papers (2024-10-05T18:21:04Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
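A speculative sketch of that loop, with hypothetical `human` and `ai` interfaces standing in for the paper's actual system:

```python
def deliberate(dimensions, human, ai):
    """Dimension-level deliberation: elicit both opinions per dimension,
    discuss while they conflict, and let the human make the final call.
    All interfaces here are hypothetical placeholders."""
    decision = {}
    for dim in dimensions:
        h_op, a_op = human.opinion(dim), ai.opinion(dim)  # opinion elicitation
        while h_op != a_op and human.wants_to_discuss(dim):
            human.read(ai.justify(dim))                   # deliberative discussion
            h_op = human.opinion(dim)                     # possible decision update
        decision[dim] = h_op                              # human retains final authority
    return decision
```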
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions [6.356355538824237]
We argue that reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making.
Our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making.
arXiv Detail & Related papers (2023-04-18T08:08:05Z)
- Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
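A minimal sketch of one way such selection could work, assuming binary relevance ratings collected on a small annotated sample; the array formats and majority threshold are illustrative, not the paper's design:

```python
import numpy as np

def learn_relevance_mask(human_ratings, threshold=0.5):
    """human_ratings: (n_samples, n_features) array of 0/1 judgments of
    whether each feature is relevant to the decision (hypothetical format).
    Keep features that a majority of annotators marked relevant."""
    return human_ratings.mean(axis=0) >= threshold

def selective_explanation(saliency, mask, feature_names, k=5):
    """Show only the top-k most salient features among those deemed
    relevant, rather than the full attribution vector."""
    order = np.argsort(-np.abs(saliency))
    kept = [i for i in order if mask[i]][:k]
    return [(feature_names[i], float(saliency[i])) for i in kept]
```

Filtering this way is one plausible route to the reported reduction in over-reliance: fewer, more decision-relevant cues give the human less persuasive-but-irrelevant evidence to defer to.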
arXiv Detail & Related papers (2023-01-23T19:00:02Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes but instead increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- On the Relationship Between Explanations, Fairness Perceptions, and Decisions [2.5372245630249632]
It is known that recommendations of AI-based systems can be incorrect or unfair.
It is often proposed that a human be the final decision-maker.
Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality.
arXiv Detail & Related papers (2022-04-27T19:33:36Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate experts' knowledge to the AI model.
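As a sketch of how elicited rules might be fed back into training, the snippet below pseudo-labels unlabeled target-domain examples with expert if-then rules and uses the result as extra training data; the rule format and the `debt_ratio` example are invented for illustration, not the paper's exact method:

```python
from typing import Callable, Optional

Rule = Callable[[dict], Optional[str]]  # returns a label, or None to abstain

def debt_rule(x: dict) -> Optional[str]:
    # Invented example of an elicited rule: "reject if debt ratio > 0.6".
    return "reject" if x.get("debt_ratio", 0.0) > 0.6 else None

def pseudo_label(unlabeled: list[dict], rules: list[Rule]) -> list[tuple[dict, str]]:
    """Apply expert rules in priority order; examples where some rule
    fires become extra (features, label) pairs for the target domain."""
    labeled = []
    for x in unlabeled:
        for rule in rules:
            label = rule(x)
            if label is not None:
                labeled.append((x, label))
                break
    return labeled

# augmented_train = source_train + pseudo_label(target_unlabeled, [debt_rule])
```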
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.