Selective Explanations: Leveraging Human Input to Align Explainable AI
- URL: http://arxiv.org/abs/2301.09656v3
- Date: Mon, 7 Aug 2023 17:40:40 GMT
- Title: Selective Explanations: Leveraging Human Input to Align Explainable AI
- Authors: Vivian Lai, Yiming Zhang, Chacha Chen, Q. Vera Liao, Chenhao Tan
- Abstract summary: We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
- Score: 40.33998268146951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While a vast collection of explainable AI (XAI) algorithms has been
developed in recent years, these algorithms are often criticized for significant
gaps with how humans produce and consume explanations. As a result, current XAI
techniques are often found to be hard to use and to lack effectiveness. In this
work, we attempt to close these gaps by making AI explanations selective -- a
fundamental property of human explanations -- by selectively presenting a
subset from a large set of model reasons based on what aligns with the
recipient's preferences. We propose a general framework for generating
selective explanations by leveraging human input on a small sample. This
framework opens up a rich design space that accounts for different selectivity
goals, types of input, and more. As a showcase, we use a decision-support task
to explore selective explanations based on what the decision-maker would
consider relevant to the decision task. We conducted two experimental studies
to examine three out of a broader possible set of paradigms based on our
proposed framework: in Study 1, we ask the participants to provide their own
input to generate selective explanations, with either open-ended or
critique-based input. In Study 2, we show participants selective explanations
based on input from a panel of similar users (annotators). Our experiments
demonstrate the promise of selective explanations in reducing over-reliance on
AI and improving decision outcomes and subjective perceptions of the AI, but
also paint a nuanced picture that attributes some of these positive effects to
the opportunity to provide one's own input to augment AI explanations. Overall,
our work proposes a novel XAI framework inspired by human communication
behaviors and demonstrates its potential to encourage future work to better
align AI explanations with human production and consumption of explanations.
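The selection step described in the abstract can be illustrated concretely: given a full set of model reasons (e.g., feature attributions) and human input on which features are relevant to the decision task, a selective explanation keeps only the reasons that align with that input. The snippet below is a minimal sketch under that reading; the function and variable names (selective_explanation, relevant_features, top_k) and the toy feature values are illustrative assumptions, not the authors' implementation, which covers a broader design space of selectivity goals and input types.

```python
# Minimal sketch of a selective explanation: keep only the model reasons
# (feature attributions) that align with what a human marked as relevant.
# Names and example values are illustrative, not the paper's implementation.

from typing import Dict, Set


def selective_explanation(
    attributions: Dict[str, float],
    relevant_features: Set[str],
    top_k: int = 3,
) -> Dict[str, float]:
    """Return at most top_k attributions, restricted to human-relevant features."""
    # Keep only features the recipient marked as relevant to the decision task.
    aligned = {f: s for f, s in attributions.items() if f in relevant_features}
    # Rank the remaining reasons by attribution magnitude and keep the strongest.
    ranked = sorted(aligned.items(), key=lambda item: abs(item[1]), reverse=True)
    return dict(ranked[:top_k])


if __name__ == "__main__":
    # Full set of model reasons, e.g., from a feature-attribution method.
    full_explanation = {
        "prior_convictions": 0.42,
        "age": -0.31,
        "zip_code": 0.27,
        "employment_status": -0.12,
    }
    # Features a decision-maker marked as relevant (e.g., via the open-ended
    # or critique-based input of Study 1, or a panel of annotators as in Study 2).
    human_relevant = {"prior_convictions", "age", "employment_status"}

    print(selective_explanation(full_explanation, human_relevant, top_k=2))
    # {'prior_convictions': 0.42, 'age': -0.31}
```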
Related papers
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between the AI's choice and a predicted, likely human choice.
arXiv Detail & Related papers (2024-10-05T18:21:04Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making [25.18203172421461]
We argue explanations are only useful to the extent that they allow a human decision maker to verify the correctness of an AI's prediction.
We also compare the objective of complementary performance with that of appropriate reliance, decomposing the latter into the notions of outcome-graded and strategy-graded reliance.
arXiv Detail & Related papers (2023-05-12T18:28:04Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making [19.157591744997355]
We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
arXiv Detail & Related papers (2021-01-13T19:01:32Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)