Understanding the Effect of Counterfactual Explanations on Trust and
Reliance on AI for Human-AI Collaborative Clinical Decision Making
- URL: http://arxiv.org/abs/2308.04375v1
- Date: Tue, 8 Aug 2023 16:23:46 GMT
- Title: Understanding the Effect of Counterfactual Explanations on Trust and
Reliance on AI for Human-AI Collaborative Clinical Decision Making
- Authors: Min Hun Lee, Chong Jun Chew
- Abstract summary: We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion.
We analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations.
Our work discusses the potential of counterfactual explanations to better estimate the accuracy of an AI model and reduce over-reliance on `wrong' AI outputs.
- Score: 5.381004207943597
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial intelligence (AI) is increasingly being considered to
assist human decision-making in high-stakes domains (e.g., health). However,
researchers have raised the concern that humans can over-rely on an AI
model's wrong suggestions instead of achieving complementary human-AI
performance. In this work, we used salient feature explanations together
with what-if, counterfactual explanations to encourage humans to review AI
suggestions more analytically and thereby reduce over-reliance on AI, and we
explored the effect of these explanations on trust and reliance on AI during
clinical decision-making. We conducted an experiment with seven therapists
and ten laypersons on the task of assessing post-stroke survivors' quality
of motion, and analyzed their performance, agreement level on the task, and
reliance on AI without and with two types of AI explanations.
Our results showed that the AI model with both salient feature and
counterfactual explanations helped therapists and laypersons improve their
performance and agreement level on the task when `right' AI outputs were
presented. While both therapists and laypersons over-relied on `wrong' AI
outputs, counterfactual explanations helped both groups reduce their
over-reliance on `wrong' AI outputs by 21% compared to salient feature
explanations. Specifically, laypersons showed larger performance degradation
(18.0 F1-score with salient feature explanations and 14.0 F1-score with
counterfactual explanations) than therapists (8.6 and 2.8 F1-scores,
respectively). Our work discusses the potential of counterfactual
explanations to better estimate the accuracy of an AI model and reduce
over-reliance on `wrong' AI outputs, as well as implications for improving
human-AI collaborative decision-making.
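
To make the two explanation types concrete, the following is a minimal,
hypothetical sketch of how salient feature explanations and what-if,
counterfactual explanations could be produced for a motion-quality
classifier. The model, the feature names (elbow_extension, shoulder_flexion,
trunk_compensation), and the perturbation scheme are illustrative assumptions
for this summary, not the paper's implementation.

    # Illustrative sketch only: the classifier, the kinematic features, and
    # the perturbation scheme below are hypothetical stand-ins, not the
    # paper's actual pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical kinematic features for assessing quality of motion.
    FEATURES = ["elbow_extension", "shoulder_flexion", "trunk_compensation"]

    # Toy data standing in for labeled post-stroke motion assessments.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] - X[:, 2] > 0).astype(int)  # 1 = "good" motion, 0 = "poor"

    model = LogisticRegression().fit(X, y)

    def salient_features(x):
        """Rank features by |coefficient * value|, a simple saliency proxy."""
        contrib = model.coef_[0] * x
        order = np.argsort(-np.abs(contrib))
        return [(FEATURES[i], float(contrib[i])) for i in order]

    def counterfactual(x, step=0.05, max_steps=200):
        """Nudge the most salient feature until the predicted label flips,
        yielding a what-if example."""
        original = model.predict([x])[0]
        i = int(np.argmax(np.abs(model.coef_[0] * x)))
        # Move the feature in the direction that pushes the decision
        # function toward the opposite class.
        sign = np.sign(model.coef_[0][i])
        direction = -sign if original == 1 else sign
        cf = x.copy()
        for _ in range(max_steps):
            cf[i] += direction * step
            if model.predict([cf])[0] != original:
                return FEATURES[i], float(x[i]), float(cf[i])
        return None  # no flip found within the search budget

    x = X[0]
    print("Salient features:", salient_features(x))
    result = counterfactual(x)
    if result is not None:
        name, old, new = result
        print(f"What-if: had {name} been {new:.2f} instead of {old:.2f}, "
              f"the AI's output would have flipped.")

The counterfactual answers the what-if question the abstract describes: how
much would the most salient feature have to change before the AI's output
flips.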
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Explainable AI Enhances Glaucoma Referrals, Yet the Human-AI Team Still Falls Short of the AI Alone [6.740852152639975]
We investigate how various AI explanations help providers distinguish between patients needing immediate or non-urgent specialist referrals.
We built explainable AI algorithms to predict glaucoma surgery needs from routine eyecare data as a proxy for identifying high-risk patients.
We incorporated intrinsic and post-hoc explainability and conducted an online study with optometrists to assess human-AI team performance.
arXiv Detail & Related papers (2024-05-24T03:01:20Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making [4.877174544937129]
People supported by AI-powered decision support tools frequently overrely on the AI.
Adding explanations to the AI decisions does not appear to reduce the overreliance.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
arXiv Detail & Related papers (2021-02-19T00:38:53Z)
- Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance [44.730580857733]
Prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team.
We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task.
We find explanations increase the chance that humans will accept the AI's recommendation, regardless of its correctness.
arXiv Detail & Related papers (2020-06-26T03:34:04Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback from false AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that was perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)