Understanding how the use of AI decision support tools affect critical
thinking and over-reliance on technology by drug dispensers in Tanzania
- URL: http://arxiv.org/abs/2302.09487v2
- Date: Wed, 22 Feb 2023 05:18:08 GMT
- Title: Understanding how the use of AI decision support tools affect critical
thinking and over-reliance on technology by drug dispensers in Tanzania
- Authors: Ally Salim Jr, Megan Allen, Kelvin Mariki, Kevin James Masoy and
Jafary Liana
- Abstract summary: We studied how reliant drug shop dispensers were on AI-powered technologies when determining a differential diagnosis for a presented clinical case vignette.
We found that dispensers relied on the decision made by the AI 25 percent of the time, even when the AI provided no explanation for its decision.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of AI in healthcare is designed to improve care delivery and augment
the decisions of providers to enhance patient outcomes. When deployed in
clinical settings, the interaction between providers and AI is a critical
component for measuring and understanding the effectiveness of these digital
tools on broader health outcomes. Even in cases where AI algorithms have high
diagnostic accuracy, healthcare providers often still rely on their experience
and sometimes gut feeling to make a final decision. Other times, providers rely
unquestioningly on the outputs of the AI models, which leads to a concern about
over-reliance on the technology. The purpose of this research was to understand
how reliant drug shop dispensers were on AI-powered technologies when
determining a differential diagnosis for a presented clinical case vignette. We
explored how the drug dispensers responded to technology that was framed as
always correct, in an attempt to measure whether they would begin to rely on it
without any critical thought of their own. We found that dispensers relied on
the decision made by the AI 25 percent of the time, even when the AI provided
no explanation for its decision.
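To make the headline measure concrete, below is a minimal Python sketch of one way a reliance rate like this could be computed; the record structure, the diagnoses, and the specific definition of "reliance" (adopting the AI's answer after initially disagreeing with it) are illustrative assumptions, not the study's actual protocol or data.

```python
# Hypothetical vignette records: for each case, the dispenser's initial
# diagnosis, the AI's suggestion, and the dispenser's final answer.
# All field names and values are made up for illustration.
records = [
    {"initial": "malaria",   "ai": "typhoid",   "final": "typhoid"},   # deferred to AI
    {"initial": "pneumonia", "ai": "pneumonia", "final": "pneumonia"}, # already agreed
    {"initial": "malaria",   "ai": "typhoid",   "final": "malaria"},   # kept own answer
    {"initial": "uti",       "ai": "malaria",   "final": "malaria"},   # deferred to AI
]

# One way to operationalize reliance: among cases where the dispenser
# initially disagreed with the AI, how often did they adopt its answer?
disagreements = [r for r in records if r["initial"] != r["ai"]]
deferred = [r for r in disagreements if r["final"] == r["ai"]]
reliance_rate = len(deferred) / len(disagreements)
print(f"Reliance on AI when initially disagreeing: {reliance_rate:.0%}")
```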
Related papers
- Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making [2.964175945467257]
A growing body of research explores how AI explanations affect users' decision phases in human-AI collaborative decision-making.
Previous studies found issues of overreliance on 'wrong' AI outputs.
We propose interactive example-based explanations to improve health professionals' onboarding with AI.
arXiv Detail & Related papers (2024-09-24T07:20:09Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Explainable AI Enhances Glaucoma Referrals, Yet the Human-AI Team Still Falls Short of the AI Alone [6.740852152639975]
We investigate how various AI explanations help providers distinguish between patients needing immediate or non-urgent specialist referrals.
We built explainable AI algorithms to predict glaucoma surgery needs from routine eyecare data as a proxy for identifying high-risk patients.
We incorporated intrinsic and post-hoc explainability and conducted an online study with optometrists to assess human-AI team performance.
arXiv Detail & Related papers (2024-05-24T03:01:20Z)
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
As AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging [24.911186503082465]
This study explores the effects of providing AI assistance at the start of a diagnostic session in radiology versus after the radiologist has made a provisional decision.
We found that participants who are asked to register provisional responses in advance of reviewing AI inferences are less likely to agree with the AI, regardless of whether the advice is accurate. In instances of disagreement with the AI, they are also less likely to seek the second opinion of a colleague.
arXiv Detail & Related papers (2022-05-19T16:59:25Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching [4.707325679181196]
We introduce and evaluate explanations based on Bayesian Teaching.
We find that medical experts exposed to explanations successfully predict the AI's diagnostic decisions.
These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.
arXiv Detail & Related papers (2021-06-08T20:49:11Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
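The last entry above reports that a confidence score can help calibrate trust in an AI model. A standard way to check whether such scores deserve that trust is the expected calibration error (ECE), which measures the gap between a model's stated confidence and its observed accuracy. The sketch below is a minimal numpy implementation on toy data; it is a generic illustration of the metric, not code from any of the listed papers.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin predictions by stated confidence, then average the per-bin gap
    # between mean confidence and observed accuracy, weighted by bin size.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy data: the model states high confidence but is sometimes wrong,
# so its confidence scores are somewhat miscalibrated.
conf = [0.95, 0.90, 0.80, 0.60, 0.55, 0.99]
hit  = [True, False, True, True, False, True]
print(f"ECE: {expected_calibration_error(conf, hit):.3f}")
```

A low ECE means the displayed confidence roughly matches how often the model is right, which is the property a user's trust should track.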
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.