Who Goes First? Influences of Human-AI Workflow on Decision Making in
Clinical Imaging
- URL: http://arxiv.org/abs/2205.09696v1
- Date: Thu, 19 May 2022 16:59:25 GMT
- Title: Who Goes First? Influences of Human-AI Workflow on Decision Making in
Clinical Imaging
- Authors: Riccardo Fogliato, Shreya Chappidi, Matthew Lungren, Michael Fitzke,
Mark Parkinson, Diane Wilson, Paul Fisher, Eric Horvitz, Kori Inkpen, Besmira
Nushi
- Abstract summary: This study explores the effects of providing AI assistance at the start of a diagnostic session in radiology versus after the radiologist has made a provisional decision.
We found that participants who are asked to register provisional responses in advance of reviewing AI inferences are less likely to agree with the AI regardless of whether the advice is accurate and, in instances of disagreement with the AI, are less likely to seek the second opinion of a colleague.
- Score: 24.911186503082465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Details of the designs and mechanisms in support of human-AI collaboration
must be considered in the real-world fielding of AI technologies. A critical
aspect of interaction design for AI-assisted human decision making is the set of
policies governing the display and sequencing of AI inferences within larger decision-making
workflows. We have a poor understanding of the influences of making AI
inferences available before versus after human review of a diagnostic task at
hand. We explore the effects of providing AI assistance at the start of a
diagnostic session in radiology versus after the radiologist has made a
provisional decision. We conducted a user study where 19 veterinary
radiologists identified radiographic findings present in patients' X-ray
images, with the aid of an AI tool. We employed two workflow configurations to
analyze (i) anchoring effects, (ii) human-AI team diagnostic performance and
agreement, (iii) time spent and confidence in decision making, and (iv)
perceived usefulness of the AI. We found that participants who are asked to
register provisional responses in advance of reviewing AI inferences are less
likely to agree with the AI regardless of whether the advice is accurate and,
in instances of disagreement with the AI, are less likely to seek the second
opinion of a colleague. These participants also reported the AI advice to be
less useful. Surprisingly, requiring provisional decisions on cases in advance
of the display of AI inferences did not lengthen the time participants spent on
the task. The study provides generalizable and actionable insights for the
deployment of clinical AI tools in human-in-the-loop systems and introduces a
methodology for studying alternative designs for human-AI collaboration. We
make our experimental platform available as open source to facilitate future
research on the influence of alternate designs on human-AI workflows.
Related papers
- Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making [2.964175945467257]
A growing body of research explores the use of AI explanations across users' decision phases in human-AI collaborative decision-making.
Previous studies have found issues of overreliance on 'wrong' AI outputs.
We propose interactive example-based explanations to improve health professionals' onboarding with AI.
arXiv Detail & Related papers (2024-09-24T07:20:09Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge [22.21959942886099]
We introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model.
We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening.
We find that, by providing participants with accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers.
arXiv Detail & Related papers (2023-08-30T01:54:31Z)
- Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making [5.381004207943597]
We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion.
We analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations.
Our work discusses the potential of counterfactual explanations to better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs.
arXiv Detail & Related papers (2023-08-08T16:23:46Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Factors that influence the adoption of human-AI collaboration in clinical decision-making [0.0]
We identify factors for the adoption of human-AI collaboration by conducting a series of semi-structured interviews with experts in the healthcare domain.
We identify six relevant adoption factors and highlight existing tensions between them and effective human-AI collaboration.
arXiv Detail & Related papers (2022-04-19T18:19:39Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback from false AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that was perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.