Bidirectional human-AI collaboration in brain tumour assessments improves both expert human and AI agent performance
- URL: http://arxiv.org/abs/2512.19707v1
- Date: Sat, 13 Dec 2025 18:56:50 GMT
- Title: Bidirectional human-AI collaboration in brain tumour assessments improves both expert human and AI agent performance
- Authors: James K Ruffle, Samia Mohinta, Guilherme Pombo, Asthik Biswas, Alan Campbell, Indran Davagnanam, David Doig, Ahmed Hamman, Harpreet Hyare, Farrah Jabeen, Emma Lim, Dermot Mallon, Stephanie Owen, Sophie Wilkinson, Sebastian Brandner, Parashkev Nachev
- Abstract summary: Human-AI partnerships improve accuracy and metacognitive ability not only for radiologists supported by AI, but also for AI agents supported by radiologists. Our work suggests that the maximal value of AI in healthcare could emerge not from replacing human intelligence, but from AI agents that routinely leverage and amplify it.
- Score: 0.617129457697534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The benefits of artificial intelligence (AI)-human partnerships, in which AI agents enhance expert human performance, are increasingly studied. Though rarely evaluated in healthcare, the inverse approach is also possible: AI benefiting from the support of an expert human agent. Here, we investigate both human-AI clinical partnership paradigms in the magnetic resonance imaging-guided characterisation of patients with brain tumours. We reveal that human-AI partnerships improve accuracy and metacognitive ability not only for radiologists supported by AI, but also for AI agents supported by radiologists. Moreover, the greatest patient benefit was evident with an AI agent supported by a human one. Synergistic improvements in agent accuracy, metacognitive performance, and inter-rater agreement suggest that AI can create more capable, confident, and consistent clinical agents, whether human or model-based. Our work suggests that the maximal value of AI in healthcare could emerge not from replacing human intelligence, but from AI agents that routinely leverage and amplify it.
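The abstract reports gains in inter-rater agreement between clinical agents, but does not specify which agreement statistic was used. As an illustrative sketch only (not the authors' method), a chance-corrected measure such as Cohen's kappa could quantify agreement between two raters labelling the same cases; the function name and data below are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement.
    """
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    # Observed proportion of cases where the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1.0:
        return 1.0  # both raters use a single identical label throughout
    return (observed - expected) / (1 - expected)

# Hypothetical binary tumour labels from two raters over four cases:
print(cohens_kappa([1, 1, 0, 1], [1, 0, 0, 1]))  # 0.5
```

Established implementations such as `sklearn.metrics.cohen_kappa_score` provide the same statistic with additional weighting options.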
Related papers
- Align When They Want, Complement When They Need! Human-Centered Ensembles for Adaptive Human-AI Collaboration [13.041288521972563]
In human-AI decision making, designing AI that complements human expertise has been a natural strategy to enhance human-AI collaboration. An aligned AI fosters trust yet risks reinforcing suboptimal human behavior and lowering human-AI team performance. We introduce a novel human-centered adaptive AI ensemble that strategically toggles between two specialist AI models.
arXiv Detail & Related papers (2026-02-23T18:22:58Z)
- Explainable AI as a Double-Edged Sword in Dermatology: The Impact on Clinicians versus The Public [46.86429592892395]
Explainable AI (XAI) addresses this by providing insight into AI decision-making. We present results from two large-scale experiments combining a fairness-based diagnosis AI model and different XAI explanations.
arXiv Detail & Related papers (2025-12-14T00:06:06Z)
- Can Domain Experts Rely on AI Appropriately? A Case Study on AI-Assisted Prostate Cancer MRI Diagnosis [19.73932120146401]
We conduct an in-depth collaboration with radiologists in prostate cancer diagnosis based on MRI images. We develop an interface and conduct two experiments to study how AI assistance and performance feedback shape the decision making of domain experts.
arXiv Detail & Related papers (2025-02-03T18:59:38Z)
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve the performance of a human-AI team is often unclear without knowing what particular information and strategies each agent employs. We propose a model based in statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Towards Effective Human-AI Decision-Making: The Role of Human Learning in Appropriate Reliance on AI Advice [3.595471754135419]
We show the relationship between learning and appropriate reliance in an experiment with 100 participants.
This work provides fundamental concepts for analyzing reliance and derives implications for the effective design of human-AI decision-making.
arXiv Detail & Related papers (2023-10-03T14:51:53Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- On the Effect of Information Asymmetry in Human-AI Teams [0.0]
We focus on the existence of complementarity potential between humans and AI.
Specifically, we identify information asymmetry as an essential source of complementarity potential.
By conducting an online experiment, we demonstrate that humans can use contextual information unavailable to the AI to adjust the AI's decisions.
arXiv Detail & Related papers (2022-05-03T13:02:50Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.