Exploring the Requirements of Clinicians for Explainable AI Decision Support Systems in Intensive Care
- URL: http://arxiv.org/abs/2411.11774v1
- Date: Mon, 18 Nov 2024 17:53:07 GMT
- Title: Exploring the Requirements of Clinicians for Explainable AI Decision Support Systems in Intensive Care
- Authors: Jeffrey N. Clark, Matthew Wragg, Emily Nielsen, Miquel Perello-Nieto, Nawid Keshtmand, Michael Ambler, Shiv Sharma, Christopher P. Bourdeaux, Amberly Brigden, Raul Santos-Rodriguez
- Abstract summary: Thematic analysis revealed three core themes: (T1) ICU decision-making relies on a wide range of factors, (T2) the complexity of patient state is challenging for shared decision-making, and (T3) requirements and capabilities of AI decision support systems.
We include design recommendations from clinical input, providing insights to inform future AI systems for intensive care.
- Score: 1.950650243134358
- Abstract: There is a growing need to understand how digital systems can support clinical decision-making, particularly as artificial intelligence (AI) models become increasingly complex and less human-interpretable. This complexity raises concerns about trustworthiness, impacting safe and effective adoption of such technologies. Improved understanding of decision-making processes and requirements for explanations coming from decision support tools is a vital component in providing effective explainable solutions. This is particularly relevant in the data-intensive, fast-paced environments of intensive care units (ICUs). To explore these issues, group interviews were conducted with seven ICU clinicians, representing various roles and experience levels. Thematic analysis revealed three core themes: (T1) ICU decision-making relies on a wide range of factors, (T2) the complexity of patient state is challenging for shared decision-making, and (T3) requirements and capabilities of AI decision support systems. We include design recommendations from clinical input, providing insights to inform future AI systems for intensive care.
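To make the kind of explanation under discussion concrete, the sketch below shows how a per-patient feature attribution might be surfaced alongside a risk prediction. It is a minimal, hypothetical Python example: the synthetic vital-sign features, the logistic regression model, and the linear attribution scheme are illustrative assumptions and are not taken from the paper or from any deployed ICU system.

```python
# Illustrative only: a toy example of the kind of per-patient explanation an
# ICU decision support tool might surface to clinicians. The features, model,
# and attribution scheme are hypothetical and not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["heart_rate", "lactate", "mean_arterial_pressure", "spo2"]

# Synthetic cohort: 500 patients, binary "deterioration within 24h" label.
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] * 1.5 - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one patient: for a linear model, the contribution of each feature
# to the log-odds is simply coefficient * (value - cohort mean).
patient = X[0]
contributions = model.coef_[0] * (patient - X.mean(axis=0))
risk = model.predict_proba(patient.reshape(1, -1))[0, 1]

print(f"Predicted deterioration risk: {risk:.2f}")
for name, contrib in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>24s}: {contrib:+.2f} (log-odds contribution)")
```

In practice a clinical tool would more likely use a dedicated attribution method such as SHAP and present the result in clinician-facing language; the point here is only the shape of the output a clinician would see: a prediction plus a ranked list of contributing factors.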
Related papers
- A Survey on Human-Centered Evaluation of Explainable AI Methods in Clinical Decision Support Systems [45.89954090414204]
This paper provides a survey of human-centered evaluations of Explainable AI methods in Clinical Decision Support Systems.
Our findings reveal key challenges in the integration of XAI into healthcare and propose a structured framework to align the evaluation methods of XAI with the clinical needs of stakeholders.
arXiv Detail & Related papers (2025-02-14T01:21:29Z)
- Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios [46.729092855387165]
We study the choice of the backbone LLM for medical AI agents, which is the foundation for the agent's overall reasoning and action generation.
Our findings demonstrate o1's ability to enhance diagnostic accuracy and consistency, paving the way for smarter, more responsive AI tools.
arXiv Detail & Related papers (2024-11-16T18:19:53Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework [13.215318138576713]
The paper reviews interpretable AI processes, methods, applications, and the challenges of implementation in healthcare.
It aims to foster a comprehensive understanding of the crucial role of a robust interpretability approach in healthcare.
arXiv Detail & Related papers (2023-11-18T12:29:18Z)
- Clairvoyance: A Pipeline Toolkit for Medical Time Series [95.22483029602921]
Time-series learning is the bread and butter of data-driven clinical decision support.
Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a software toolkit.
Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML.
arXiv Detail & Related papers (2023-10-28T12:08:03Z)
- Applying Artificial Intelligence to Clinical Decision Support in Mental Health: What Have We Learned? [0.0]
We present a case study of a recently developed AI-CDSS, Aifred Health, aimed at supporting the selection and management of treatment in major depressive disorder.
We consider both the principles espoused during development and testing of this AI-CDSS, as well as the practical solutions developed to facilitate implementation.
arXiv Detail & Related papers (2023-03-06T21:40:51Z)
- The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z)
- Moral Decision-Making in Medical Hybrid Intelligent Systems: A Team Design Patterns Approach to the Bias Mitigation and Data Sharing Design Problems [0.0]
Team Design Patterns (TDPs) describe successful and reusable configurations of design problems in which decisions have a moral component.
This thesis describes a set of solutions for two design problems in a medical HI system.
A survey was created to assess the usability of the patterns on their understandability, effectiveness, and generalizability.
arXiv Detail & Related papers (2021-02-16T17:09:43Z)
- Artificial Intelligence Decision Support for Medical Triage [0.0]
We developed a triage system, now certified and in use at the largest European telemedicine provider.
The system evaluates care alternatives through interactions with patients via a mobile application.
Reasoning on an initial set of provided symptoms, the triage application generates AI-powered, personalized questions to better characterize the problem (an illustrative question-selection sketch follows this list).
arXiv Detail & Related papers (2020-11-09T16:45:01Z)
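The triage entry above describes a system that asks personalized follow-up questions from an initial set of symptoms. As a rough illustration of one way such a question loop can work, the hypothetical sketch below picks the next symptom question by expected information gain over a small set of candidate conditions; the conditions, symptoms, and probabilities are invented for illustration and do not describe the certified system in that paper.

```python
# Hypothetical sketch of a symptom-driven triage loop: pick the next question
# that maximises the expected reduction in uncertainty over candidate
# conditions. The conditions, symptoms, and probabilities are invented for
# illustration and do not reflect the certified system described in the paper.
import numpy as np

conditions = ["migraine", "tension_headache", "sinusitis"]
symptoms = ["nausea", "light_sensitivity", "facial_pressure", "neck_stiffness"]

# P(symptom present | condition); rows = conditions, columns = symptoms.
likelihood = np.array([
    [0.8, 0.9, 0.2, 0.3],  # migraine
    [0.2, 0.3, 0.1, 0.7],  # tension headache
    [0.3, 0.2, 0.9, 0.2],  # sinusitis
])
posterior = np.full(len(conditions), 1 / len(conditions))  # uniform prior


def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


def update(posterior, symptom_idx, present):
    """Bayesian update of the condition posterior after one yes/no answer."""
    like = likelihood[:, symptom_idx] if present else 1 - likelihood[:, symptom_idx]
    post = posterior * like
    return post / post.sum()


def best_question(posterior, asked):
    """Pick the unasked symptom with the largest expected entropy reduction."""
    best, best_gain = None, -1.0
    for j in range(len(symptoms)):
        if j in asked:
            continue
        p_yes = float((posterior * likelihood[:, j]).sum())
        expected_h = (p_yes * entropy(update(posterior, j, True))
                      + (1 - p_yes) * entropy(update(posterior, j, False)))
        gain = entropy(posterior) - expected_h
        if gain > best_gain:
            best, best_gain = j, gain
    return best


asked = set()
j = best_question(posterior, asked)
print(f"First question to ask: any {symptoms[j].replace('_', ' ')}?")
posterior = update(posterior, j, present=True)  # simulate a "yes" answer
print("Updated condition posterior:", dict(zip(conditions, posterior.round(3))))
```

A real triage system layers much more on top (safety overrides, urgency thresholds, free-text symptom capture), but the expected-information-gain loop captures the basic "next best question" idea.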
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.