Clinicians don't know what explanations they need: A case study on eliciting AI software explainability requirements
- URL: http://arxiv.org/abs/2501.09592v3
- Date: Wed, 22 Jan 2025 09:48:33 GMT
- Title: Clinicians don't know what explanations they need: A case study on eliciting AI software explainability requirements
- Authors: Tor Sporsem, Stine Rasdal Finserås, Inga Strümke
- Abstract summary: This paper analyses how software developers elicit explainability requirements when creating a software application with an AI component.
Following a small software development team at a Norwegian hospital, we observe their process of simultaneously developing the AI application and discovering what explanations clinicians require from the AI predictions.
Since clinicians struggled to articulate their explainability needs before interacting with the system, an iterative approach proved effective.
- Abstract: This paper analyses how software developers elicit explainability requirements when creating a software application with an AI component, through a case study using AI in the medical context of predicting cerebral palsy (CP) risk in infants. Following a small software development team at a Norwegian hospital, we observe their process of simultaneously developing the AI application and discovering what explanations clinicians require from the AI predictions. Since clinicians struggled to articulate their explainability needs before interacting with the system, an iterative approach proved effective: the team started with minimal explanations and refined these based on clinicians' responses during real patient examinations. Our preliminary findings from the first two iterations show that clinicians valued "interrogative explanations" - i.e., tools that let them explore and compare the AI predictions with their own assessments - over detailed technical explanations of the AI model's inner workings. Based on our analysis, we suggest that successful explainability requirements emerge through iterative collaboration between developers and users rather than being fully specified upfront. To the best of our knowledge, this is the first empirical case study on eliciting explainability requirements in software engineering.
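To make the idea of "interrogative explanations" concrete, here is a minimal, hypothetical Python sketch of an interface element that places the AI's CP-risk prediction next to the clinician's own assessment and flags disagreements worth interrogating. The data class, function, threshold, and example rationales are illustrative assumptions, not the team's actual software.

```python
# Hypothetical sketch of an "interrogative explanation": rather than exposing the
# model's inner workings, let the clinician put their own assessment next to the
# AI prediction and interrogate any disagreement. All names, the threshold, and
# the example rationales are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Assessment:
    source: str     # "AI model" or "clinician"
    cp_risk: float  # estimated probability of cerebral palsy, 0.0-1.0
    rationale: str  # short free-text justification

def compare(ai: Assessment, clinician: Assessment, tolerance: float = 0.15) -> str:
    """Return a side-by-side comparison and flag disagreements worth discussing."""
    lines = [
        f"{ai.source:>12}: risk {ai.cp_risk:.0%}  ({ai.rationale})",
        f"{clinician.source:>12}: risk {clinician.cp_risk:.0%}  ({clinician.rationale})",
    ]
    gap = abs(ai.cp_risk - clinician.cp_risk)
    if gap > tolerance:
        lines.append(f"Disagreement of {gap:.0%} exceeds {tolerance:.0%}: review the case together.")
    else:
        lines.append("Assessments broadly agree.")
    return "\n".join(lines)

if __name__ == "__main__":
    print(compare(
        Assessment("AI model", 0.72, "movement variability below reference range"),
        Assessment("clinician", 0.40, "general movements look near-typical"),
    ))
```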
Related papers
- 2-Factor Retrieval for Improved Human-AI Decision Making in Radiology [41.2574078312095]
This study compares previously used explainable AI techniques with a newly proposed technique termed '2-factor retrieval (2FR)'.
2FR is a combination of interface design and search retrieval that returns similarly labeled data without processing this data.
We find that when tested on chest X-ray diagnoses, 2FR leads to increases in clinician accuracy, with particular improvements when clinicians are radiologists.
arXiv Detail & Related papers (2024-11-30T06:44:42Z)
- Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios [46.729092855387165]
We study the choice of the backbone LLM for medical AI agents, which is the foundation for the agent's overall reasoning and action generation.
Our findings demonstrate o1's ability to enhance diagnostic accuracy and consistency, paving the way for smarter, more responsive AI tools.
arXiv Detail & Related papers (2024-11-16T18:19:53Z)
- Contrasting Attitudes Towards Current and Future AI Applications for Computerised Interpretation of ECG: A Clinical Stakeholder Interview Study [2.570550251482137]
We conducted a series of interviews with clinicians in the UK.
Our study explores the potential for AI, specifically future 'human-like' computing.
arXiv Detail & Related papers (2024-10-22T10:31:23Z)
- Challenges for Responsible AI Design and Workflow Integration in Healthcare: A Case Study of Automatic Feeding Tube Qualification in Radiology [35.284458448940796]
Nasogastric tubes (NGTs) are feeding tubes that are inserted through the nose into the stomach to deliver nutrition or medication.
Recent AI developments demonstrate the feasibility of robustly detecting NGT placement from Chest X-ray images.
We present a human-centered approach to the problem and describe insights derived following contextual inquiry and in-depth interviews with 15 clinical stakeholders.
arXiv Detail & Related papers (2024-05-08T14:16:22Z)
- Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications [1.6609516435725236]
We propose an Explainable AI (XAI) framework designed to answer five critical questions.
We incorporated various techniques, such as Local Interpretable Model-agnostic Explanations (LIME); an illustrative LIME sketch appears after this list.
We showcased an XAI interface prototype that adheres to this framework for predicting major postoperative complications.
arXiv Detail & Related papers (2024-04-18T21:01:27Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explainable AI applications in the Medical Domain: a systematic review [1.4419517737536707]
The field of Medical AI faces various challenges in building user trust, complying with regulations, and using data ethically.
This paper presents a literature review on the recent developments of XAI solutions for medical decision support, based on a representative sample of 198 articles published in recent years.
arXiv Detail & Related papers (2023-08-10T08:12:17Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- VBridge: Connecting the Dots Between Features, Explanations, and Data for Healthcare Models [85.4333256782337]
VBridge is a visual analytics tool that seamlessly incorporates machine learning explanations into clinicians' decision-making workflow.
We identified three key challenges: clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence.
We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians.
arXiv Detail & Related papers (2021-08-04T17:34:13Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
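The Transparent AI entry above mentions LIME. As a rough, self-contained illustration of what a LIME local explanation looks like in code, the sketch below applies the open-source `lime` package to a synthetic tabular classifier; the features, labels, model, and data are invented for illustration and do not correspond to any dataset in the papers above.

```python
# Minimal, illustrative use of LIME (Local Interpretable Model-agnostic
# Explanations) on synthetic tabular data. Everything here is invented for
# illustration; it does not reproduce any of the cited papers' systems.
# Requires: pip install lime scikit-learn numpy

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["age", "heart_rate", "blood_pressure", "lab_marker"]

# Synthetic training data and labels (hypothetical risk classification task)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["low risk", "high risk"],
    mode="classification",
)

# Explain a single prediction: which features pushed the model towards "high risk"?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```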
This list is automatically generated from the titles and abstracts of the papers on this site.