INTRPRT: A Systematic Review of and Guidelines for Designing and
Validating Transparent AI in Medical Image Analysis
- URL: http://arxiv.org/abs/2112.12596v1
- Date: Tue, 21 Dec 2021 05:14:44 GMT
- Title: INTRPRT: A Systematic Review of and Guidelines for Designing and
Validating Transparent AI in Medical Image Analysis
- Authors: Haomin Chen, Catalina Gomez, Chien-Ming Huang, Mathias Unberath
- Abstract summary: From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e. a relationship between algorithm and user.
Following human-centered design principles in healthcare and medical image analysis is challenging due to the limited availability of and access to end users.
We introduce the INTRPRT guideline, a systematic design directive for transparent ML systems in medical image analysis.
- Score: 5.3613726625503215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transparency in Machine Learning (ML) attempts to reveal the working
mechanisms of complex models. Transparent ML promises to advance the human factors
engineering goals of human-centered AI for its target users. From a
human-centered design perspective, transparency is not a property of the ML
model but an affordance, i.e. a relationship between algorithm and user; as a
result, iterative prototyping and evaluation with users is critical to
attaining adequate solutions that afford transparency. However, following
human-centered design principles in healthcare and medical image analysis is
challenging due to the limited availability of and access to end users. To
investigate the state of transparent ML in medical image analysis, we conducted
a systematic review of the literature. Our review reveals multiple severe
shortcomings in the design and validation of transparent ML for medical image
analysis applications. We find that most studies to date approach transparency
as a property of the model itself, similar to task performance, without
considering end users during either development or evaluation. Additionally,
the lack of user research and the sporadic validation of transparency claims
put contemporary research on transparent ML for medical image analysis at risk
of being incomprehensible to users, and thus, clinically irrelevant. To
alleviate these shortcomings in forthcoming research while acknowledging the
challenges of human-centered design in healthcare, we introduce the INTRPRT
guideline, a systematic design directive for transparent ML systems in medical
image analysis. The INTRPRT guideline suggests formative user research as the
first step of transparent model design to understand user needs and domain
requirements. Following this process produces evidence to support design
choices, and ultimately, increases the likelihood that the algorithms afford
transparency.
Related papers
- Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
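The entry above only gestures at the architecture. As a minimal sketch of the dual-domain idea (an assumed PyTorch layout, not the authors' implementation), a small CNN can consume an MRI slice and its log-magnitude spectrum as two parallel inputs:

```python
# Minimal sketch (not the authors' code): a CNN that sees an MRI slice in both
# the image domain and the frequency (k-space magnitude) domain.
import torch
import torch.nn as nn


class DualDomainCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One small convolutional branch per domain; layer sizes are illustrative.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.freq_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 1, H, W) grayscale MRI slice
        spectrum = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))
        log_mag = torch.log1p(spectrum.abs())          # frequency-domain view
        img_feat = self.image_branch(image).flatten(1)
        frq_feat = self.freq_branch(log_mag).flatten(1)
        return self.classifier(torch.cat([img_feat, frq_feat], dim=1))


if __name__ == "__main__":
    model = DualDomainCNN()
    logits = model(torch.randn(4, 1, 128, 128))        # 4 random "slices"
    print(logits.shape)                                 # torch.Size([4, 2])
```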
arXiv Detail & Related papers (2024-09-20T15:43:26Z)
- SkinGEN: an Explainable Dermatology Diagnosis-to-Generation Framework with Interactive Vision-Language Models [52.90397538472582]
SkinGEN is a diagnosis-to-generation framework that generates reference demonstrations from diagnosis results provided by a VLM.
We conduct a user study with 32 participants evaluating both the system performance and explainability.
Results demonstrate that SkinGEN significantly improves users' comprehension of VLM predictions and fosters increased trust in the diagnostic process.
arXiv Detail & Related papers (2024-04-23T05:36:33Z)
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because these models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
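The summary above does not spell out how "inconsistency" between saliency maps can be quantified. A minimal, hypothetical sketch (using generic gradient-based methods, not necessarily those analyzed in the paper): compute two saliency maps for the same prediction and check how well their pixel rankings agree.

```python
# Illustrative sketch (not from the paper): compare two common saliency methods
# on the same input and quantify how much they disagree.
import torch
import torch.nn as nn
from scipy.stats import spearmanr

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 2))
x = torch.randn(1, 1, 32, 32)   # stand-in for a medical image
target = 1                      # class whose evidence we want to explain

def vanilla_gradient(model, x, target):
    x = x.clone().detach().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().squeeze()

def smoothgrad(model, x, target, n=25, sigma=0.1):
    grads = torch.zeros_like(x)
    for _ in range(n):
        noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        model(noisy)[0, target].backward()
        grads += noisy.grad
    return (grads / n).abs().squeeze()

g1 = vanilla_gradient(model, x, target).flatten().numpy()
g2 = smoothgrad(model, x, target).flatten().numpy()
rho, _ = spearmanr(g1, g2)      # low rank correlation = inconsistent explanations
print(f"rank correlation between saliency maps: {rho:.2f}")
```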
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- Clairvoyance: A Pipeline Toolkit for Medical Time Series [95.22483029602921]
Time-series learning is the bread and butter of data-driven *clinical decision support*.
Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a software toolkit.
Clairvoyance is the first to demonstrate the viability of a comprehensive and automatable pipeline for clinical time-series ML.
arXiv Detail & Related papers (2023-10-28T12:08:03Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
We report on the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the potential for clinical translation of such methods.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
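As a rough illustration of the multi-task idea (a hypothetical sketch, not the paper's actual shortcut-testing procedure), a shared encoder can be given a diagnostic head and an auxiliary head for a sensitive attribute; varying the auxiliary loss weight probes how strongly the learned representation encodes that attribute:

```python
# Hypothetical sketch: shared encoder with a diagnostic head and an auxiliary
# head for a sensitive attribute. The auxiliary weight probes whether the
# representation carries the attribute, i.e. a potential shortcut.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim=64, n_diagnoses=2, n_attr=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.diag_head = nn.Linear(32, n_diagnoses)
        self.attr_head = nn.Linear(32, n_attr)

    def forward(self, x):
        z = self.encoder(x)
        return self.diag_head(z), self.attr_head(z)

def training_step(model, x, y_diag, y_attr, aux_weight=0.5):
    diag_logits, attr_logits = model(x)
    loss = nn.functional.cross_entropy(diag_logits, y_diag)
    # A positive aux_weight encourages encoding the attribute; a negative one
    # (gradient-reversal style) discourages it. Either direction exposes how
    # much the diagnosis relies on the attribute.
    return loss + aux_weight * nn.functional.cross_entropy(attr_logits, y_attr)

model = MultiTaskModel()
x = torch.randn(8, 64)                                   # toy feature batch
loss = training_step(model, x,
                     torch.randint(0, 2, (8,)),          # diagnosis labels
                     torch.randint(0, 2, (8,)))          # sensitive attribute
loss.backward()
```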
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- Towards Transparency in Dermatology Image Datasets with Skin Tone Annotations by Experts, Crowds, and an Algorithm [3.6888633946892044]
Public and private image datasets of dermatological conditions rarely include information on skin color.
As a start towards increasing transparency, AI researchers have repurposed the Fitzpatrick skin type (FST) from a measure of patient photosensitivity into a measure for estimating skin tone.
We show that algorithms based on ITA-FST are not reliable for annotating large-scale image datasets.
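For context, ITA-FST pipelines estimate the Individual Typology Angle (ITA) from CIELAB pixel values and bucket it into categories that are then mapped onto FST I-VI. The sketch below uses the standard ITA formula; the category thresholds are commonly cited values included only for illustration, not taken from this paper.

```python
# Illustrative sketch of the ITA-to-skin-type idea the paper critiques.
# Thresholds are commonly cited values, shown here only for illustration.
import math

def individual_typology_angle(L_star: float, b_star: float) -> float:
    # ITA = arctan((L* - 50) / b*) in degrees; atan2 is equivalent for b* > 0
    # and avoids division by zero.
    return math.degrees(math.atan2(L_star - 50.0, b_star))

def ita_category(ita: float) -> str:
    if ita > 55:  return "very light"
    if ita > 41:  return "light"
    if ita > 28:  return "intermediate"
    if ita > 10:  return "tan"
    if ita > -30: return "brown"
    return "dark"

ita = individual_typology_angle(L_star=65.0, b_star=18.0)
print(f"ITA = {ita:.1f} deg -> {ita_category(ita)}")
```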
arXiv Detail & Related papers (2022-07-06T19:50:39Z)
- Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units [24.761080054980713]
We demonstrate the efficiency of recent attribution techniques in explaining the diagnostic decision by visualizing the significant factors in the input image.
Our analysis, by unmasking machine intelligence, underscores the necessity of explainability in medical diagnostic decisions.
arXiv Detail & Related papers (2021-07-19T11:49:31Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
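One concrete way to obtain such uncertainty estimates (an assumed setup for illustration; the review surveys many alternatives) is Monte Carlo dropout, where the spread of repeated stochastic predictions is reported alongside the prediction itself:

```python
# Minimal sketch: Monte Carlo dropout yields a distribution over predictions
# whose spread can be communicated to the user as an uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.3),
                      nn.Linear(32, 2))

def predict_with_uncertainty(model, x, n_samples=50):
    model.train()                       # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # predictive mean and its spread

x = torch.randn(1, 16)                  # stand-in for extracted image features
mean, std = predict_with_uncertainty(model, x)
print(f"p(disease) = {mean[0, 1]:.2f} +/- {std[0, 1]:.2f}")
```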
arXiv Detail & Related papers (2020-11-15T17:26:14Z)