Towards Case-based Interpretability for Medical Federated Learning
- URL: http://arxiv.org/abs/2408.13626v1
- Date: Sat, 24 Aug 2024 16:42:12 GMT
- Title: Towards Case-based Interpretability for Medical Federated Learning
- Authors: Laura Latorre, Liliana Petrychenko, Regina Beets-Tan, Taisiya Kopytova, Wilson Silva
- Abstract summary: We explore deep generative models to generate case-based explanations in a medical federated learning setting.
Our proof-of-concept focuses on pleural effusion diagnosis and uses publicly available Chest X-ray data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore deep generative models to generate case-based explanations in a medical federated learning setting. Explaining AI model decisions through case-based interpretability is paramount to increasing trust and allowing widespread adoption of AI in clinical practice. However, medical AI training paradigms are shifting towards federated learning settings in order to comply with data protection regulations. In a federated scenario, past data is inaccessible to the current user. Thus, we use a deep generative model to generate synthetic examples that protect privacy and explain decisions. Our proof-of-concept focuses on pleural effusion diagnosis and uses publicly available Chest X-ray data.
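Since the abstract describes the mechanism only at a high level, a minimal PyTorch sketch of the general idea follows: encode the query image with a generative model (a toy VAE here), then decode samples drawn near its latent code as synthetic look-alike cases. The architecture, shapes, and latent-perturbation step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a toy VAE used to
# produce synthetic case-based explanations. Shapes and architecture
# are illustrative; the variance head of a full VAE is omitted.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)  # latent mean only, for brevity
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Sigmoid(),
        )

    def encode(self, x):
        return self.mu(self.enc(x))

    def decode(self, z):
        return self.dec(z).view(-1, 1, 64, 64)

@torch.no_grad()
def synthetic_explanation(vae, query, n_samples=4, noise_scale=0.1):
    """Return privacy-preserving look-alike cases for a query image.

    Past patient data is inaccessible in the federated setting, so we
    sample around the query's latent code and decode synthetic
    neighbours instead of retrieving real training images.
    """
    z = vae.encode(query)                                   # (1, latent_dim)
    z = z + noise_scale * torch.randn(n_samples, z.shape[-1])
    return vae.decode(z)                                    # (n, 1, 64, 64)

vae = TinyVAE()                   # in practice: trained on local X-rays
query = torch.rand(1, 1, 64, 64)  # stand-in for a chest X-ray
print(synthetic_explanation(vae, query).shape)  # torch.Size([4, 1, 64, 64])
```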
Related papers
- Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
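A hedged sketch of that two-step recipe (concepts from a language model, then an interpretable concept bottleneck) is shown below; the concept list, the 512-dimensional embeddings, and the linear head are stand-ins for the paper's actual vision-language components.

```python
# Hedged sketch of a concept bottleneck (not the paper's code): project
# an image embedding onto text embeddings of clinical concepts, then
# classify from the interpretable concept scores. Random tensors stand
# in for the outputs of a vision-language model.
import torch
import torch.nn.functional as F

concepts = ["pleural effusion", "cardiomegaly", "consolidation"]  # e.g. GPT-4 suggestions
concept_emb = F.normalize(torch.randn(len(concepts), 512), dim=-1)  # text-encoder output
image_emb = F.normalize(torch.randn(1, 512), dim=-1)                # image-encoder output

concept_scores = image_emb @ concept_emb.T      # the interpretable bottleneck
classifier = torch.nn.Linear(len(concepts), 2)  # concept scores -> diagnosis
logits = classifier(concept_scores)
print(dict(zip(concepts, concept_scores.squeeze(0).tolist())))
```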
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Informing clinical assessment by contextualizing post-hoc explanations
of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
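The paper's actual prompts are not reproduced here; purely as an invented illustration of "presenting context around a risk prediction model inference", the snippet below formats a risk output and its top attributions into an LLM prompt. Feature names, values, and wording are hypothetical.

```python
# Invented illustration only: wrap a post-hoc attribution (e.g. SHAP
# values) into a prompt asking an LLM to contextualize the prediction.
top_features = [("HbA1c", 0.31), ("eGFR", -0.22), ("age", 0.11)]
risk = 0.74

prompt = (
    f"A comorbidity risk model predicts a risk of {risk:.0%}.\n"
    "Most influential features (signed attribution):\n"
    + "\n".join(f"- {name}: {val:+.2f}" for name, val in top_features)
    + "\nExplain these drivers in the context of the patient's clinical state."
)
print(prompt)  # send to an LLM of choice to obtain the contextual narrative
```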
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Context-dependent Explainability and Contestability for Trustworthy
Medical Artificial Intelligence: Misclassification Identification of
Morbidity Recognition Models in Preterm Infants [0.0]
Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support the end users.
We built our methodology on three main pillars: decomposing the feature set by leveraging a clinical-context latent space, assessing the clinical association of global explanations, and producing Latent Space Similarity (LSS)-based local explanations.
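As a rough sketch of what an LSS-style local explanation can look like (an assumption about the general mechanism, not the paper's code), one encodes the query into the model's latent space and retrieves the most similar known cases:

```python
# Minimal latent-space-similarity sketch: explain a prediction by the
# nearest cases in latent space. Random tensors stand in for encodings.
import torch
import torch.nn.functional as F

latent_bank = F.normalize(torch.randn(1000, 128), dim=-1)  # encoded training cases
query_latent = F.normalize(torch.randn(1, 128), dim=-1)    # encoded new patient

similarity = (query_latent @ latent_bank.T).squeeze(0)     # cosine similarity
top_vals, top_idx = similarity.topk(5)                     # most supportive cases
print(list(zip(top_idx.tolist(), top_vals.tolist())))
```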
arXiv Detail & Related papers (2022-12-17T07:59:09Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning
Framework in Classification of Medical Images on Limited Data: A COVID-19
Case Study [77.34726150561087]
The COVID-19 pandemic spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and verified in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- Review of Disentanglement Approaches for Medical Applications -- Towards
Solving the Gordian Knot of Generative Models in Healthcare [3.5586630313792513]
We give a comprehensive overview of popular generative models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Flow-based Models.
After introducing the theoretical frameworks, we give an overview of recent medical applications and discuss the impact and importance of disentanglement approaches for medical applications.
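For orientation, the beta-VAE objective, a standard formulation in the disentanglement literature (shown here for context, not quoted from the review), is:

```latex
% beta-VAE objective; beta = 1 recovers the ordinary VAE ELBO, while
% beta > 1 pressures the posterior toward the factorized prior, the
% usual route to disentangled latents.
\mathcal{L}(\theta, \phi; x) =
    \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
    - \beta \, D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\middle\|\, p(z) \right)
```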
arXiv Detail & Related papers (2022-03-21T17:06:22Z)
- Application of Homomorphic Encryption in Medical Imaging [60.51436886110803]
We show how HE can be used to make predictions over medical images while preventing unauthorized secondary use of data.
We report experiments using 3D chest CT scans for a nodule detection task.
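The abstract does not specify the scheme; as one concrete illustration of the principle, the sketch below scores a linear model on Paillier-encrypted features with the open-source `phe` library. The features, weights, and linear model are invented stand-ins for the paper's CT pipeline.

```python
# Hedged sketch: inference over encrypted data with additively
# homomorphic Paillier encryption (illustrative, not the paper's code).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

features = [0.8, 1.5, -0.3]  # client-side plaintext features
weights = [0.5, -1.2, 2.0]   # model weights held by the server
bias = 0.1

encrypted = [public_key.encrypt(x) for x in features]  # client encrypts

# The server computes on ciphertexts only: Paillier supports
# ciphertext + ciphertext and plaintext * ciphertext.
encrypted_score = public_key.encrypt(bias)
for w, e in zip(weights, encrypted):
    encrypted_score = encrypted_score + w * e

print(private_key.decrypt(encrypted_score))  # only the key holder can read it
```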
arXiv Detail & Related papers (2021-10-12T19:57:12Z)
- Leveraging Clinical Context for User-Centered Explainability: A Diabetes
Use Case [4.520155732176645]
We implement a proof-of-concept (POC) in a type-2 diabetes (T2DM) use case where we assess the risk of chronic kidney disease (CKD).
Within the POC, we include risk prediction models for CKD, post-hoc explainers of the predictions, and other natural-language modules.
Our POC approach covers multiple knowledge sources and clinical scenarios, blends knowledge to explain data and predictions to primary care physicians (PCPs), and received an enthusiastic response from our medical expert.
arXiv Detail & Related papers (2021-07-06T02:44:40Z)
- FedDPGAN: Federated Differentially Private Generative Adversarial
Networks Framework for the Detection of COVID-19 Pneumonia [11.835113185061147]
We propose a Federated Differentially Private Generative Adversarial Network (FedDPGAN) to detect COVID-19 pneumonia.
The proposed model is evaluated on three types of chest X-ray (CXR) image datasets (COVID-19, normal, and normal pneumonia).
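The two ingredients in the title can be sketched independently; the snippet below shows a simplified DP-SGD-style gradient sanitization step on a client and plain FedAvg weight averaging on the server. It illustrates the named techniques rather than the FedDPGAN code: the clip norm and noise multiplier are placeholders, and true DP-SGD clips per-example gradients.

```python
# Simplified DP + federated averaging sketch (not the FedDPGAN code).
import torch

def sanitize_gradients(model, clip_norm=1.0, noise_multiplier=1.1):
    """Clip the gradient norm and add calibrated Gaussian noise."""
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            p.grad += noise_multiplier * clip_norm * torch.randn_like(p.grad)

def fedavg(client_states):
    """Average client model weights into a new global state (FedAvg)."""
    return {k: torch.stack([s[k] for s in client_states]).mean(0)
            for k in client_states[0]}

clients = [torch.nn.Linear(16, 1) for _ in range(3)]  # stand-ins for generators

# One client-side step: compute a loss, backprop, sanitize the gradient.
loss = clients[0](torch.randn(8, 16)).pow(2).mean()
loss.backward()
sanitize_gradients(clients[0])

# Server-side step: aggregate the (sanitized) client models.
global_state = fedavg([c.state_dict() for c in clients])
print(global_state["weight"].shape)  # torch.Size([1, 16])
```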
arXiv Detail & Related papers (2021-04-26T13:52:12Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross-reconstruction loss and leverages the label information to guide the construction of the latent representation.
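The cross-reconstruction idea can be illustrated compactly: each view's latent code is asked to reconstruct the other view, which pushes the encoders toward a shared representation. The sketch below is a minimal two-view toy, not the paper's adversarial architecture.

```python
# Toy two-view cross-reconstruction loss (illustrative assumption of
# the stated idea, not the paper's model).
import torch
import torch.nn as nn

enc_a, enc_b = nn.Linear(32, 8), nn.Linear(24, 8)  # per-view encoders
dec_a, dec_b = nn.Linear(8, 32), nn.Linear(8, 24)  # per-view decoders

xa, xb = torch.randn(4, 32), torch.randn(4, 24)    # paired multi-view batch
za, zb = enc_a(xa), enc_b(xb)

mse = nn.functional.mse_loss
self_recon = mse(dec_a(za), xa) + mse(dec_b(zb), xb)    # each view from itself
cross_recon = mse(dec_a(zb), xa) + mse(dec_b(za), xb)   # each view from the other
loss = self_recon + cross_recon
loss.backward()
```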
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
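PriMIA's secure aggregation is built on secure multi-party computation; as a toy illustration of why the server then sees only the aggregate, the sketch below applies pairwise additive masks that cancel in the sum. The key exchange and dropout handling of a real protocol are omitted.

```python
# Toy additive-mask secure aggregation (the general technique behind
# securely aggregated FL, not PriMIA's SMPC implementation).
import torch

torch.manual_seed(0)
updates = [torch.randn(5) for _ in range(3)]  # true client updates
n = len(updates)

# Each pair (i, j), i < j, shares a random mask r_ij: client i adds it,
# client j subtracts it, so every mask cancels in the aggregate.
masked = [u.clone() for u in updates]
for i in range(n):
    for j in range(i + 1, n):
        r = torch.randn(5)
        masked[i] += r
        masked[j] -= r

# Individual masked[i] reveal nothing useful, but the sum is exact.
print(torch.allclose(sum(masked), sum(updates)))  # True
```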
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.