Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering
- URL: http://arxiv.org/abs/2410.17484v1
- Date: Wed, 23 Oct 2024 00:31:17 GMT
- Title: Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering
- Authors: He Zhu, Ren Togo, Takahiro Ogawa, Miki Haseyama
- Abstract summary: We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models.
Our method introduces learnable prompts into a Transformer architecture to efficiently train it on diverse medical datasets without massive computational costs.
- Score: 51.26412822853409
- Abstract: Conventional medical artificial intelligence (AI) models face barriers to clinical application and ethical issues owing to their inability to handle the privacy-sensitive characteristics of medical data. We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models, addressing privacy and reliability challenges in the medical domain. Our method introduces learnable prompts into a Transformer architecture to efficiently train it on diverse medical datasets without massive computational costs. We then introduce a reliable client VQA model that incorporates Dempster-Shafer evidence theory to quantify uncertainty in predictions, enhancing the model's reliability. Furthermore, we propose a novel inter-client communication mechanism that uses maximum likelihood estimation to balance accuracy and uncertainty, fostering efficient integration of insights across clients.
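As a rough illustration of the two reliability ingredients the abstract names, the sketch below pairs a Dempster-Shafer-style evidential head (in the common subjective-logic/Dirichlet formulation) with a hypothetical uncertainty-aware client weighting. The class names, shapes, and the weighting rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialAnswerHead(nn.Module):
    """Dempster-Shafer-style evidential head (subjective-logic formulation).

    Illustrative sketch, not the paper's code: non-negative evidence e_k
    parameterizes a Dirichlet with alpha_k = e_k + 1, yielding both an
    expected answer distribution and a scalar uncertainty.
    """

    def __init__(self, feat_dim: int, num_answers: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_answers)

    def forward(self, features: torch.Tensor):
        evidence = F.softplus(self.proj(features))   # e_k >= 0
        alpha = evidence + 1.0                       # Dirichlet parameters
        strength = alpha.sum(dim=-1, keepdim=True)   # S = sum_k alpha_k
        probs = alpha / strength                     # expected answer probs
        uncertainty = alpha.size(-1) / strength      # u = K / S, in (0, 1]
        return probs, uncertainty

def client_weights(log_likelihoods: list, uncertainties: list) -> torch.Tensor:
    # Hypothetical stand-in for the paper's MLE-based communication rule:
    # clients with higher held-out likelihood and lower reported uncertainty
    # receive a larger share in the aggregated update.
    scores = torch.tensor(log_likelihoods) - torch.tensor(uncertainties)
    return torch.softmax(scores, dim=0)
```

A higher u flags answers the client model should not be trusted on, which is what allows a server to discount unreliable clients during aggregation.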
Related papers
- Future-Proofing Medical Imaging with Privacy-Preserving Federated Learning and Uncertainty Quantification: A Review [14.88874727211064]
AI could soon become routine in clinical practice for disease diagnosis, prognosis, treatment planning, and post-treatment surveillance.
Privacy concerns surrounding patient data present a major barrier to the widespread adoption of AI in medical imaging.
Federated Learning (FL) offers a solution that enables organizations to train AI models collaboratively without sharing sensitive data.
arXiv Detail & Related papers (2024-09-24T16:55:32Z)
- Would You Trust an AI Doctor? Building Reliable Medical Predictions with Kernel Dropout Uncertainty [14.672477787408887]
We introduce a Bayesian Monte Carlo Dropout model with kernel modelling to enhance reliability on small medical datasets.
We demonstrate significant improvements in reliability, even with limited data, offering a promising step towards building trust in AI-driven medical predictions.
arXiv Detail & Related papers (2024-04-16T11:43:26Z)
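A compact sketch of the Monte Carlo dropout idea this entry rests on; the kernel-modelling component is omitted, and all names are assumptions rather than the paper's code.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, passes: int = 30):
    """Keep dropout active at test time and average stochastic forward passes.

    Returns the mean prediction and the per-class variance across passes,
    a standard proxy for model (epistemic) uncertainty.
    """
    model.train()  # enables dropout; assumes no batch-norm statistics update here
    preds = torch.stack([model(x).softmax(dim=-1) for _ in range(passes)])
    return preds.mean(dim=0), preds.var(dim=0)
```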
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into a data aspect and a model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
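One standard way to realize a data/model split like the one described above is to decompose predictive entropy into expected entropy (data aspect) plus mutual information (model aspect); the paper's own decomposition over diagnostic parameters may differ, so treat this as a generic sketch.

```python
import torch

def decompose_uncertainty(sampled_probs: torch.Tensor, eps: float = 1e-12):
    """sampled_probs: (S, B, K) class probabilities from S stochastic samples.

    total = entropy of the mean prediction
    data  = mean per-sample entropy          (aleatoric / data aspect)
    model = total - data, mutual information (epistemic / model aspect)
    """
    mean_p = sampled_probs.mean(dim=0)
    total = -(mean_p * (mean_p + eps).log()).sum(dim=-1)
    data = -(sampled_probs * (sampled_probs + eps).log()).sum(dim=-1).mean(dim=0)
    return total, data, total - data
```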
- Prompt-based Personalized Federated Learning for Medical Visual Question Answering [56.002377299811656]
We present a novel prompt-based personalized federated learning (pFL) method to address data heterogeneity and privacy concerns.
We regard medical datasets from different organs as clients and use pFL to train personalized transformer-based VQA models for each client.
arXiv Detail & Related papers (2024-02-15T03:09:54Z)
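A minimal sketch of the prompt mechanism described above, assuming per-client learnable prompt tokens prepended to a shared Transformer's input sequence; dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class ClientPrompts(nn.Module):
    """Per-client learnable prompt tokens prepended to the token sequence.

    The shared Transformer backbone can stay (mostly) frozen; only these
    few prompt vectors are trained per client, keeping costs low.
    """

    def __init__(self, num_prompts: int = 8, dim: int = 768):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, L, dim) -> (B, num_prompts + L, dim)
        batch = tokens.size(0)
        return torch.cat([self.prompts.expand(batch, -1, -1), tokens], dim=1)
```

Because only the prompt vectors differ across clients, inter-client communication can be limited to these small tensors rather than full model weights.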
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
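The two-stage pipeline above (draft clinical concepts, then score them with a vision-language model) can be sketched as CLIP-style similarity between image and concept embeddings; the encoders and concept list here are placeholders, not the paper's pipeline.

```python
import torch

def concept_scores(image_emb: torch.Tensor, concept_embs: torch.Tensor) -> torch.Tensor:
    """Project an image embedding onto clinical-concept embeddings.

    image_emb:    (B, D) from a vision-language model's image encoder
    concept_embs: (C, D) text embeddings of concepts (e.g. drafted with GPT-4)
    Returns (B, C) cosine similarities for a downstream linear classifier.
    """
    img = torch.nn.functional.normalize(image_emb, dim=-1)
    txt = torch.nn.functional.normalize(concept_embs, dim=-1)
    return img @ txt.T
```

A linear classifier trained on these concept scores then yields predictions that can be read off in terms of the named concepts.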
- Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities [2.9404725327650767]
Reviews progress in developing explainable models for clinical risk prediction.
Emphasizes the need for external validation and the combination of diverse interpretability methods.
Argues that an end-to-end approach to explainability in clinical risk prediction is essential for success.
arXiv Detail & Related papers (2023-08-16T14:51:51Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality, and length of stay are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
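PriMIA's protection comes from cryptographic protocols that are not reproduced here; as a plain, unsecured illustration of the aggregation step being protected, this is ordinary example-weighted federated averaging, with all names illustrative.

```python
import torch

def fedavg(state_dicts: list, num_examples: list) -> dict:
    """Plain (unsecured) FedAvg: example-weighted average of client weights.

    In PriMIA-style privacy-preserving ML, the server would only ever see
    encrypted or secret-shared updates; this version shows only the rule.
    """
    total = float(sum(num_examples))
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack([
            sd[key].float() * (n / total)
            for sd, n in zip(state_dicts, num_examples)
        ]).sum(dim=0)
    return avg
```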