Towards Privacy-preserving Explanations in Medical Image Analysis
- URL: http://arxiv.org/abs/2107.09652v1
- Date: Tue, 20 Jul 2021 17:35:36 GMT
- Title: Towards Privacy-preserving Explanations in Medical Image Analysis
- Authors: H. Montenegro, W. Silva, J. S. Cardoso
- Abstract summary: The PPRL-VGAN deep learning method was the best at preserving the disease-related semantic features while guaranteeing a high level of privacy.
We emphasize the need to improve privacy-preserving methods for medical imaging.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of Deep Learning in the medical field is hindered by the lack of
interpretability. Case-based interpretability strategies can provide intuitive
explanations for deep learning models' decisions, thus enhancing trust.
However, the resulting explanations threaten patient privacy, motivating the
development of privacy-preserving methods compatible with the specifics of
medical data. In this work, we analyze existing privacy-preserving methods and
their respective capacity to anonymize medical data while preserving
disease-related semantic features. We find that the PPRL-VGAN deep learning
method was the best at preserving the disease-related semantic features while
guaranteeing a high level of privacy among the compared state-of-the-art
methods. Nevertheless, we emphasize the need to improve privacy-preserving
methods for medical imaging, as we identified relevant drawbacks in all
existing privacy-preserving approaches.
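To make the privacy-utility trade-off analyzed in this paper concrete, the minimal PyTorch sketch below trains a toy anonymizer whose loss pushes an identity classifier towards uninformative predictions while keeping a disease classifier accurate. This is a hypothetical illustration, not the PPRL-VGAN method evaluated in the paper; every module, dimension, and loss weight is invented.

```python
# Hypothetical sketch only: a toy "anonymize but keep the pathology" objective.
# It does NOT reproduce PPRL-VGAN; all names, sizes, and weights are invented.
import torch
import torch.nn as nn

class Anonymizer(nn.Module):
    """Toy image-to-image network standing in for a privacy-preserving generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def make_classifier(num_classes):
    """Tiny CNN used as an identity probe or a disease probe."""
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, num_classes),
    )

anonymizer = Anonymizer()
identity_clf = make_classifier(10)   # pretrained in practice; random here
disease_clf = make_classifier(2)     # pretrained in practice; random here
opt = torch.optim.Adam(anonymizer.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.rand(4, 1, 64, 64)             # stand-in batch of medical images
disease_y = torch.randint(0, 2, (4,))    # stand-in disease labels

anon = anonymizer(x)
# Privacy term: drive the identity classifier towards a uniform, uninformative
# output (this term is minimized when every identity is equally likely).
privacy_loss = -identity_clf(anon).log_softmax(dim=1).mean()
# Utility term: keep disease-relevant evidence intact.
utility_loss = ce(disease_clf(anon), disease_y)
(utility_loss + 0.1 * privacy_loss).backward()   # 0.1: arbitrary trade-off weight
opt.step()
```

The 0.1 weight on the privacy term is the knob the paper's comparison effectively studies: raising it strengthens anonymization but risks erasing the disease-related semantic features.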
Related papers
- Differential Privacy-Driven Framework for Enhancing Heart Disease Prediction [7.473832609768354]
Machine learning is critical in healthcare, supporting personalized treatment, early disease detection, predictive analytics, image interpretation, drug discovery, efficient operations, and patient monitoring.
In this paper, we utilize machine learning methodologies, including differential privacy and federated learning, to develop privacy-preserving models.
Our results show that using a federated learning model with differential privacy achieved a test accuracy of 85%, ensuring patient data remained secure and private throughout the process.
arXiv Detail & Related papers (2025-04-25T01:27:40Z)
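The entry above pairs federated learning with differential privacy. As a rough sketch of how the two typically combine (hypothetical parameters throughout, not the authors' code), each client clips its update to bound sensitivity and adds Gaussian noise before the server averages:

```python
# Hypothetical sketch of one federated round with clipped, Gaussian-noised
# client updates; clip norm, sigma, and learning rate are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def dp_client_update(global_w, local_grad, clip_norm=1.0, sigma=0.5, lr=0.1):
    """Clip a client's gradient to bound its sensitivity, then add noise."""
    scale = min(1.0, clip_norm / (np.linalg.norm(local_grad) + 1e-12))
    noisy = local_grad * scale + rng.normal(0.0, sigma * clip_norm, local_grad.shape)
    return global_w - lr * noisy

global_w = np.zeros(5)                                  # stand-in model weights
client_grads = [rng.normal(size=5) for _ in range(3)]   # stand-in local gradients
# Server step (FedAvg): average the privately perturbed client models.
global_w = np.mean([dp_client_update(global_w, g) for g in client_grads], axis=0)
print(global_w)
```

The clip norm bounds how much any one patient's data can move the model, which is what makes the Gaussian noise a meaningful privacy guarantee; an accounting step (omitted here) would translate sigma into an (epsilon, delta) budget.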
- Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs.
Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder.
Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z)
- FedDP: Privacy-preserving method based on federated learning for histopathology image segmentation [2.864354559973703]
This paper addresses the dispersed nature and privacy sensitivity of medical image data by employing a federated learning framework.
The proposed method, FedDP, minimally impacts model accuracy while effectively safeguarding the privacy of cancer pathology image data.
arXiv Detail & Related papers (2024-11-07T08:02:58Z)
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
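A minimal sketch of the masked-DP idea summarized above, assuming a boolean sensitivity mask: noise is injected only where the mask is set, and non-sensitive regions pass through untouched. The Laplace-style noise scale is arbitrary, not calibrated to a formal privacy budget, and this is not the authors' implementation.

```python
# Sketch of the masked-DP idea: perturb only pixels flagged as sensitive.
# The Laplace-style noise scale is arbitrary, not a calibrated privacy budget.
import numpy as np

rng = np.random.default_rng(0)

def masked_noise(image, sensitive_mask, scale=0.2):
    """Add noise inside the mask; leave non-sensitive regions untouched."""
    noise = rng.laplace(0.0, scale, size=image.shape)
    return np.where(sensitive_mask, image + noise, image)

image = rng.random((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                               # hypothetical sensitive region
private = masked_noise(image, mask)
assert np.allclose(private[~mask], image[~mask])    # untouched outside the mask
```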
- In-depth Analysis of Privacy Threats in Federated Learning for Medical Data [2.6986500640871482]
Federated learning is emerging as a promising machine learning technique in the medical field for analyzing medical images.
Recent studies have revealed that the default settings of federated learning may inadvertently expose private training data to privacy attacks.
We make three original contributions to privacy risk analysis and mitigation in federated learning for medical data.
arXiv Detail & Related papers (2024-09-27T16:45:35Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Safe and Interpretable Estimation of Optimal Treatment Regimes [54.257304443780434]
We operationalize a safe and interpretable framework to identify optimal treatment regimes.
Our findings support personalized treatment strategies based on a patient's medical history and pharmacological features.
arXiv Detail & Related papers (2023-10-23T19:59:10Z)
- Vision Through the Veil: Differential Privacy in Federated Learning for Medical Image Classification [15.382184404673389]
The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions.
Privacy-preserving mechanisms are paramount in medical image analysis, where the data is sensitive in nature.
This study addresses the need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification.
arXiv Detail & Related papers (2023-06-30T16:48:58Z)
- Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging [47.99192239793597]
We evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training.
Our study shows that -- under the challenging realistic circumstances of a real-life clinical dataset -- the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
arXiv Detail & Related papers (2023-02-03T09:49:13Z)
- Bridging the Gap: Differentially Private Equivariant Deep Learning for Medical Image Analysis [7.49320945341034]
We propose to use steerable equivariant convolutional networks for medical image analysis with Differential Privacy (DP).
Their improved feature quality and parameter efficiency yield remarkable accuracy gains, narrowing the privacy-utility gap.
arXiv Detail & Related papers (2022-09-09T14:51:13Z)
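Steerable equivariant networks do not fit in a short snippet, but the core idea of rotation equivariance through weight sharing can be sketched with a plain lifting convolution over 90-degree rotations, as below. This simplification omits both the steerable filters and the DP training the paper actually uses.

```python
# Sketch of rotation equivariance via weight sharing: one filter applied to the
# four 90-degree rotations of the input (a plain lifting convolution). This is
# not the steerable architecture from the paper, and there is no DP here.
import torch
import torch.nn.functional as F

def lifted_conv(x, filt):
    """Return a stack of responses, one per 90-degree input rotation."""
    outs = []
    for k in range(4):
        xr = torch.rot90(x, k, dims=(2, 3))            # rotate the input
        yr = F.conv2d(xr, filt, padding=1)             # shared filter weights
        outs.append(torch.rot90(yr, -k, dims=(2, 3)))  # rotate back
    return torch.stack(outs, dim=1)                    # (batch, 4, channels, H, W)

filt = torch.randn(1, 1, 3, 3)          # one shared 3x3 filter
x = torch.randn(1, 1, 16, 16)
y = lifted_conv(x, filt)
# Equivariance check: rotating the input only shifts and rotates the response
# stack instead of changing it arbitrarily.
y_rot = lifted_conv(torch.rot90(x, 1, dims=(2, 3)), filt)
print(torch.allclose(y_rot[:, 0], torch.rot90(y[:, 1], 1, dims=(2, 3))))  # True
```

Because each rotated copy reuses the same filter, the network spends no parameters relearning rotated pathology patterns, which is the parameter efficiency the abstract credits for narrowing the privacy-utility gap.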
- Semantics-Preserved Distortion for Personal Privacy Protection in Information Management [65.08939490413037]
This paper suggests a linguistically grounded approach to distort texts while maintaining semantic integrity.
We present two distinct frameworks for semantic-preserving distortion: a generative approach and a substitutive approach.
We also explore privacy protection in a specific medical information management scenario, showing our method effectively limits sensitive data memorization.
arXiv Detail & Related papers (2022-01-04T04:01:05Z)
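As a toy illustration of the substitutive approach mentioned above, the sketch below swaps sensitive spans for same-category stand-ins so the clinical meaning survives while identifying details disappear. The substitution table is invented and far simpler than the paper's frameworks.

```python
# Toy substitutive distortion: sensitive spans are replaced by same-category
# stand-ins so the clinical meaning survives. The table is invented and far
# simpler than the generative/substitutive frameworks the paper proposes.
import re

SUBSTITUTIONS = {
    r"\bDr\. [A-Z][a-z]+\b": "the attending physician",
    r"\b[A-Z][a-z]+ Hospital\b": "a regional hospital",
    r"\b\d{1,3} years? old\b": "middle-aged",
}

def distort(text: str) -> str:
    """Apply category-preserving substitutions to mask identifying details."""
    for pattern, stand_in in SUBSTITUTIONS.items():
        text = re.sub(pattern, stand_in, text)
    return text

note = "Dr. Smith at Mercy Hospital saw a 67 years old patient with angina."
print(distort(note))
# -> the attending physician at a regional hospital saw a middle-aged patient with angina.
```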
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
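The gradient-based model inversion attack PriMIA is evaluated against can be illustrated with a toy sketch in the spirit of deep leakage from gradients: optimize a dummy input until its gradients match those a client shared. The model, data, and hyperparameters below are invented, the label is assumed known for simplicity, and this is not PriMIA's evaluation code.

```python
# Toy gradient-inversion sketch in the spirit of deep leakage from gradients:
# optimize a dummy input until its gradients match the ones a client shared.
# Model, data, and hyperparameters are invented; the label is assumed known.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 2)            # stand-in for the shared (federated) model
ce = nn.CrossEntropyLoss()

# The victim's gradients on private data, as observed by the attacker.
x_true = torch.randn(1, 16)
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(ce(model(x_true), y_true), model.parameters())

# Attacker: tune a dummy input so its gradients match the observed ones.
x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        ce(model(x_dummy), y_true), model.parameters(), create_graph=True
    )
    sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads)).backward()
    opt.step()

print(torch.dist(x_dummy.detach(), x_true))   # small distance => data leaked
```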
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.