Benchmarking Differentially Private Residual Networks for Medical
Imagery
- URL: http://arxiv.org/abs/2005.13099v5
- Date: Sat, 5 Sep 2020 02:25:06 GMT
- Authors: Sahib Singh, Harshvardhan Sikka, Sasikanth Kotti, Andrew Trask
- Abstract summary: We compare two robust differential privacy mechanisms: Local-DP and DP-SGD.
We evaluate how useful these theoretical privacy guarantees actually prove to be in the real world medical setting.
- Score: 2.4469484645516837
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we measure the effectiveness of $\epsilon$-Differential Privacy
(DP) when applied to medical imaging. We compare two robust differential
privacy mechanisms, Local-DP and DP-SGD, and benchmark their performance when
analyzing medical imagery records. We analyze the trade-off between the model's
accuracy and the level of privacy it guarantees, and also take a closer look to
evaluate how useful these theoretical privacy guarantees actually prove to be
in the real world medical setting.
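The two mechanisms compared in the abstract operate at different points in the pipeline: Local-DP perturbs each record before it leaves the data owner, while DP-SGD perturbs the training procedure itself by clipping per-sample gradients and adding calibrated Gaussian noise. The paper does not give implementation details, so the following NumPy sketch is purely illustrative; the function names, parameters, and the use of pixel-level Laplace noise for Local-DP are assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_dp_laplace(image, epsilon, sensitivity=1.0):
    """Local-DP sketch: perturb the record itself before release.
    Adds Laplace noise with scale sensitivity/epsilon to every pixel."""
    scale = sensitivity / epsilon
    return image + rng.laplace(0.0, scale, size=image.shape)

def dp_sgd_step(params, per_sample_grads, lr, clip_norm, noise_multiplier):
    """DP-SGD sketch: clip each per-sample gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clipping bound."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_sample_grads),
        size=mean_grad.shape,
    )
    return params - lr * (mean_grad + noise)

# Toy usage: a 4x4 "image" for Local-DP, two per-sample gradients for DP-SGD.
img = np.zeros((4, 4))
noisy = local_dp_laplace(img, epsilon=1.0)

w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 4.0, 0.0])]
# noise_multiplier=0 makes the step deterministic, isolating the clipping effect
w_new = dp_sgd_step(w, grads, lr=0.1, clip_norm=1.0, noise_multiplier=0.0)
```

Note the trade-off the paper studies: larger noise (smaller epsilon, larger noise_multiplier) strengthens the privacy guarantee but degrades the gradient signal and hence model accuracy.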
Related papers
- On Differentially Private 3D Medical Image Synthesis with Controllable Latent Diffusion Models [5.966954237899151]
This study addresses challenges for 3D cardiac MRI images in the short-axis view.
We propose Latent Diffusion Models that generate synthetic images conditioned on medical attributes.
We finetune our models with differential privacy on the UK Biobank dataset.
arXiv Detail & Related papers (2024-07-23T11:49:58Z)
- OpticalDR: A Deep Optical Imaging Model for Privacy-Protective Depression Recognition [66.91236298878383]
Depression Recognition (DR) poses a considerable challenge, especially in the context of privacy concerns.
We design a new imaging system that erases the identity information of captured facial images while retaining disease-relevant features.
The transformation is irreversible with respect to identity recovery while preserving the disease-related characteristics necessary for accurate DR.
arXiv Detail & Related papers (2024-02-29T01:20:29Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging [47.99192239793597]
We evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training.
Our study shows that -- under the challenging realistic circumstances of a real-life clinical dataset -- the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
arXiv Detail & Related papers (2023-02-03T09:49:13Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Bridging the Gap: Differentially Private Equivariant Deep Learning for Medical Image Analysis [7.49320945341034]
We propose using steerable equivariant convolutional networks for medical image analysis with Differential Privacy (DP).
Their improved feature quality and parameter efficiency yield remarkable accuracy gains, narrowing the privacy-utility gap.
arXiv Detail & Related papers (2022-09-09T14:51:13Z)
- DP-Image: Differential Privacy for Image Data in Feature Space [23.593790091283225]
We introduce a novel notion of image-aware differential privacy, referred to as DP-image, that can protect users' personal information in images.
Our results show that the proposed DP-Image method provides excellent DP protection on images, with a controllable distortion to faces.
arXiv Detail & Related papers (2021-03-12T04:02:23Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use, at train time, multiple views of the same information that might not always be available at test time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.