Bridging the Gap: Differentially Private Equivariant Deep Learning for
Medical Image Analysis
- URL: http://arxiv.org/abs/2209.04338v2
- Date: Tue, 20 Jun 2023 16:38:13 GMT
- Title: Bridging the Gap: Differentially Private Equivariant Deep Learning for
Medical Image Analysis
- Authors: Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis
- Abstract summary: We propose to use steerable equivariant convolutional networks for medical image analysis with Differential Privacy (DP).
Their improved feature quality and parameter efficiency yield remarkable accuracy gains, narrowing the privacy-utility gap.
- Score: 7.49320945341034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning with formal privacy-preserving techniques like Differential
Privacy (DP) allows one to derive valuable insights from sensitive medical
imaging data while promising to protect patient privacy, but it usually comes
at a sharp privacy-utility trade-off. In this work, we propose to use steerable
equivariant convolutional networks for medical image analysis with DP. Their
improved feature quality and parameter efficiency yield remarkable accuracy
gains, narrowing the privacy-utility gap.
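The key property of the steerable equivariant networks in the abstract is that rotating the input rotates the feature maps accordingly, rather than producing unrelated activations. A toy illustration of this equivariance for 90° rotations, using a C4-symmetrized kernel on plain Python lists (not the paper's steerable filters; all names here are illustrative), can be sketched as:

```python
# Toy illustration of rotation equivariance, the property steerable CNNs build in.
# This symmetrizes an ordinary kernel over the C4 rotation group; it is NOT the
# paper's steerable-filter construction.

def rot90(m):
    """Rotate a square grid 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*m)][::-1]

def symmetrize(k):
    """Average a square kernel over all four 90-degree rotations (C4 group)."""
    ks = [k]
    for _ in range(3):
        ks.append(rot90(ks[-1]))
    n = len(k)
    return [[sum(km[i][j] for km in ks) / 4.0 for j in range(n)] for i in range(n)]

def correlate(x, k):
    """Valid-mode 2D cross-correlation of square input x with square kernel k."""
    n, m = len(x), len(k)
    out = n - m + 1
    return [[sum(x[i + a][j + b] * k[a][b] for a in range(m) for b in range(m))
             for j in range(out)] for i in range(out)]
```

With a C4-invariant kernel, `correlate(rot90(x), k)` equals `rot90(correlate(x, k))`: rotating the input rotates the feature map. Steerable filters generalize this beyond 90° rotations while using fewer free parameters, which is the source of the parameter efficiency the abstract cites.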
Related papers
- On Differentially Private 3D Medical Image Synthesis with Controllable Latent Diffusion Models [5.966954237899151]
This study addresses challenges for 3D cardiac MRI images in the short-axis view.
We propose Latent Diffusion Models that generate synthetic images conditioned on medical attributes.
We finetune our models with differential privacy on the UK Biobank dataset.
arXiv Detail & Related papers (2024-07-23T11:49:58Z) - OpticalDR: A Deep Optical Imaging Model for Privacy-Protective
Depression Recognition [66.91236298878383]
Depression Recognition (DR) poses a considerable challenge, especially in the context of privacy concerns.
We design a new imaging system that erases the identity information of captured facial images while retaining disease-relevant features.
The transformation is irreversible with respect to identity recovery while preserving the disease-related characteristics necessary for accurate DR.
arXiv Detail & Related papers (2024-02-29T01:20:29Z) - Vision Through the Veil: Differential Privacy in Federated Learning for
Medical Image Classification [15.382184404673389]
The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions.
Privacy-preserving mechanisms are paramount in medical image analysis, where the data is sensitive in nature.
This study addresses that need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification.
arXiv Detail & Related papers (2023-06-30T16:48:58Z) - Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging [47.99192239793597]
We evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training.
Our study shows that -- under the challenging realistic circumstances of a real-life clinical dataset -- the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
arXiv Detail & Related papers (2023-02-03T09:49:13Z) - Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z) - NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z) - Towards Privacy-preserving Explanations in Medical Image Analysis [0.0]
The PPRL-VGAN deep learning method was the best at preserving the disease-related semantic features while guaranteeing a high level of privacy.
We emphasize the need to improve privacy-preserving methods for medical imaging.
arXiv Detail & Related papers (2021-07-20T17:35:36Z) - Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - Benchmarking Differentially Private Residual Networks for Medical
Imagery [2.4469484645516837]
We compare two robust differential privacy mechanisms: Local-DP and DP-SGD.
We evaluate how useful these theoretical privacy guarantees actually prove to be in the real world medical setting.
arXiv Detail & Related papers (2020-05-27T00:29:56Z)
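Several of the papers above (NeuralDP, the residual-network benchmark) measure themselves against DP-SGD. Its core update, per-example gradient clipping followed by calibrated Gaussian noise, can be sketched as follows; parameter names and default values here are illustrative, not taken from any of the listed papers:

```python
# Minimal sketch of one DP-SGD update (per-example clipping + Gaussian noise),
# the baseline mechanism several papers in this list benchmark against.
import math
import random

def clip_grad(grad, clip_norm):
    """Scale a per-example gradient so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Clip each example's gradient, average, and add calibrated Gaussian noise."""
    rng = random.Random(seed)
    clipped = [clip_grad(g, clip_norm) for g in per_sample_grads]
    n = len(per_sample_grads)
    mean = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_multiplier * clip_norm / n
    return [m + rng.gauss(0.0, sigma) for m in mean]
```

Clipping bounds each example's influence on the update, which is what lets the added noise translate into a formal (epsilon, delta)-DP guarantee; the noise is also the source of the privacy-utility trade-off the headline paper aims to narrow.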
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.