Membership Inference Attacks on Deep Regression Models for Neuroimaging
- URL: http://arxiv.org/abs/2105.02866v1
- Date: Thu, 6 May 2021 17:51:06 GMT
- Title: Membership Inference Attacks on Deep Regression Models for Neuroimaging
- Authors: Umang Gupta, Dimitris Stripelis, Pradeep K. Lam, Paul M. Thompson,
José Luis Ambite, Greg Ver Steeg
- Abstract summary: We show realistic Membership Inference attacks on deep learning models trained for 3D neuroimaging tasks in a centralized as well as decentralized setup.
We correctly identified whether an MRI scan was used in model training with a 60% to over 80% success rate depending on model complexity and security assumptions.
- Score: 15.591129844038269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring the privacy of research participants is vital, even more so in
healthcare environments. Deep learning approaches to neuroimaging require large
datasets, and this often necessitates sharing data between multiple sites,
which is antithetical to the privacy objectives. Federated learning is a
commonly proposed solution to this problem. It circumvents the need for data
sharing by sharing parameters during the training process. However, we
demonstrate that allowing access to parameters may leak private information
even if data is never directly shared. In particular, we show that it is
possible to infer if a sample was used to train the model given only access to
the model prediction (black-box) or access to the model itself (white-box) and
some leaked samples from the training data distribution. Such attacks are
commonly referred to as Membership Inference attacks. We show realistic
Membership Inference attacks on deep learning models trained for 3D
neuroimaging tasks in a centralized as well as decentralized setup. We
demonstrate feasible attacks on brain age prediction models (deep learning
models that predict a person's age from their brain MRI scan). We correctly
identified whether an MRI scan was used in model training with a 60% to over
80% success rate depending on model complexity and security assumptions.
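A black-box attack of this kind can be sketched in a few lines: the attacker compares the model's predicted age against the known chronological age and exploits the fact that training-set members tend to have smaller prediction error. The sketch below is an illustrative assumption, not the authors' exact attack pipeline; the data arrays and the logistic-regression attack model are synthetic placeholders.

```python
# Hedged sketch of a black-box membership inference attack on a regression
# model such as brain-age prediction. Not the paper's exact pipeline: it only
# illustrates the core signal, that training-set members tend to have lower
# prediction error than non-members. All arrays are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical black-box outputs on leaked samples with known membership.
# Members (first 100) are given smaller residuals than non-members.
true_age = rng.uniform(20, 80, size=200)
membership = (np.arange(200) < 100).astype(int)
pred_age = true_age + np.where(membership == 1,
                               rng.normal(0.0, 2.0, 200),   # members: small error
                               rng.normal(0.0, 6.0, 200))   # non-members: larger error

# Attack model: classify membership from the per-sample absolute error.
error = np.abs(true_age - pred_age).reshape(-1, 1)
attacker = LogisticRegression().fit(error, membership)

# For a new MRI scan, the attacker scores how likely it was used in training
# from the gap between the target model's prediction and the true age.
new_error = np.array([[1.5]])                   # hypothetical |true age - predicted age|
print(attacker.predict_proba(new_error)[0, 1])  # estimated P(member)
```

A white-box attacker can extend the same recipe with internal activations or per-sample gradients as extra features, which is generally what makes white-box attacks stronger than black-box ones.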
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Security and Privacy Challenges in Deep Learning Models [0.0]
Deep learning models can be subjected to various attacks that compromise model security and data privacy.
Model Extraction Attacks, Model Inversion attacks, and Adversarial attacks are discussed.
Data Poisoning Attacks add harmful data to the training set, disrupting the learning process and reducing the reliability of the deep learning model.
arXiv Detail & Related papers (2023-11-23T00:26:14Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
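Read loosely, the density-based score compares how much probability mass the generative model puts on a candidate record against a reference estimate of the real data distribution; a record the generator over-represents is flagged as a likely training member. The sketch below uses kernel density estimates and a unit-ratio threshold as stand-ins; these are assumptions, not DOMIAS's exact estimator or calibration.

```python
# Rough sketch of a density-ratio membership score in the spirit of DOMIAS:
# points that the generative model over-represents relative to a reference
# sample of real data get a higher membership score. The KDE estimator,
# bandwidth, and threshold are assumptions for illustration only.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(500, 2))   # samples drawn from the generative model
reference = rng.normal(0.0, 1.2, size=(500, 2))   # real data known not to be members
queries = rng.normal(0.0, 1.0, size=(10, 2))      # candidate records to test

kde_syn = KernelDensity(bandwidth=0.3).fit(synthetic)
kde_ref = KernelDensity(bandwidth=0.3).fit(reference)

# Membership score: log p_synthetic(x) - log p_reference(x);
# positive values mean the generator over-represents x.
score = kde_syn.score_samples(queries) - kde_ref.score_samples(queries)
print(score > 0.0)
```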
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, we are given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Does CLIP Know My Face? [31.21910897081894]
We introduce a novel method to assess privacy for multi-modal models, specifically vision-language models like CLIP.
The proposed Identity Inference Attack (IDIA) reveals whether an individual was included in the training data by querying the model with images of the same person.
Our results highlight the need for stronger privacy protection in large-scale models and suggest that IDIAs can be used to prove the unauthorized use of data for training and to enforce privacy laws.
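The querying step can be sketched as zero-shot name classification with CLIP: show the model several images of one person and let it pick that person's name from a list of candidates; a hit rate well above chance suggests the person appeared in the training data. The checkpoint, names, image paths, and decision threshold below are illustrative assumptions, not the paper's exact IDIA protocol.

```python
# Sketch of an identity-inference query against CLIP (assumed setup, not the
# paper's exact protocol): if CLIP keeps matching images of a person to that
# person's name far more often than chance, the person was plausibly present
# in the training data. Names, image paths, and threshold are hypothetical.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidate_names = ["Alice Example", "Bob Example", "Carol Example"]  # hypothetical
prompts = [f"a photo of {name}" for name in candidate_names]
image_paths = ["person_01.jpg", "person_02.jpg"]                     # hypothetical
true_name_idx = 0                                                    # index of the person's real name

hits = 0
for path in image_paths:
    image = Image.open(path)
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image   # shape (1, num_candidate_names)
    hits += int(logits.argmax(dim=-1).item() == true_name_idx)

# Flag likely membership if the true name wins on most images (threshold is a guess).
print("likely in training data:", hits / len(image_paths) > 0.5)
```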
arXiv Detail & Related papers (2022-09-15T14:48:50Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that a student model with no prior training, trained only on the teachers' outputs, reaches F1-scores comparable to the teacher's.
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
- Practical No-box Adversarial Attacks against DNNs [31.808770437120536]
We investigate no-box adversarial examples, where the attacker can access neither the model information nor the training set, and cannot query the model.
We propose three mechanisms for training with a very small dataset and find that prototypical reconstruction is the most effective.
Our approach significantly diminishes the average prediction accuracy of the system to only 15.40%, which is on par with the attack that transfers adversarial examples from a pre-trained Arcface model.
arXiv Detail & Related papers (2020-12-04T11:10:03Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)