Evaluation of Inference Attack Models for Deep Learning on Medical Data
- URL: http://arxiv.org/abs/2011.00177v1
- Date: Sat, 31 Oct 2020 03:18:36 GMT
- Title: Evaluation of Inference Attack Models for Deep Learning on Medical Data
- Authors: Maoqiang Wu, Xinyue Zhang, Jiahao Ding, Hien Nguyen, Rong Yu, Miao
Pan, Stephen T. Wong
- Abstract summary: Recently developed inference attack algorithms indicate that images and text records can be reconstructed by malicious parties.
This gives rise to the concern that medical images and electronic health records containing sensitive patient information are vulnerable to these attacks.
This paper aims to attract interest from researchers in the medical deep learning community to this important problem.
- Score: 16.128164765752032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has attracted broad interest in healthcare and medical
communities. However, there has been little research into the privacy issues
created by deep networks trained for medical applications. Recently developed
inference attack algorithms indicate that images and text records can be
reconstructed by malicious parties that have the ability to query deep
networks. This gives rise to the concern that medical images and electronic
health records containing sensitive patient information are vulnerable to these
attacks. This paper aims to attract interest from researchers in the medical
deep learning community to this important problem. We evaluate two prominent
inference attack models, namely, attribute inference attack and model inversion
attack. We show that they can reconstruct real-world medical images and
clinical reports with high fidelity. We then investigate how to protect
patients' privacy using defense mechanisms, such as label perturbation and
model perturbation. We provide a comparison of attack results between the
original and the medical deep learning models with defenses. The experimental
evaluations show that our proposed defense approaches can effectively reduce
the potential privacy leakage of medical deep learning from the inference
attacks.
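The abstract names label perturbation as one of the evaluated defenses. As a rough illustration of the general idea (not the paper's exact method), a minimal sketch of random label flipping before training, assuming NumPy and a hypothetical `flip_prob` noise rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_labels(labels, num_classes, flip_prob=0.1, rng=rng):
    """Randomly flip a fraction of training labels to a different class.

    Label perturbation trades a little accuracy for privacy: a model
    trained on noisy labels yields outputs that are less faithful to the
    true training records, degrading inversion/inference fidelity.
    """
    labels = np.asarray(labels).copy()
    flip = rng.random(labels.shape[0]) < flip_prob
    # Draw a nonzero class offset so every selected label truly changes.
    offsets = rng.integers(1, num_classes, size=flip.sum())
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels

y = np.array([0, 1, 2, 1, 0, 2, 1, 0])
y_noisy = perturb_labels(y, num_classes=3, flip_prob=0.25)
```

Model perturbation defenses work analogously but inject noise into the model's parameters or outputs rather than the labels.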
Related papers
- Remembering Everything Makes You Vulnerable: A Limelight on Machine Unlearning for Personalized Healthcare Sector [0.873811641236639]
This thesis aims to address the vulnerability of personalized healthcare models, particularly in the context of ECG monitoring.
We propose an approach termed "Machine Unlearning" to mitigate the impact of exposed data points on machine learning models.
arXiv Detail & Related papers (2024-07-05T15:38:36Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - How Deep Learning Sees the World: A Survey on Adversarial Attacks &
Defenses [0.0]
This paper compiles the most recent adversarial attacks, grouped by the attacker capacity, and modern defenses clustered by protection strategies.
We also present the new advances regarding Vision Transformers, summarize the datasets and metrics used in the context of adversarial settings, and compare the state-of-the-art results under different attacks, finishing with the identification of open issues.
arXiv Detail & Related papers (2023-05-18T10:33:28Z) - Adversarial Attack and Defense for Medical Image Analysis: Methods and
Applications [57.206139366029646]
We present a comprehensive survey on advances in adversarial attack and defense for medical image analysis.
We provide a unified theoretical framework for different types of adversarial attack and defense methods for medical image analysis.
For a fair comparison, we establish a new benchmark for adversarially robust medical diagnosis models.
arXiv Detail & Related papers (2023-03-24T16:38:58Z) - How Does a Deep Learning Model Architecture Impact Its Privacy? A
Comprehensive Study of Privacy Attacks on CNNs and Transformers [18.27174440444256]
Privacy concerns arise due to the potential leakage of sensitive information from the training data.
Recent research has revealed that deep learning models are vulnerable to various privacy attacks.
arXiv Detail & Related papers (2022-10-20T06:44:37Z) - Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z) - Homomorphic Encryption and Federated Learning based Privacy-Preserving
CNN Training: COVID-19 Detection Use-Case [0.41998444721319217]
This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption.
The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from adversaries.
arXiv Detail & Related papers (2022-04-16T08:38:35Z) - Privacy-aware Early Detection of COVID-19 through Adversarial Training [8.722475705906206]
Early detection of COVID-19 is an ongoing area of research that can help with triage, monitoring and general health assessment of potential patients.
Different machine learning techniques have been used in the literature to detect coronavirus using routine clinical data.
Data breaches and information leakage when using these models can bring reputational damage and cause legal issues for hospitals.
arXiv Detail & Related papers (2022-01-09T13:08:11Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create the NeuralNews dataset, composed of four different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
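Several entries above (the ML-Doctor risk assessment in particular) evaluate membership inference, where an adversary guesses whether a given record was in a model's training set. A minimal sketch of the common confidence-threshold baseline, assuming black-box access to the model's top predicted probability; the threshold and toy scores are illustrative only:

```python
import numpy as np

def membership_guess(confidences, threshold=0.9):
    """Baseline membership inference: models tend to be more confident
    on training members than on unseen records, so guess 'member'
    whenever the top predicted probability exceeds a threshold
    (in practice calibrated on shadow-model data)."""
    confidences = np.asarray(confidences)
    return confidences > threshold

# Toy top-class probabilities: the first four from training members,
# the last four from held-out non-members (illustrative values).
scores = np.array([0.99, 0.97, 0.95, 0.92, 0.85, 0.60, 0.55, 0.40])
guesses = membership_guess(scores, threshold=0.9)
```

Stronger attacks in the literature replace the fixed threshold with a learned attack classifier, but the confidence gap between members and non-members is the signal in both cases.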
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.