Evaluation of Inference Attack Models for Deep Learning on Medical Data
- URL: http://arxiv.org/abs/2011.00177v1
- Date: Sat, 31 Oct 2020 03:18:36 GMT
- Title: Evaluation of Inference Attack Models for Deep Learning on Medical Data
- Authors: Maoqiang Wu, Xinyue Zhang, Jiahao Ding, Hien Nguyen, Rong Yu, Miao
Pan, Stephen T. Wong
- Abstract summary: Recently developed inference attack algorithms indicate that images and text records can be reconstructed by malicious parties.
This gives rise to the concern that medical images and electronic health records containing sensitive patient information are vulnerable to these attacks.
This paper aims to attract interest from researchers in the medical deep learning community to this important problem.
- Score: 16.128164765752032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has attracted broad interest in healthcare and medical
communities. However, there has been little research into the privacy issues
created by deep networks trained for medical applications. Recently developed
inference attack algorithms indicate that images and text records can be
reconstructed by malicious parties that have the ability to query deep
networks. This gives rise to the concern that medical images and electronic
health records containing sensitive patient information are vulnerable to these
attacks. This paper aims to attract interest from researchers in the medical
deep learning community to this important problem. We evaluate two prominent
inference attack models, namely, attribute inference attack and model inversion
attack. We show that they can reconstruct real-world medical images and
clinical reports with high fidelity. We then investigate how to protect
patients' privacy using defense mechanisms, such as label perturbation and
model perturbation. We compare attack results between the original medical
deep learning models and the models equipped with defenses. The experimental
evaluations show that our proposed defense approaches can effectively reduce
the potential privacy leakage of medical deep learning from the inference
attacks.
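To make the threat concrete, the sketch below illustrates a generic model-inversion attack of the kind evaluated in the paper: starting from noise, gradient ascent searches for an input that maximizes the classifier's confidence in a target class. The toy network, image size, and hyperparameters are illustrative assumptions rather than the paper's actual models or settings, and white-box gradient access is assumed.

```python
# Minimal sketch of a model-inversion attack, assuming white-box gradient
# access to a trained classifier. The network, image size, and hyperparameters
# are illustrative placeholders, not the setup used in the paper.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Stand-in for a trained medical-image classifier (hypothetical)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, num_classes),
        )
    def forward(self, x):
        return self.net(x)

def invert(model, target_class, shape=(1, 1, 64, 64), steps=500, lr=0.1):
    """Gradient-ascent reconstruction: find an input that maximizes the
    target-class score, starting from random noise."""
    model.eval()
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Maximize target-class confidence; a small L2 prior keeps pixels bounded.
        loss = -logits[0, target_class] + 1e-3 * x.pow(2).sum()
        loss.backward()
        opt.step()
    return x.detach().clamp(0, 1)

model = ToyClassifier()          # in practice: the victim model under attack
reconstruction = invert(model, target_class=1)
print(reconstruction.shape)      # torch.Size([1, 1, 64, 64])
```

The label-perturbation and model-perturbation defenses studied in the paper act on this loop indirectly, for example by noising the scores the attacker can observe or the parameters that produce them, which degrades the gradient signal the reconstruction relies on.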
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private training data.
Despite their significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attack and defense.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - In-depth Analysis of Privacy Threats in Federated Learning for Medical Data [2.6986500640871482]
Federated learning is emerging as a promising machine learning technique in the medical field for analyzing medical images.
Recent studies have revealed that the default settings of federated learning may inadvertently expose private training data to privacy attacks.
We make three original contributions to privacy risk analysis and mitigation in federated learning for medical data.
arXiv Detail & Related papers (2024-09-27T16:45:35Z) - DFT-Based Adversarial Attack Detection in MRI Brain Imaging: Enhancing Diagnostic Accuracy in Alzheimer's Case Studies [0.5249805590164902]
Adversarial attacks on medical images can result in misclassifications in disease diagnosis, potentially leading to severe consequences.
In this study, we investigate adversarial attacks on images associated with Alzheimer's disease and propose a defensive method to counteract these attacks.
Our approach utilizes a convolutional neural network (CNN)-based autoencoder architecture in conjunction with the two-dimensional Fourier transform of images for detection purposes.
arXiv Detail & Related papers (2024-08-16T02:18:23Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - How Deep Learning Sees the World: A Survey on Adversarial Attacks &
Defenses [0.0]
This paper compiles the most recent adversarial attacks, grouped by attacker capability, and modern defenses clustered by protection strategy.
We also present the new advances regarding Vision Transformers, summarize the datasets and metrics used in the context of adversarial settings, and compare the state-of-the-art results under different attacks, finishing with the identification of open issues.
arXiv Detail & Related papers (2023-05-18T10:33:28Z) - Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges [64.63744409431001]
We present a comprehensive survey on advances in adversarial attacks and defenses for medical image analysis.
For a fair comparison, we establish a new benchmark for adversarially robust medical diagnosis models.
arXiv Detail & Related papers (2023-03-24T16:38:58Z) - Homomorphic Encryption and Federated Learning based Privacy-Preserving
CNN Training: COVID-19 Detection Use-Case [0.41998444721319217]
This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption.
The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from adversaries.
arXiv Detail & Related papers (2022-04-16T08:38:35Z) - Privacy-aware Early Detection of COVID-19 through Adversarial Training [8.722475705906206]
Early detection of COVID-19 is an ongoing area of research that can help with triage, monitoring and general health assessment of potential patients.
Different machine learning techniques have been used in the literature to detect coronavirus using routine clinical data.
Data breaches and information leakage when using these models can bring reputational damage and cause legal issues for hospitals.
arXiv Detail & Related papers (2022-01-09T13:08:11Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks: membership inference, model inversion, attribute inference, and model stealing (a minimal membership-inference sketch follows the related-papers list below).
Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
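As a companion to the ML-Doctor entry above, the following is a minimal, generic membership-inference baseline based on a per-sample loss threshold. It is an illustrative sketch, not the ML-Doctor tool's implementation; the victim model, input batch, and threshold calibration are assumptions.

```python
# Illustrative loss-threshold membership-inference baseline (a generic
# technique, not the ML-Doctor implementation): samples whose loss under the
# target model falls below a threshold are guessed to be training members.
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_guess(model, x, y, threshold):
    """Return True where the per-sample cross-entropy is below `threshold`,
    i.e. the sample is predicted to have been in the training set."""
    model.eval()
    losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold

# Usage sketch: the threshold is typically calibrated on shadow models or on
# the target model's average training loss (both assumptions here).
# members = membership_guess(victim_model, x_batch, y_batch, threshold=0.5)
```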