Exploring adversarial attacks in federated learning for medical imaging
- URL: http://arxiv.org/abs/2310.06227v1
- Date: Tue, 10 Oct 2023 00:39:58 GMT
- Title: Exploring adversarial attacks in federated learning for medical imaging
- Authors: Erfan Darzi, Florian Dubost, N.M. Sijtsema, P.M.A. van Ooijen
- Abstract summary: Federated learning offers a privacy-preserving framework for medical image analysis but exposes the system to adversarial attacks.
This paper aims to evaluate the vulnerabilities of federated learning networks in medical image analysis against such attacks.
- Score: 1.604444445227806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning offers a privacy-preserving framework for medical image
analysis but exposes the system to adversarial attacks. This paper aims to
evaluate the vulnerabilities of federated learning networks in medical image
analysis against such attacks. Employing domain-specific MRI tumor and
pathology imaging datasets, we assess the effectiveness of known threat
scenarios in a federated learning environment. Our tests reveal that
domain-specific configurations can increase the attacker's success rate
significantly. The findings emphasize the urgent need for effective defense
mechanisms and suggest a critical re-evaluation of current security protocols
in federated medical image analysis systems.
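As an illustration of the kind of threat the abstract describes, the sketch below runs a one-step signed-gradient (FGSM-style) perturbation against a toy logistic model standing in for a medical image classifier. This is an illustrative assumption, not the authors' code: the paper evaluates attacks on real federated MRI/pathology models, while here the "image" is a random 16-dimensional vector.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step: perturb x in the sign of the loss gradient."""
    # Logistic model p = sigmoid(w.x + b); binary cross-entropy gradient wrt x
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w  # dL/dx for BCE with a logistic output
    # Signed-gradient step, clipped back to the valid pixel range [0, 1]
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1
x = rng.uniform(size=16)  # stand-in for a flattened image patch
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, eps=0.03)
```

The per-pixel perturbation is bounded by `eps`, which is what makes such attacks hard to spot visually while still flipping predictions.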
Related papers
- DFT-Based Adversarial Attack Detection in MRI Brain Imaging: Enhancing Diagnostic Accuracy in Alzheimer's Case Studies [0.5249805590164902]
Adversarial attacks on medical images can result in misclassifications in disease diagnosis, potentially leading to severe consequences.
In this study, we investigate adversarial attacks on images associated with Alzheimer's disease and propose a defensive method to counteract these attacks.
Our approach utilizes a convolutional neural network (CNN)-based autoencoder architecture in conjunction with the two-dimensional Fourier transform of images for detection purposes.
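The frequency-domain idea behind this entry can be sketched as follows: adversarial or noise perturbations tend to raise high-frequency spectral energy, which a detector can exploit. This is a minimal assumption-laden sketch (the paper's CNN autoencoder is omitted; `fft_feature` and `high_freq_energy` are hypothetical helper names):

```python
import numpy as np

def fft_feature(img):
    """Centered log-magnitude spectrum, a typical frequency-domain detector input."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spec))

def high_freq_energy(feat, cutoff=8):
    """Spectral energy outside a low-frequency square at the spectrum center."""
    h, w = feat.shape
    mask = np.ones_like(feat, dtype=bool)
    mask[h // 2 - cutoff : h // 2 + cutoff, w // 2 - cutoff : w // 2 + cutoff] = False
    return float(feat[mask].sum())

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
clean = (xx + yy) / 64.0                         # smooth, low-frequency "image"
perturbed = clean + 0.05 * rng.normal(size=clean.shape)
```

Comparing `high_freq_energy(fft_feature(clean))` against the perturbed version shows the gap a learned detector would feed on.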
arXiv Detail & Related papers (2024-08-16T02:18:23Z) - MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z) - The Hidden Adversarial Vulnerabilities of Medical Federated Learning [1.604444445227806]
Using gradient information from prior global model updates, adversaries can enhance the efficiency and transferability of their attacks.
Our findings underscore the need to revisit our understanding of AI security in federated healthcare settings.
arXiv Detail & Related papers (2023-10-21T02:21:39Z) - Fed-Safe: Securing Federated Learning in Healthcare Against Adversarial Attacks [1.2277343096128712]
This paper explores the security aspects of federated learning applications in medical image analysis.
We show that incorporating distributed noise, grounded in the privacy guarantees of federated settings, enables the development of an adversarially robust model.
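The "distributed noise" idea can be illustrated with a FedAvg-style aggregation where each client update receives calibrated Gaussian noise before averaging, as in differential-privacy-inspired schemes. A minimal sketch under that assumption (the paper's actual noise calibration and update clipping are not reproduced here):

```python
import numpy as np

def aggregate_with_noise(updates, sigma, rng):
    """Average client updates after adding per-client Gaussian noise (DP-style)."""
    # In practice updates are usually norm-clipped before noising; omitted here.
    noisy = [u + rng.normal(scale=sigma, size=u.shape) for u in updates]
    return np.mean(noisy, axis=0)

rng = np.random.default_rng(0)
updates = [np.full(10, float(i)) for i in range(5)]  # toy updates from 5 clients
agg = aggregate_with_noise(updates, sigma=0.01, rng=rng)
```

Averaging over clients shrinks the injected noise (its standard deviation falls as sigma divided by the square root of the client count), which is why utility can be preserved while individual updates stay obscured.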
arXiv Detail & Related papers (2023-10-12T19:33:53Z) - Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges [64.63744409431001]
We present a comprehensive survey on advances in adversarial attacks and defenses for medical image analysis.
For a fair comparison, we establish a new benchmark for adversarially robust medical diagnosis models.
arXiv Detail & Related papers (2023-03-24T16:38:58Z) - A Trustworthy Framework for Medical Image Analysis with Deep Learning [71.48204494889505]
TRUDLMIA is a trustworthy deep learning framework for medical image analysis.
It is anticipated that the framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises including COVID-19.
arXiv Detail & Related papers (2022-12-06T05:30:22Z) - Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for COVID-19 Screening With Chest Radiography [70.37371604119826]
Building AI models with trustworthiness is important especially in regulated areas such as healthcare.
Previous work uses convolutional neural networks as the backbone architecture, which have been shown to be prone to over-caution and overconfidence in making decisions.
We propose a feature learning approach using Vision Transformers, which use an attention-based mechanism.
arXiv Detail & Related papers (2022-07-19T14:55:42Z) - An Interpretable Multiple-Instance Approach for the Detection of Referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
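The attention-based combination of patch information described above can be sketched as a softmax-weighted pooling of patch embeddings into a single bag representation. This is an illustrative stand-in, not the paper's model; the scoring vector `w` would normally be learned, and here it is random:

```python
import numpy as np

def attention_pool(patch_feats, w):
    """Weight patch embeddings by softmax attention and sum into a bag vector."""
    scores = patch_feats @ w                  # one scalar score per patch
    a = np.exp(scores - scores.max())         # numerically stable softmax
    a = a / a.sum()                           # attention weights sum to 1
    return a @ patch_feats, a

rng = np.random.default_rng(0)
patches = rng.normal(size=(50, 32))           # 50 patch embeddings of dim 32
w = rng.normal(size=32)                       # hypothetical learned scoring vector
bag_vec, attn = attention_pool(patches, w)
```

The attention weights `attn` also give the interpretability the title refers to: high-weight patches indicate which image regions drove the prediction.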
arXiv Detail & Related papers (2021-03-02T13:14:15Z) - Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
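To see why gradient-based model inversion is worth evaluating, consider the simplest leaky case: for a linear layer trained with MSE loss, every nonzero row of the weight gradient is exactly proportional to the private input. This sketch demonstrates that leakage principle on a toy linear model; PriMIA's evaluation targets real deep models, where recovery requires iterative optimization rather than this closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=8)        # private input (e.g. a flattened image patch)
t = rng.uniform(size=4)        # its target/label
W = rng.normal(size=(4, 8))    # linear model weights shared in federated training

# Gradient of L = ||Wx - t||^2 with respect to W, as leaked in an FL update
err = W @ x - t
grad_W = np.outer(2.0 * err, x)

# Attacker: each nonzero row of grad_W equals 2*err_i * x, so the input
# direction is recovered exactly (up to sign/scale) from one leaked gradient
row = grad_W[int(np.argmax(np.abs(err)))]
x_hat = row / np.linalg.norm(row)
cos = abs(float(x_hat @ (x / np.linalg.norm(x))))
```

The cosine similarity between the reconstruction and the private input is 1 here, which is precisely the risk that secure aggregation and noise-based defenses aim to remove.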
arXiv Detail & Related papers (2020-12-10T13:56:00Z) - Evaluation of Inference Attack Models for Deep Learning on Medical Data [16.128164765752032]
Recently developed inference attack algorithms indicate that images and text records can be reconstructed by malicious parties.
This gives rise to the concern that medical images and electronic health records containing sensitive patient information are vulnerable to these attacks.
This paper aims to attract interest from researchers in the medical deep learning community to this important problem.
arXiv Detail & Related papers (2020-10-31T03:18:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.