The Hidden Adversarial Vulnerabilities of Medical Federated Learning
- URL: http://arxiv.org/abs/2310.13893v1
- Date: Sat, 21 Oct 2023 02:21:39 GMT
- Title: The Hidden Adversarial Vulnerabilities of Medical Federated Learning
- Authors: Erfan Darzi, Florian Dubost, Nanna M. Sijtsema, P.M.A. van Ooijen
- Abstract summary: Using gradient information from prior global model updates, adversaries can enhance the efficiency and transferability of their attacks.
Our findings underscore the need to revisit our understanding of AI security in federated healthcare settings.
- Score: 1.604444445227806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we delve into the susceptibility of federated medical image
analysis systems to adversarial attacks. Our analysis uncovers a novel
exploitation avenue: using gradient information from prior global model
updates, adversaries can enhance the efficiency and transferability of their
attacks. Specifically, we demonstrate that single-step attacks (e.g., FGSM),
when aptly initialized, can match or exceed the effectiveness of their iterative
counterparts while requiring less computation. Our findings underscore the
need to revisit our understanding of AI security in federated healthcare
settings.
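The key mechanism described above, initializing a single-step attack with gradient information carried over from earlier global rounds, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration rather than the authors' implementation; the function name `warm_started_fgsm`, the `init_scale` knob, and the use of the previous round's global model as the source of prior gradient information are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code): a warm-started single-step attack in a
# federated setting. Assumption: the attacker holds the current and previous
# global models and uses the loss gradient under the previous model as a stand-in
# for "gradient information from prior global model updates".
import torch
import torch.nn.functional as F

def warm_started_fgsm(model, prev_model, x, y, eps=8/255, init_scale=0.5):
    """Single-step attack initialized from prior-round gradient information.

    model:      current global model (torch.nn.Module classifier)
    prev_model: previous round's global model (same architecture)
    x, y:       clean images in [0, 1] and their labels
    eps:        L-infinity perturbation budget
    init_scale: fraction of the budget spent on the warm start (hypothetical knob)
    """
    # Surrogate prior direction: input-space loss gradient under the previous model.
    x0 = x.clone().detach().requires_grad_(True)
    loss_prev = F.cross_entropy(prev_model(x0), y)
    g_prev = torch.autograd.grad(loss_prev, x0)[0]

    # Warm start: move part of the budget along the prior direction.
    x_init = (x + init_scale * eps * g_prev.sign()).clamp(0, 1).detach()

    # Standard single FGSM step from the warm-started point on the current model.
    x_adv = x_init.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    g = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv + eps * g.sign()

    # Project back into the eps-ball around the clean input and the valid pixel range.
    return torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1).detach()
```

In this reading, the warm start spends part of the perturbation budget along a direction the attacker already knows from the previous round, so the single gradient step on the current model starts closer to an adversarial region, which is one way a well-initialized one-step attack could rival an iterative one at lower cost.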
Related papers
- Securing the Diagnosis of Medical Imaging: An In-depth Analysis of AI-Resistant Attacks [0.0]
It is well known that attackers can cause misclassification by deliberately crafting inputs for machine learning classifiers.
Recent arguments have suggested that adversarial attacks could be made against medical image analysis technologies.
It is essential to assess how robust medical DNNs are against adversarial attacks.
arXiv Detail & Related papers (2024-08-01T07:37:27Z) - Exploring adversarial attacks in federated learning for medical imaging [1.604444445227806]
Federated learning offers a privacy-preserving framework for medical image analysis but exposes the system to adversarial attacks.
This paper aims to evaluate the vulnerabilities of federated learning networks in medical image analysis against such attacks.
arXiv Detail & Related papers (2023-10-10T00:39:58Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from the client side.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - On enhancing the robustness of Vision Transformers: Defensive Diffusion [0.0]
ViTs, the state-of-the-art vision models, rely on large amounts of patient data for training.
Adversaries may exploit vulnerabilities in ViTs to extract sensitive patient information, compromising patient privacy.
This work addresses these vulnerabilities to ensure the trustworthiness and reliability of ViTs in medical applications.
arXiv Detail & Related papers (2023-05-14T00:17:33Z) - Probabilistic Categorical Adversarial Attack & Adversarial Training [45.458028977108256]
The existence of adversarial examples raises serious concerns about applying Deep Neural Networks (DNNs) to safety-critical tasks.
How to generate adversarial examples with categorical data is an important problem that lacks extensive exploration.
We propose Probabilistic Categorical Adversarial Attack (PCAA), which transfers the discrete optimization problem to a continuous problem that can be solved efficiently by Projected Gradient Descent.
arXiv Detail & Related papers (2022-10-17T19:04:16Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Adversarial Attack Driven Data Augmentation for Accurate And Robust
Medical Image Segmentation [0.0]
We propose a new augmentation method that introduces adversarial attack techniques.
We also introduce the concept of Inverse FGSM, which perturbs inputs in the opposite direction to FGSM for data augmentation (see the sketch after this list).
The overall analysis of the experiments indicates a novel use of adversarial machine learning together with improved robustness.
arXiv Detail & Related papers (2021-05-25T17:44:19Z) - Curse or Redemption? How Data Heterogeneity Affects the Robustness of
Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to User and Entity Behaviour Analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
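The Inverse FGSM idea mentioned in the data-augmentation entry above, perturbing inputs in the direction that decreases the loss rather than increases it, can be sketched as follows. This is a generic, hypothetical PyTorch illustration of the concept, not code from the cited paper; `fgsm_like_perturbation` and its defaults are assumptions.

```python
# Minimal sketch of FGSM vs. "Inverse FGSM" augmentation, assuming a PyTorch
# classifier with inputs in [0, 1]; illustrative only, not the cited paper's code.
import torch
import torch.nn.functional as F

def fgsm_like_perturbation(model, x, y, eps=2/255, inverse=False):
    """FGSM moves inputs to *increase* the loss; the inverse variant moves them
    to *decrease* it, producing perturbed-but-easier samples that can be mixed
    into training as augmentation."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    step = -eps * grad.sign() if inverse else eps * grad.sign()
    return (x + step).clamp(0, 1).detach()

# Usage: augment a training batch with both variants.
# x_adv  = fgsm_like_perturbation(model, x, y)                 # adversarial sample
# x_easy = fgsm_like_perturbation(model, x, y, inverse=True)   # inverse-FGSM sample
```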