The Hidden Adversarial Vulnerabilities of Medical Federated Learning
- URL: http://arxiv.org/abs/2310.13893v1
- Date: Sat, 21 Oct 2023 02:21:39 GMT
- Title: The Hidden Adversarial Vulnerabilities of Medical Federated Learning
- Authors: Erfan Darzi, Florian Dubost, Nanna M. Sijtsema, P.M.A. van Ooijen
- Abstract summary: Using gradient information from prior global model updates, adversaries can enhance the efficiency and transferability of their attacks.
Our findings underscore the need to revisit our understanding of AI security in federated healthcare settings.
- Score: 1.604444445227806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we delve into the susceptibility of federated medical image
analysis systems to adversarial attacks. Our analysis uncovers a novel
exploitation avenue: using gradient information from prior global model
updates, adversaries can enhance the efficiency and transferability of their
attacks. Specifically, we demonstrate that single-step attacks (e.g., FGSM),
when aptly initialized, can outperform their iterative counterparts while
demanding less computation. Our findings underscore the
need to revisit our understanding of AI security in federated healthcare
settings.
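To make the described mechanism concrete, here is a minimal PyTorch sketch of a warm-started single-step attack. It is an illustrative reading of the abstract rather than the authors' exact procedure: the function name fgsm_with_warm_start, the warm_frac parameter, and the specific heuristic of probing the loss gradient of a previously broadcast global model and splitting the L-infinity budget between the warm start and the final FGSM step are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def fgsm_with_warm_start(model, x, y, eps, prev_global_model=None, warm_frac=0.5):
    """Craft adversarial examples for a batch (x, y) with a single FGSM step,
    warm-started from gradient information tied to an earlier global model.
    (Hypothetical sketch; not the paper's exact attack.)"""
    x_adv = x.clone().detach()

    # Warm start: probe the loss gradient of a previously received global model
    # on the same batch -- one cheap proxy for "gradient information from prior
    # global model updates".
    if prev_global_model is not None:
        x_probe = x_adv.clone().requires_grad_(True)
        loss_prev = F.cross_entropy(prev_global_model(x_probe), y)
        grad_prev = torch.autograd.grad(loss_prev, x_probe)[0]
        x_adv = x_adv + warm_frac * eps * grad_prev.sign()

    # Single FGSM step on the current global model, spending the remaining budget.
    x_step = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_step), y)
    grad = torch.autograd.grad(loss, x_step)[0]
    x_adv = x_adv + (1.0 - warm_frac) * eps * grad.sign()

    # Project back into the eps-ball around the clean input and the valid range.
    x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

In a federated round, a compromised client could apply this to its local batch using the current and a previously broadcast global model; how much the warm start helps depends on how similar consecutive global models are.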
Related papers
- Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z)
- Ensuring Medical AI Safety: Explainable AI-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data [14.991686165405959]
We introduce a semi-automated framework for identifying spurious behavior from both the data and the model perspective.
This allows the retrieval of spurious data points and the detection of model circuits that encode the associated prediction rules.
We show the applicability of our framework using four medical datasets, featuring controlled and real-world spurious correlations.
arXiv Detail & Related papers (2025-01-23T16:39:09Z)
- Exploring adversarial attacks in federated learning for medical imaging [1.604444445227806]
Federated learning offers a privacy-preserving framework for medical image analysis but exposes the system to adversarial attacks.
This paper aims to evaluate the vulnerabilities of federated learning networks in medical image analysis against such attacks.
arXiv Detail & Related papers (2023-10-10T00:39:58Z)
- On enhancing the robustness of Vision Transformers: Defensive Diffusion [0.0]
ViTs, the state-of-the-art vision models, rely on large amounts of patient data for training.
Adversaries may exploit vulnerabilities in ViTs to extract sensitive patient information, compromising patient privacy.
This work addresses these vulnerabilities to ensure the trustworthiness and reliability of ViTs in medical applications.
arXiv Detail & Related papers (2023-05-14T00:17:33Z)
- Probabilistic Categorical Adversarial Attack & Adversarial Training [45.458028977108256]
The existence of adversarial examples raises serious concerns about applying Deep Neural Networks (DNNs) to safety-critical tasks.
How to generate adversarial examples with categorical data is an important problem that lacks extensive exploration.
We propose Probabilistic Categorical Adversarial Attack (PCAA), which transfers the discrete optimization problem to a continuous problem that can be solved efficiently by Projected Gradient Descent.
arXiv Detail & Related papers (2022-10-17T19:04:16Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attack Driven Data Augmentation for Accurate And Robust Medical Image Segmentation [0.0]
We propose a new augmentation method that introduces adversarial attack techniques into training.
We also introduce the concept of Inverse FGSM, which perturbs in the direction opposite to FGSM for data augmentation (a minimal sketch follows this list).
The overall analysis of the experiments indicates a novel use of adversarial machine learning together with enhanced robustness.
arXiv Detail & Related papers (2021-05-25T17:44:19Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate these attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
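As referenced in the "Adversarial Attack Driven Data Augmentation" entry above, here is a minimal PyTorch sketch of FGSM-style augmentation with an "Inverse FGSM" variant, read here as stepping against the loss gradient instead of along it. The function name fgsm_augment, the eps default, and that interpretation are assumptions for illustration; the cited paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_augment(model, x, y, eps=2.0 / 255.0, inverse=False):
    """Return a perturbed copy of a batch x for use as augmented training data.

    inverse=False -- standard FGSM: step up the loss gradient.
    inverse=True  -- "Inverse FGSM" as interpreted here: step down the loss
                     gradient (an assumption about the cited method).
    """
    x_pert = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_pert), y)  # also works with per-pixel masks
    grad = torch.autograd.grad(loss, x_pert)[0]
    direction = -1.0 if inverse else 1.0
    x_aug = x_pert.detach() + direction * eps * grad.sign()
    return torch.clamp(x_aug, 0.0, 1.0)
```

Both variants can be mixed with clean samples during training to form an augmented set.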
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.