MedFedPure: A Medical Federated Framework with MAE-based Detection and Diffusion Purification for Inference-Time Attacks
- URL: http://arxiv.org/abs/2511.11625v1
- Date: Fri, 07 Nov 2025 08:48:03 GMT
- Title: MedFedPure: A Medical Federated Framework with MAE-based Detection and Diffusion Purification for Inference-Time Attacks
- Authors: Mohammad Karami, Mohammad Reza Nemati, Aidin Kazemi, Ali Mikaeili Barzili, Hamid Azadegan, Behzad Moshiri
- Abstract summary: Adversarial attacks can subtly alter medical scans in ways invisible to the human eye yet powerful enough to mislead AI models. We present MedFedPure, a personalized federated learning defense framework designed to protect diagnostic AI models at inference time without compromising privacy or accuracy.
- Score: 1.3069778058355659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) has shown great potential in medical imaging, particularly for brain tumor detection using Magnetic Resonance Imaging (MRI). However, the models remain vulnerable at inference time when they are trained collaboratively through Federated Learning (FL), an approach adopted to protect patient privacy. Adversarial attacks can subtly alter medical scans in ways invisible to the human eye yet powerful enough to mislead AI models, potentially causing serious misdiagnoses. Existing defenses often assume centralized data and struggle to cope with the decentralized and diverse nature of federated medical settings. In this work, we present MedFedPure, a personalized federated learning defense framework designed to protect diagnostic AI models at inference time without compromising privacy or accuracy. MedFedPure combines three key elements: (1) a personalized FL model that adapts to the unique data distribution of each institution; (2) a Masked Autoencoder (MAE) that detects suspicious inputs by exposing hidden perturbations; and (3) an adaptive diffusion-based purification module that selectively cleans only the flagged scans before classification. Together, these steps offer robust protection while preserving the integrity of normal, benign images. We evaluated MedFedPure on the Br35H brain MRI dataset. The results show a significant gain in adversarial robustness, improving performance from 49.50% to 87.33% under strong attacks, while maintaining a high clean accuracy of 97.67%. By operating locally and in real time during diagnosis, our framework provides a practical path to deploying secure, trustworthy, and privacy-preserving AI tools in clinical workflows.
Index Terms: cancer, tumor detection, federated learning, masked autoencoder, diffusion, privacy
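The inference-time pipeline the abstract describes (MAE-based detection, then selective diffusion purification, then classification) could look roughly like the sketch below. The module names (`mae`, `purifier`, `classifier`), their call signatures, and the threshold `TAU` are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the MedFedPure-style inference gate described above.
# All modules and the threshold are assumptions for illustration only.
import torch

TAU = 0.05  # anomaly threshold; would be tuned per institution (assumption)

@torch.no_grad()
def medfedpure_inference(x, mae, purifier, classifier, mask_ratio=0.75):
    """x: (B, C, H, W) batch of MRI scans with values in [0, 1]."""
    # 1) MAE-based detection: reconstruct masked patches, measure error.
    recon = mae(x, mask_ratio=mask_ratio)           # assumed MAE forward API
    score = ((recon - x) ** 2).mean(dim=(1, 2, 3))  # per-image MSE

    # 2) Adaptive purification: denoise only the flagged scans.
    flagged = score > TAU
    x_clean = x.clone()
    if flagged.any():
        # assumed purifier API: add mild noise, then reverse-diffuse it away
        x_clean[flagged] = purifier(x[flagged])

    # 3) Classify the selectively purified batch.
    return classifier(x_clean), flagged
```

Benign scans pass through untouched, which is how the design preserves clean accuracy while hardening only suspicious inputs.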
Related papers
- A Novel Approach to Breast Cancer Segmentation using U-Net Model with Attention Mechanisms and FedProx [0.0]
Breast cancer is a leading cause of death among women worldwide, emphasizing the need for early detection and accurate diagnosis. The sensitive nature of medical data makes it challenging to develop accurate and private artificial intelligence models. FedProx is a promising approach for training precise machine learning models on non-IID local medical datasets.
arXiv Detail & Related papers (2025-10-21T22:38:18Z)
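For context, a minimal sketch of the FedProx local update this entry builds on: each client penalizes divergence from the frozen global weights with a proximal term (mu/2)||w - w_global||^2, which stabilizes training on non-IID data. All names here are illustrative.

```python
# FedProx local step: standard loss plus a proximal penalty that keeps the
# client model close to the server model on heterogeneous (non-IID) data.
import torch

def fedprox_local_step(model, global_params, batch, loss_fn, opt, mu=0.01):
    """global_params: detached copies of the current server weights."""
    x, y = batch
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    # proximal term against the frozen global weights
    prox = sum(((w - wg) ** 2).sum()
               for w, wg in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    opt.step()
```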
- Robust Training with Data Augmentation for Medical Imaging Classification [0.0]
Deep neural networks are increasingly being used to detect and diagnose medical conditions using medical imaging. Despite their utility, these models are highly vulnerable to adversarial attacks and distribution shifts. We propose a robust training algorithm with data augmentation to mitigate these vulnerabilities in medical image classification.
arXiv Detail & Related papers (2025-06-20T16:36:39Z)
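A rough sketch of the kind of robust training this entry describes, mixing standard augmentation with PGD adversarial examples; the `augment` helper and the epsilon/step settings are assumptions, not the paper's configuration.

```python
# Adversarial training with data augmentation: craft PGD examples from
# augmented inputs and train on both clean-augmented and adversarial views.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

def robust_step(model, x, y, augment, opt):
    x_aug = augment(x)                   # e.g. flips/crops/intensity jitter
    x_adv = pgd_attack(model, x_aug, y)  # worst-case perturbation of the batch
    loss = F.cross_entropy(model(torch.cat([x_aug, x_adv])),
                           torch.cat([y, y]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```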
- Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology [64.46054930696052]
Adversarial attacks pose significant challenges for vision models in critical fields like healthcare. Existing self-supervised adversarial training methods overlook the hierarchical structure of histopathology images. We propose Hierarchical Self-Supervised Adversarial Training (HSAT), which exploits this hierarchical structure to craft adversarial examples.
arXiv Detail & Related papers (2025-03-13T17:59:47Z)
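HSAT's exact hierarchical formulation is not spelled out in the summary; below is a simplified single-level sketch of self-supervised adversarial example crafting, perturbing one view to maximize an InfoNCE contrastive loss. HSAT extends this idea across the patch/region/slide hierarchy of whole-slide images.

```python
# Self-supervised adversarial example crafting (single level, simplified):
# perturb one augmented view so the contrastive loss against the other
# view is maximized.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temp=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temp                       # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def ssl_adv_example(encoder, view1, view2, eps=4/255, alpha=1/255, steps=5):
    x_adv = view1.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = info_nce(encoder(x_adv), encoder(view2))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # maximize the loss
        x_adv = torch.min(torch.max(x_adv, view1 - eps), view1 + eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```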
- Medical Multimodal Model Stealing Attacks via Adversarial Domain Alignment [79.41098832007819]
Medical multimodal large language models (MLLMs) are becoming an instrumental part of healthcare systems. As medical data is scarce and protected by privacy regulations, medical MLLMs represent valuable intellectual property. We introduce Adversarial Domain Alignment (ADA-STEAL), the first stealing attack against medical MLLMs.
arXiv Detail & Related papers (2025-02-04T16:04:48Z)
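As background, a generic sketch of the query-based stealing loop such attacks build on: a surrogate is trained to imitate the victim's soft outputs. ADA-STEAL's specific contribution (adversarially aligning non-medical query data to the medical domain) is not reproduced here; all names are illustrative.

```python
# Generic model-stealing step: query the victim, distill its soft labels
# into a surrogate. This is the standard extraction loop, not ADA-STEAL.
import torch
import torch.nn.functional as F

def stealing_step(victim, surrogate, queries, opt, temp=2.0):
    with torch.no_grad():
        teacher = F.softmax(victim(queries) / temp, dim=1)  # victim outputs
    student = F.log_softmax(surrogate(queries) / temp, dim=1)
    loss = F.kl_div(student, teacher, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```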
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples that evade forensic detection. It is effective in both white-box and black-box settings.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- Empowering Healthcare through Privacy-Preserving MRI Analysis [3.6394715554048234]
We introduce the Ensemble-Based Federated Learning (EBFL) framework. EBFL deviates from the conventional approach by emphasizing model features over sharing sensitive patient data. We achieve remarkable precision in the classification of brain tumors, including glioma, meningioma, pituitary, and non-tumor instances.
arXiv Detail & Related papers (2024-03-14T19:51:18Z)
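The summary gives few mechanistic details; purely as a loose illustration of ensemble-based federated inference, the sketch below averages the softmax outputs of per-client models instead of pooling patient data. This is an assumption about the general idea, not EBFL's actual design.

```python
# Toy ensemble inference over per-institution models: average class
# probabilities across sites, then take the majority class.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(client_models, x):
    probs = torch.stack([F.softmax(m(x), dim=1) for m in client_models])
    return probs.mean(dim=0).argmax(dim=1)  # averaged vote across sites
```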
- Adversarial Medical Image with Hierarchical Feature Hiding [38.551147309335185]
Adversarial examples (AEs) pose a serious security flaw in deep learning-based methods for medical images. It has been discovered that conventional adversarial attacks like PGD are easy to distinguish in the feature space, enabling accurate reactive defenses. We propose a simple yet effective hierarchical feature constraint (HFC), a novel add-on to conventional white-box attacks that helps hide the adversarial features within the target feature distribution.
arXiv Detail & Related papers (2023-12-04T07:04:20Z)
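A simplified sketch of the feature-hiding idea: alongside the usual attack loss, the adversarial example's intermediate features are pulled toward benign target-class statistics, so the AE is harder to separate in feature space. The paper's hierarchical, distribution-level modeling is reduced here to a single mean-matching penalty, which is an assumption for illustration.

```python
# Targeted attack with a feature-hiding penalty: minimize the targeted
# cross-entropy AND the distance between adversarial features and the
# empirical mean of benign target-class features.
import torch
import torch.nn.functional as F

def hfc_attack(model, feat_fn, x, y_target, feat_mu,
               eps=8/255, alpha=2/255, steps=20, lam=1.0):
    """feat_fn: intermediate feature map; feat_mu: benign target-class mean."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        atk = F.cross_entropy(model(x_adv), y_target)          # targeted loss
        hide = ((feat_fn(x_adv).mean(dim=(2, 3)) - feat_mu) ** 2).mean()
        grad = torch.autograd.grad(atk + lam * hide, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()           # descend both
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```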
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Preventing Unauthorized AI Over-Analysis by Medical Image Adversarial Watermarking [43.17275405041853]
We present a pioneering solution named Medical Image Adversarial watermarking (MIAD-MARK).
Our approach introduces watermarks that strategically mislead unauthorized AI diagnostic models, inducing erroneous predictions without compromising the integrity of the visual content.
Our solution effectively mitigates unauthorized exploitation of medical images even in the presence of sophisticated watermark removal networks.
arXiv Detail & Related papers (2023-03-17T09:37:41Z)
- Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging [47.99192239793597]
We evaluated the effect of privacy-preserving training of AI models on accuracy and fairness compared with non-private training. Our study shows that, under the challenging, realistic conditions of a real-life clinical dataset, privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
arXiv Detail & Related papers (2023-02-03T09:49:13Z)
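For reference, a standard DP-SGD step of the kind privacy-preserving training typically uses: clip each per-example gradient to norm C, then add Gaussian noise before the update. The clip norm and noise multiplier are illustrative values, not the paper's settings.

```python
# DP-SGD update: per-example gradient clipping plus Gaussian noise on the
# aggregated gradient, the standard recipe for differentially private training.
import torch

def dp_sgd_step(model, per_example_grads, opt, C=1.0, sigma=1.1):
    """per_example_grads: list aligned with model.parameters();
    each entry has shape (B, *param.shape)."""
    for p, g in zip(model.parameters(), per_example_grads):
        norms = g.flatten(1).norm(dim=1).clamp(min=C)           # >= C
        g = g / (norms / C).view(-1, *([1] * (g.dim() - 1)))    # clip to C
        noise = torch.randn_like(p) * sigma * C / g.size(0)     # noise on mean
        p.grad = g.mean(dim=0) + noise
    opt.step()
```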
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
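As background on secure aggregation, here is a toy sketch of pairwise masking: clients add random masks that cancel when the server sums the updates, so no individual update is revealed. A single seeded generator stands in for per-pair key agreement, and real protocols add dropout handling and finite-field arithmetic omitted here; this is not PriMIA's implementation.

```python
# Toy mask-based secure aggregation: pairwise masks cancel in the sum,
# so the server learns only the aggregate of the client updates.
import torch

def masked_updates(updates, seed=0):
    """updates: list of equally-shaped client tensors."""
    g = torch.Generator().manual_seed(seed)  # stand-in for per-pair keys
    n = len(updates)
    masked = [u.clone() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = torch.randn(updates[0].shape, generator=g)  # shared mask m_ij
            masked[i] += m   # client i adds the pairwise mask
            masked[j] -= m   # client j subtracts it; it cancels in the sum
    return masked

# The server sees only masked updates, yet their sum equals the true sum.
def server_sum(masked):
    return torch.stack(masked).sum(dim=0)
```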