Histopathological Image Classification and Vulnerability Analysis using
Federated Learning
- URL: http://arxiv.org/abs/2310.07380v1
- Date: Wed, 11 Oct 2023 10:55:14 GMT
- Title: Histopathological Image Classification and Vulnerability Analysis using
Federated Learning
- Authors: Sankalp Vyas, Amar Nath Patra, Raj Mani Shukla
- Abstract summary: The server sends a copy of the global model to all clients, each client trains its copy locally, and the clients send the updated weights back to the server.
Data privacy is protected during training, as it takes place locally on the clients' devices.
However, the global model is susceptible to data poisoning attacks.
- Score: 1.104960878651584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Healthcare is one of the foremost applications of machine learning (ML).
Traditionally, ML models are trained on central servers, which aggregate data
from many distributed devices in order to make predictions on newly generated
data. This is a major concern, as the central model can access sensitive user
information, raising privacy issues. A federated learning (FL) approach can
help address this: the server sends a copy of the global model to every
client, each client trains its copy on local data, and the clients send the
updated weights back to the server. Over time, the global model improves and
becomes more accurate. Data privacy is protected during training, as it is
conducted locally on the clients' devices.
However, the global model is susceptible to data poisoning. We develop a
privacy-preserving FL technique for a skin cancer dataset and show that the
model is prone to data poisoning attacks. Ten clients train the model, but one
of them intentionally flips labels as an attack. This reduces the accuracy of
the global model, and as the percentage of flipped labels increases, the
accuracy decreases noticeably. We use stochastic gradient descent (SGD) to
optimize the model. Although FL can protect user privacy for healthcare
diagnostics, it is also vulnerable to data poisoning, which must be addressed.
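The setup the abstract describes can be made concrete with a short sketch. The snippet below is a minimal illustration under assumed details, not the authors' implementation: it uses a logistic-regression stand-in on synthetic data (the paper itself uses a skin cancer image dataset), runs one FedAvg-style round with ten clients, and lets client 0 flip an assumed 40% of its binary labels before local SGD training.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Local training: logistic-regression SGD on one client's data."""
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # sigmoid prediction
            w = w - lr * (p - y[i]) * X[i]       # gradient step
    return w

def flip_labels(y, fraction):
    """Label-flipping attack: invert a fraction of binary labels."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

# Ten clients with synthetic binary data; client 0 is malicious.
d, n_clients = 20, 10
true_w = rng.normal(size=d)
clients = []
for c in range(n_clients):
    X = rng.normal(size=(100, d))
    y = (X @ true_w > 0).astype(float)
    if c == 0:
        y = flip_labels(y, fraction=0.4)  # attacker flips 40% (assumed)
    clients.append((X, y))

# One FedAvg round: broadcast, train locally, average the weights.
global_w = np.zeros(d)
updates = [local_sgd(global_w.copy(), X, y) for X, y in clients]
global_w = np.mean(updates, axis=0)

# Evaluate on clean held-out data to observe the poisoning effect.
X_test = rng.normal(size=(1000, d))
y_test = (X_test @ true_w > 0).astype(float)
acc = np.mean(((X_test @ global_w) > 0).astype(float) == y_test)
print(f"global model accuracy after one round: {acc:.3f}")
```

Rerunning with a larger flip fraction, or with more than one malicious client, lowers the clean-test accuracy of the averaged model, mirroring the trend the abstract reports.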
Related papers
- FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks [1.689369173057502]
Federated learning provides a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients can corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of a client's model weights (a sketch of the idea follows the citation line).
arXiv Detail & Related papers (2023-12-04T21:37:50Z)
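The summary only states that clients are scored by the probability of their model weights. As a hedged illustration of that idea, and not the published FedBayes rule, one can weight each client's update by its Gaussian likelihood under the previous global model, so that outlying, possibly poisoned updates contribute little:

```python
import numpy as np

def likelihood_weighted_average(client_weights, prev_global, sigma=1.0):
    """Hypothetical sketch: score each client's weight vector under a
    Gaussian centered on the previous global model, then aggregate with
    softmax-normalized scores. Outlying (possibly poisoned) updates get
    small weights. Illustrative only, not the actual FedBayes method."""
    W = np.stack(client_weights)                        # (n_clients, d)
    log_lik = -0.5 * np.sum(((W - prev_global) / sigma) ** 2, axis=1)
    p = np.exp(log_lik - log_lik.max())                 # stable softmax
    p /= p.sum()
    return p @ W                                        # weighted average
```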
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients (a generic gradient-matching sketch follows the citation line).
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
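The summary does not detail how CGI itself works; the classic gradient-matching recipe behind gradient inversion attacks (in the style of Deep Leakage from Gradients, Zhu et al., 2019) is sketched below in PyTorch. The model, observed gradients, and hyperparameters are assumed inputs; this is a generic baseline, not the paper's CGI attack.

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, target_grads, input_shape, n_classes, steps=200):
    """Gradient-matching reconstruction (DLG-style): optimize a dummy
    sample and soft label so that their gradient through the model
    matches an observed gradient. Generic sketch, not the CGI attack."""
    x = torch.randn(1, *input_shape, requires_grad=True)  # dummy input
    y = torch.randn(1, n_classes, requires_grad=True)     # dummy label logits
    opt = torch.optim.Adam([x, y], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y.softmax(dim=1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - t) ** 2).sum()
                    for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return x.detach(), y.softmax(dim=1).detach()
```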
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to learn collaboratively in a distributed way while protecting privacy.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a class-prototype similarity distillation method that aligns the local and global models within a federated framework (a sketch of the logit-distillation idea follows the citation line).
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
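The full FedCSD loss also weights the distillation by class-prototype similarity; as a simpler, hedged illustration of aligning local and global logits, a plain knowledge-distillation regularizer can be added to the local objective:

```python
import torch
import torch.nn.functional as F

def local_loss_with_logit_distillation(local_model, global_model, x, y,
                                       temperature=2.0, alpha=0.5):
    """Local task loss plus a KL term pulling local logits toward the
    frozen global model's logits. Illustrates the alignment idea; the
    actual FedCSD method additionally weights by prototype similarity."""
    local_logits = local_model(x)
    with torch.no_grad():
        global_logits = global_model(x)      # global model stays frozen
    task_loss = F.cross_entropy(local_logits, y)
    distill = F.kl_div(
        F.log_softmax(local_logits / temperature, dim=1),
        F.softmax(global_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return task_loss + alpha * distill
```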
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-IID distributed data steers clients toward skewed local optima.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- Federated Self-Supervised Contrastive Learning and Masked Autoencoder for Dermatological Disease Diagnosis [15.20791611477636]
In dermatological disease diagnosis, the private data collected by mobile dermatology assistants exist on distributed mobile devices of patients.
We propose two federated self-supervised learning frameworks for dermatological disease diagnosis with limited labels.
Experiments on dermatological disease datasets show that the proposed frameworks achieve superior accuracy over state-of-the-art methods.
arXiv Detail & Related papers (2022-08-24T02:49:35Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat FL targeted attacks.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep (a simple illustration of curbing local drift follows the citation line).
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
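FedReg's actual mechanism for alleviating forgetting is more involved than the summary lets on; as a minimal, hedged stand-in for the general idea of keeping local updates close to global knowledge, a FedProx-style proximal term (Li et al., 2020) can be added to each local step:

```python
import torch
import torch.nn.functional as F

def proximal_local_step(model, global_params, x, y, opt, mu=0.01):
    """One local training step with a FedProx-style proximal term that
    penalizes drift away from the global weights. A simple stand-in for
    alleviating forgetting; FedReg's actual mechanism differs."""
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    opt.step()
    return float(loss)
```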
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- FedProf: Optimizing Federated Learning with Dynamic Data Profiling [9.74942069718191]
Federated Learning (FL) has shown great potential as a privacy-preserving solution to learning from decentralized data.
A large proportion of clients may hold only low-quality data that are biased, noisy, or even irrelevant.
We propose a novel approach to optimizing FL under such circumstances without breaching data privacy.
arXiv Detail & Related papers (2021-02-02T20:10:14Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.