FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against
Adversarial Attacks
- URL: http://arxiv.org/abs/2312.04587v1
- Date: Mon, 4 Dec 2023 21:37:50 GMT
- Title: FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against
Adversarial Attacks
- Authors: Marc Vucovich, Devin Quinn, Kevin Choi, Christopher Redino, Abdul
Rahman, Edward Bowen
- Abstract summary: Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients are able to corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probability of a client's model weights given the previous global model's weights.
- Score: 1.689369173057502
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning has created a decentralized method to train a machine
learning model without needing direct access to client data. The main goal of a
federated learning architecture is to protect the privacy of each client while
still contributing to the training of the global model. However, privacy, the
main advantage of federated learning, is also its easiest aspect to exploit.
Without being able to see the clients' data, it is difficult to
determine the quality of the data. By utilizing data poisoning methods, such as
backdoor or label-flipping attacks, or by sending manipulated information about
their data back to the server, malicious clients are able to corrupt the global
model and degrade performance across all clients within a federation. Our novel
aggregation method, FedBayes, mitigates the effect of a malicious client by
calculating the probability of a client's model weights given the prior
model's weights, using Bayesian statistics. Our results show that this approach
negates the effects of malicious clients and protects the overall federation.
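Read literally, the aggregation step admits a compact sketch: score each client's weights under a prior centered at the previous global model, and use the normalized scores as aggregation weights. The Gaussian likelihood, the `sigma` hyperparameter, and the softmax normalization below are illustrative assumptions, not the paper's exact probability model.

```python
import numpy as np

def fedbayes_aggregate(client_weights, prior_weights, sigma=1.0):
    """Aggregate client models, down-weighting clients whose weights are
    improbable under a Gaussian prior centered at the previous global model.

    Sketch only: the Gaussian likelihood, sigma, and softmax normalization
    are illustrative assumptions."""
    client_weights = np.stack(client_weights)       # (n_clients, n_params)
    # Log-likelihood of each client's weights under N(prior_weights, sigma^2 I).
    sq_dist = np.sum((client_weights - prior_weights) ** 2, axis=1)
    log_lik = -sq_dist / (2.0 * sigma ** 2)
    # Normalize into aggregation weights: outliers get near-zero weight.
    probs = np.exp(log_lik - log_lik.max())
    probs /= probs.sum()
    return probs @ client_weights                   # weighted average

# Example: two honest clients near the prior, one poisoned outlier.
prior = np.zeros(4)
clients = [prior + 0.1, prior - 0.1, prior + 25.0]  # third is malicious
print(fedbayes_aggregate(clients, prior, sigma=1.0))
```

In this toy run, the outlier's likelihood is vanishingly small, so its aggregation weight collapses toward zero and the global model stays near the honest clients' average.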
Related papers
- ConDa: Fast Federated Unlearning with Contribution Dampening [46.074452659791575]
ConDa is a framework that performs efficient unlearning by tracking down the parameters which affect the global model for each client.
We perform experiments on multiple datasets and demonstrate that ConDa is effective at forgetting a client's data.
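A loose sketch of the contribution-tracking-and-dampening idea as the summary describes it; the `damp` and `ratio_thresh` parameters and the dominance criterion are hypothetical stand-ins for the paper's actual rules.

```python
import numpy as np

# Hypothetical sketch of contribution tracking and dampening.
contrib = {}  # client_id -> cumulative |update| per parameter

def record_round(updates):
    """updates: dict client_id -> parameter update (np.ndarray)."""
    for cid, delta in updates.items():
        contrib[cid] = contrib.get(cid, 0.0) + np.abs(delta)

def unlearn(global_model, cid, damp=0.5, ratio_thresh=0.5):
    """Dampen parameters dominated by client `cid`'s contributions."""
    total = sum(contrib.values())
    share = contrib[cid] / np.maximum(total, 1e-12)
    mask = share > ratio_thresh        # parameters this client dominated
    global_model = global_model.copy()
    global_model[mask] *= damp         # dampen rather than retrain from scratch
    return global_model
```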
arXiv Detail & Related papers (2024-10-05T12:45:35Z)
- Using Synthetic Data to Mitigate Unfairness and Preserve Privacy through Single-Shot Federated Learning [6.516872951510096]
We propose a strategy that promotes fair predictions across clients without the need to pass information between the clients and server.
Each client's synthetic dataset is then passed to the server, and the collection is used to train the server model.
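The single-shot flow is easy to sketch; the per-class Gaussian generator below is an illustrative assumption (the paper's generator and fairness mechanism are more involved).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_synthetic(X, y, n_per_class=100, rng=None):
    """Each client fits a simple per-class Gaussian and samples from it,
    sharing only the synthetic points with the server."""
    rng = rng or np.random.default_rng(0)
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        mu, std = Xc.mean(axis=0), Xc.std(axis=0) + 1e-6
        Xs.append(rng.normal(mu, std, size=(n_per_class, X.shape[1])))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

def server_train(client_synthetic_sets):
    """Server: pool every client's synthetic set and train once (single shot)."""
    X = np.vstack([X for X, _ in client_synthetic_sets])
    y = np.concatenate([y for _, y in client_synthetic_sets])
    return LogisticRegression(max_iter=1000).fit(X, y)
```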
arXiv Detail & Related papers (2024-09-14T21:04:11Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from the clients.
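For intuition, here is not CGI itself but the classic single-sample gradient-inversion primitive such attacks build on: for a layer with a bias term, the private input can be read directly off the observed gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Classic single-sample gradient inversion for logistic regression:
# dL/dW = (p - y) * x and dL/db = (p - y), so x = (dL/dW) / (dL/db).
# This is the leakage GIA-style attacks exploit; CGI's client-side
# poisoning strategy itself is more involved (see the paper).
rng = np.random.default_rng(0)
x, y = rng.normal(size=5), 1.0          # victim's private sample
W, b = rng.normal(size=5), 0.0          # current model
p = sigmoid(W @ x + b)
grad_W, grad_b = (p - y) * x, (p - y)   # what the aggregator observes
x_reconstructed = grad_W / grad_b       # exact recovery of x
print(np.allclose(x, x_reconstructed))  # True
```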
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose FedDefender, a new defense mechanism that focuses on the client side to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even to outright reconstruct participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
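A minimal illustration of that finding on simulated updates, assuming the aggregated update carries a small property-correlated component; the paper's attack features and threat model are richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated setting: a linear model trained on aggregated updates learns to
# predict a client-specific property. Data here is synthetic for illustration.
rng = np.random.default_rng(0)
d, n_rounds = 50, 400
prop = rng.integers(0, 2, n_rounds)       # target property per round
signal = rng.normal(size=d)               # direction correlated with property
# Aggregated updates = noise + small property-dependent component.
updates = rng.normal(size=(n_rounds, d)) + 0.5 * prop[:, None] * signal

attacker = LogisticRegression(max_iter=1000).fit(updates[:300], prop[:300])
print("inference accuracy:", attacker.score(updates[300:], prop[300:]))
```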
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information [67.8846134295194]
Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model.
We propose FedRecover, which can recover an accurate global model from poisoning attacks at a small cost to the clients.
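A much-simplified sketch of recovery from stored history: re-aggregate logged benign updates round by round while skipping the detected attackers. Note that FedRecover's contribution is estimating these updates cheaply rather than replaying them verbatim, so treat this only as the surrounding intuition.

```python
import numpy as np

# Simplified replay-based recovery; FedRecover itself estimates (rather than
# replays) client updates to keep client-side cost small -- see the paper.
def recover(w_init, history, malicious_ids, lr=1.0):
    """history: list of rounds; each round is a dict client_id -> update."""
    w = w_init.copy()
    for round_updates in history:
        benign = [u for cid, u in round_updates.items()
                  if cid not in malicious_ids]
        if benign:
            w += lr * np.mean(benign, axis=0)
    return w
```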
arXiv Detail & Related papers (2022-10-20T00:12:34Z)
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model-update-based federated averaging algorithm to defend against Byzantine attacks.
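As an illustration of update-based robust averaging, the sketch below swaps the mean of client updates for a coordinate-wise median, one standard Byzantine-tolerant choice; the paper's exact aggregation rule may differ.

```python
import numpy as np

# Illustrative Byzantine-robust variant of update-based averaging: aggregate
# the clients' *updates* with a coordinate-wise median instead of the mean.
def robust_update_average(w_global, client_updates):
    updates = np.stack(client_updates)          # (n_clients, n_params)
    return w_global + np.median(updates, axis=0)

w = np.zeros(4)
honest = [np.full(4, 0.1), np.full(4, 0.12), np.full(4, 0.09)]
byzantine = [np.full(4, 1e6)]                   # arbitrary malicious update
print(robust_update_average(w, honest + byzantine))  # stays near 0.1
```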
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
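A loose sketch of the factorization idea: client weights are composed from a shared dictionary of weight factors, with an Indian-Buffet-Process-style binary selection per client. The factor shapes and the stick-breaking sampler below are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_in, d_out = 8, 16, 4
dictionary = rng.normal(size=(K, d_in, d_out))  # shared weight factors

def ibp_row(alpha=2.0):
    """One client's factor selection via the IBP stick-breaking construction."""
    nu = rng.beta(alpha, 1.0, size=K)           # stick-breaking weights
    pi = np.cumprod(nu)                         # P(factor k is active)
    return rng.random(K) < pi                   # binary selection

def client_weights(z):
    return dictionary[z].sum(axis=0)            # (d_in, d_out) layer weights

z = ibp_row()
W = client_weights(z)                           # this client's layer
```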
arXiv Detail & Related papers (2020-08-13T04:26:31Z)