Performance Weighting for Robust Federated Learning Against Corrupted
Sources
- URL: http://arxiv.org/abs/2205.01184v1
- Date: Mon, 2 May 2022 20:01:44 GMT
- Title: Performance Weighting for Robust Federated Learning Against Corrupted
Sources
- Authors: Dimitris Stripelis, Marcin Abram, Jose Luis Ambite
- Abstract summary: Federated learning has emerged as a dominant computational paradigm for distributed machine learning.
In real-world applications, a federated environment may consist of a mixture of benevolent and malicious clients.
We show that the standard global aggregation scheme of local weights is inefficient in the presence of corrupted clients.
- Score: 1.76179873429447
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning has emerged as a dominant computational paradigm for
distributed machine learning. Its unique data privacy properties allow us to
collaboratively train models while offering participating clients certain
privacy-preserving guarantees. However, in real-world applications, a federated
environment may consist of a mixture of benevolent and malicious clients, with
the latter aiming to corrupt and degrade the federated model's performance.
Different corruption schemes may be applied such as model poisoning and data
corruption. Here, we focus on the latter, the susceptibility of federated
learning to various data corruption attacks. We show that the standard global
aggregation scheme of local weights is inefficient in the presence of corrupted
clients. To mitigate this problem, we propose a class of task-oriented
performance-based methods computed over a distributed validation dataset with
the goal of detecting and mitigating corrupted clients. Specifically, we construct a
robust weight aggregation scheme based on geometric mean and demonstrate its
effectiveness under random label shuffling and targeted label flipping attacks.
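As an illustration of such a performance-weighted scheme, the sketch below weights each client by the geometric mean of its per-class validation accuracies before averaging the local models. The per-class breakdown, the epsilon smoothing, and the function names are assumptions for illustration only; the paper defines the exact performance metric and the distributed validation protocol.

```python
import numpy as np

def geometric_mean_weights(per_class_acc, eps=1e-12):
    """Aggregation weight per client from per-class validation accuracy.

    per_class_acc: array of shape (num_clients, num_classes).
    The geometric mean collapses toward zero when a client performs
    badly on even one class -- the signature of label-flipping or
    label-shuffling corruption -- so such clients are down-weighted.
    """
    gm = np.exp(np.log(per_class_acc + eps).mean(axis=1))
    return gm / gm.sum()

def aggregate(client_models, mixing):
    """Weighted average of local model parameters.

    client_models: list of dicts mapping layer name -> np.ndarray.
    mixing: per-client aggregation weights summing to 1.
    """
    return {name: sum(m * model[name] for m, model in zip(mixing, client_models))
            for name in client_models[0]}
```

In contrast to a plain arithmetic mean over accuracies, the geometric mean is dominated by the worst class, so a targeted label-flipping attacker cannot hide behind good performance on the untouched classes.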
Related papers
- BRFL: A Blockchain-based Byzantine-Robust Federated Learning Model [8.19957400564017]
Federated learning, which stores data in distributed nodes and shares only model parameters, has gained significant attention for addressing data privacy concerns.
A challenge arises in federated learning due to the Byzantine Attack Problem, where malicious local models can compromise the global model's performance during aggregation.
This article proposes a Byzantine-Robust Federated Learning (BRFL) model that combines federated learning with blockchain technology.
arXiv Detail & Related papers (2023-10-20T10:21:50Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Client-specific Property Inference against Secure Aggregation in
Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated Learning is designed to address privacy concerns in learning models.
This new distributed paradigm safeguards data privacy, but it changes the attack surface because the server has no access to local datasets.
arXiv Detail & Related papers (2022-12-05T01:52:32Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
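The class-imbalance measure is not specified in this summary; one minimal stand-in (an assumption here, as is omitting the homomorphic-encryption step) is the squared distance between the selected group's label distribution and the uniform distribution:

```python
import numpy as np

def class_imbalance(group_counts):
    """Squared L2 distance from the uniform label distribution.

    group_counts: per-class sample counts aggregated over a candidate
    group of clients. Lower is more class-balanced; 0 means perfectly
    uniform. Fed-CBS derives its measure in a privacy-preserving way
    via homomorphic encryption, which this plaintext sketch omits.
    """
    p = group_counts / group_counts.sum()
    return float(np.sum((p - 1.0 / len(p)) ** 2))
```

A server could then, for example, greedily add the client whose data most reduces this measure until the sampling budget is reached.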
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Anomaly Detection through Unsupervised Federated Learning [0.0]
Federated learning is proving to be one of the most promising paradigms for leveraging distributed resources.
We propose a novel method in which, through a preprocessing phase, clients are grouped into communities.
The resulting anomaly detection model is then shared and used to detect anomalies within the clients of the same community.
arXiv Detail & Related papers (2022-09-09T08:45:47Z) - RobustFed: A Truth Inference Approach for Robust Federated Learning [9.316565110931743]
Federated learning is a framework that enables clients to collaboratively train a global model under a central server's orchestration.
The aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior.
We propose a novel robust aggregation algorithm inspired by the truth inference methods in crowdsourcing.
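A minimal sketch of the idea, assuming the classic crowdsourcing loop of alternating between estimating the "truth" and the per-client reliabilities (the paper's actual algorithm may differ):

```python
import numpy as np

def truth_inference_aggregate(updates, n_iters=10, eps=1e-12):
    """Robust aggregation in the spirit of crowdsourcing truth inference.

    updates: array of shape (num_clients, dim), flattened local updates.
    Alternates between (1) a reliability-weighted estimate of the global
    update and (2) reliabilities inversely proportional to each client's
    distance from that estimate, so outlying (adversarial) updates are
    progressively down-weighted.
    """
    weights = np.full(len(updates), 1.0 / len(updates))
    for _ in range(n_iters):
        truth = weights @ updates
        dists = np.linalg.norm(updates - truth, axis=1) + eps
        weights = (1.0 / dists) / np.sum(1.0 / dists)
    return truth, weights
```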
arXiv Detail & Related papers (2021-07-18T09:34:57Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources.
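A simplified stand-in for the auto-weighting idea (a softmax over negative empirical losses; arfl's actual rule comes from a joint optimization and will differ):

```python
import numpy as np

def auto_weights(client_losses, temperature=1.0):
    """Aggregation weights that decay with empirical loss.

    Clients whose local loss is far above the best client's loss --
    typical of corrupted data sources -- receive exponentially smaller
    weight in the weighted average of model updates.
    """
    losses = np.asarray(client_losses, dtype=float)
    w = np.exp(-(losses - losses.min()) / temperature)
    return w / w.sum()
```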
arXiv Detail & Related papers (2021-01-14T21:54:55Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.