FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data
Commitment for Federated Learning
- URL: http://arxiv.org/abs/2104.08020v1
- Date: Fri, 16 Apr 2021 10:29:26 GMT
- Title: FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data
Commitment for Federated Learning
- Authors: Bo Zhao, Peng Sun, Liming Fang, Tao Wang, Keyu Jiang
- Abstract summary: Federated learning allows multiple clients to collaboratively train statistical models without disclosing private data.
Byzantine workers may launch data poisoning and model poisoning attacks.
Most of the existing Byzantine-robust FL schemes are either ineffective against several advanced poisoning attacks or need to centralize a public validation dataset.
We propose FedCom, a novel Byzantine-robust federated learning framework by incorporating the idea of commitment from cryptography.
- Score: 8.165895353387853
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning (FL) is a promising privacy-preserving distributed machine
learning methodology that allows multiple clients (i.e., workers) to
collaboratively train statistical models without disclosing private training
data. Because the data remain localized and the on-device training process
goes uninspected, Byzantine workers may launch data poisoning and model
poisoning attacks that seriously degrade model performance or prevent the
model from converging. Most of
the existing Byzantine-robust FL schemes are either ineffective against several
advanced poisoning attacks or need to centralize a public validation dataset,
which is intractable in FL. Moreover, to the best of our knowledge, none of the
existing Byzantine-robust distributed learning methods performs well when data
are non-independent and identically distributed (Non-IID) among
clients. To address these issues, we propose FedCom, a novel Byzantine-robust
federated learning framework by incorporating the idea of commitment from
cryptography, which tolerates both data poisoning and model poisoning
under practical Non-IID data partitions. Specifically, in FedCom,
each client is first required to make a commitment to its local training data
distribution. Then, we identify poisoned datasets by comparing the Wasserstein
distance among commitments submitted by different clients. Furthermore, we
distinguish abnormal local model updates from benign ones by testing each local
model's behavior on its corresponding data commitment. We conduct an extensive
performance evaluation of FedCom. The results demonstrate its effectiveness and
superior performance compared to the state-of-the-art Byzantine-robust schemes
in defending against typical data poisoning and model poisoning attacks under
practical Non-IID data distributions.
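To make the two screening steps above concrete, here is a minimal sketch in which a commitment is approximated as a 1-D sample of a client's claimed data distribution; `screen_commitments`, `tolerance`, the median-based outlier rule, and `passes_behavior_test` are illustrative assumptions, not FedCom's actual cryptographic commitment scheme.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def screen_commitments(commitments, tolerance=3.0):
    """Flag clients whose claimed distribution is far from everyone else's.

    commitments: dict mapping client_id -> 1-D sample drawn from the
    client's (claimed) local data distribution.
    """
    ids = list(commitments)
    n = len(ids)
    dist = np.zeros((n, n))
    # Pairwise 1-D Wasserstein distances between the claimed distributions.
    for i in range(n):
        for j in range(i + 1, n):
            d = wasserstein_distance(commitments[ids[i]], commitments[ids[j]])
            dist[i, j] = dist[j, i] = d
    # A client whose mean distance to the others is a robust outlier is suspect.
    avg = dist.sum(axis=1) / (n - 1)
    med = np.median(avg)
    mad = np.median(np.abs(avg - med)) + 1e-12
    return {ids[k] for k in range(n) if (avg[k] - med) / mad > tolerance}

def passes_behavior_test(update_loss_on_commitment, reported_loss, slack=0.10):
    """Accept a local update only if its loss on the client's own committed
    data stays close to the loss the client reported (hypothetical test)."""
    return update_loss_on_commitment <= reported_loss * (1.0 + slack)
```

Under this sketch, a client flagged by either check would simply be excluded from the current aggregation round.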
Related papers
- Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by integrating metadata gathered by the local training instances with differential privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
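The summary above does not specify the anomaly test, so the following is only a minimal sketch under stated assumptions: each client reports one scalar of training metadata (here, a final local loss) through the Laplace mechanism, and the server eliminates clients whose noisy reports are extreme outliers. `dp_report`, `eliminate_users`, `sensitivity`, and `z_cut` are hypothetical names and choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_report(value, sensitivity=1.0, epsilon=1.0):
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    return value + rng.laplace(scale=sensitivity / epsilon)

def eliminate_users(reports, z_cut=3.0):
    # reports: dict client_id -> DP-noised metadata value.
    vals = np.array(list(reports.values()), dtype=float)
    z = (vals - vals.mean()) / (vals.std() + 1e-12)
    return {cid for cid, s in zip(reports, z) if abs(s) > z_cut}
```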
arXiv Detail & Related papers (2024-04-19T10:36:00Z)
- Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning [4.907460152017894]
Federated Learning (FL) is a collaborative learning paradigm enabling participants to collectively train a shared machine learning model.
Current FL defense strategies against data poisoning attacks involve a trade-off between accuracy and robustness.
We present FedZZ, which harnesses a zone-based deviating update (ZBDU) mechanism to effectively counter data poisoning attacks in FL.
arXiv Detail & Related papers (2024-04-05T14:37:49Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed learning framework based on collaborative model training across distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
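As an illustration of the AT framework mentioned here (not the paper's logits-calibration method), a generic FGSM-based local training step might look like the following sketch; `local_at_step` and `eps` are illustrative choices.

```python
import torch
import torch.nn.functional as F

def local_at_step(model, x, y, optimizer, eps=8 / 255):
    # Craft FGSM adversarial examples from the clean batch.
    x_req = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    x_adv = (x_req + eps * grad.sign()).clamp(0, 1).detach()
    # Train the local model on the adversarial batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```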
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection [29.491675102478798]
We introduce SureFED, a novel framework for robust federated learning.
SureFED establishes trust using the local information of benign clients.
We theoretically prove the robustness of our algorithm against data and model poisoning attacks.
arXiv Detail & Related papers (2023-08-04T23:51:05Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
However, FL suffers from the cross-client generative adversarial network (GAN)-based (C-GANs) attack.
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several federated learning algorithms, such as FedAvg, FedProx, and Federated Curvature (FedCurv), have already been proposed.
As a side product of this work, we release the non-IID versions of the datasets we used, so as to facilitate further comparisons from the FL community.
arXiv Detail & Related papers (2023-03-31T10:13:01Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
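A loose sketch of such client-side matching, under the assumption that matching the loss landscape can be approximated by matching mean feature embeddings: `embed` stands for any feature extractor, and all names and hyperparameters below are illustrative rather than FedDM's actual objective.

```python
import torch

def match_local_distribution(embed, real_x, syn_x, steps=100, lr=0.1):
    # Optimize synthetic examples so their mean embedding approaches the
    # mean embedding of the real local data.
    syn_x = syn_x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([syn_x], lr=lr)
    with torch.no_grad():
        target = embed(real_x).mean(dim=0)  # mean embedding of real data
    for _ in range(steps):
        opt.zero_grad()
        loss = (embed(syn_x).mean(dim=0) - target).pow(2).sum()
        loss.backward()
        opt.step()
    return syn_x.detach()
```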
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Data-SUITE: Data-centric identification of in-distribution incongruous examples [81.21462458089142]
Data-SUITE is a data-centric framework to identify incongruous regions of in-distribution (ID) data.
We empirically validate Data-SUITE's performance and coverage guarantees.
arXiv Detail & Related papers (2022-02-17T18:58:31Z)
- Byzantine-Robust Federated Learning via Credibility Assessment on Non-IID Data [1.4146420810689422]
Federated learning is a novel framework that enables resource-constrained edge devices to jointly learn a model.
Standard federated learning is vulnerable to Byzantine attacks.
We propose a Byzantine-robust framework for federated learning via credibility assessment on non-IID data.
arXiv Detail & Related papers (2021-09-06T12:18:02Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)