Separation of Powers in Federated Learning
- URL: http://arxiv.org/abs/2105.09400v1
- Date: Wed, 19 May 2021 21:00:44 GMT
- Title: Separation of Powers in Federated Learning
- Authors: Pau-Chen Cheng, Kevin Eykholt, Zhongshu Gu, Hani Jamjoom, K. R.
Jayaram, Enriquillo Valdez, Ashish Verma
- Abstract summary: Federated Learning (FL) enables collaborative training among mutually distrusting parties.
Recent attacks have reconstructed large fractions of training data from ostensibly "sanitized" model updates.
We introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture.
- Score: 5.966064140042439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) enables collaborative training among mutually
distrusting parties. Model updates, rather than training data, are concentrated
and fused in a central aggregation server. A key security challenge in FL is
that an untrustworthy or compromised aggregation process might lead to
unforeseeable information leakage. This challenge is especially acute due to
recently demonstrated attacks that have reconstructed large fractions of
training data from ostensibly "sanitized" model updates.
In this paper, we introduce TRUDA, a new cross-silo FL system, employing a
trustworthy and decentralized aggregation architecture to break down
information concentration with regard to a single aggregator. Based on the
unique computational properties of model-fusion algorithms, all exchanged model
updates in TRUDA are disassembled at the parameter-granularity and re-stitched
to random partitions designated for multiple TEE-protected aggregators. Thus,
each aggregator only has a fragmentary and shuffled view of model updates and
is oblivious to the model architecture. Our new security mechanisms can
fundamentally mitigate training reconstruction attacks, while still preserving
the final accuracy of trained models and keeping performance overheads low.
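The core mechanism described above can be illustrated with a minimal sketch: flatten each client's model update, shuffle it with a shared permutation at parameter granularity, split the result into fragments for multiple aggregators, fuse each fragment independently, and re-stitch the result. The function names, the use of a single permutation, and plain averaging as the fusion rule are all illustrative assumptions, not TRUDA's exact protocol (which additionally runs the aggregators inside TEEs).

```python
import numpy as np

def shuffle_partition(update, perm, n_aggregators):
    """Disassemble a flat model update at parameter granularity,
    shuffle it with a shared permutation, and split it into one
    fragment per aggregator."""
    shuffled = update[perm]
    return np.array_split(shuffled, n_aggregators)

def reassemble(fragments, perm):
    """Re-stitch the fused fragments and undo the permutation."""
    shuffled = np.concatenate(fragments)
    out = np.empty_like(shuffled)
    out[perm] = shuffled
    return out

rng = np.random.default_rng(0)
n_params, n_clients, n_aggregators = 8, 3, 2
updates = [rng.normal(size=n_params) for _ in range(n_clients)]
perm = rng.permutation(n_params)  # hidden from the aggregators themselves

# Each aggregator sees only its own shuffled fragment from every client,
# so no single aggregator holds a complete, ordered model update.
per_agg = [shuffle_partition(u, perm, n_aggregators) for u in updates]
fused = [np.mean([per_agg[c][a] for c in range(n_clients)], axis=0)
         for a in range(n_aggregators)]

global_update = reassemble(fused, perm)
# Averaging commutes with permutation and partitioning, so accuracy
# of the fused model is unchanged.
assert np.allclose(global_update, np.mean(updates, axis=0))
```

Because parameter-wise fusion rules such as averaging commute with permutation and partitioning, the re-stitched result is identical to centralized aggregation, which is why the scheme preserves final model accuracy.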
Related papers
- Federated Learning for Misbehaviour Detection with Variational Autoencoders and Gaussian Mixture Models [0.2999888908665658]
Federated Learning (FL) has become an attractive approach to collaboratively train Machine Learning (ML) models.
This work proposes a novel unsupervised FL approach for the identification of potential misbehavior in vehicular environments.
We leverage the computing capabilities of public cloud services for model aggregation purposes.
arXiv Detail & Related papers (2024-05-16T08:49:50Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- MimiC: Combating Client Dropouts in Federated Learning by Mimicking Central Updates [8.363640358539605]
Federated learning (FL) is a promising framework for privacy-preserving collaborative learning.
This paper investigates the convergence of the classical FedAvg algorithm with arbitrary client dropouts.
We then design a novel training algorithm named MimiC, where the server modifies each received model update based on the previous ones.
arXiv Detail & Related papers (2023-06-21T12:11:02Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further improve the shared-parameter aggregation process, we propose DFed, integrating local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design [18.675244280002428]
We propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques.
In SCFL, each edge device uploads a privacy-preserving coded dataset to the server, which is generated by adding noise to the projected local dataset.
We show that SCFL learns a better model within the given time and achieves a better privacy-performance tradeoff than the baseline methods.
arXiv Detail & Related papers (2022-11-08T09:58:36Z)
- Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building not only more accurate, but also more certifiably-robust models.
arXiv Detail & Related papers (2022-06-06T12:10:53Z)
- FedRAD: Federated Robust Adaptive Distillation [7.775374800382709]
Collaborative learning frameworks that aggregate model updates are vulnerable to model poisoning attacks from adversarial clients.
We propose a novel robust aggregation method, Federated Robust Adaptive Distillation (FedRAD), to detect adversaries and robustly aggregate local models.
The results show that FedRAD outperforms all other aggregators in the presence of adversaries, as well as in heterogeneous data distributions.
arXiv Detail & Related papers (2021-12-02T16:50:57Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates [6.758334200305236]
Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data locally on the client.
In this paper, we propose to mitigate these failures and attacks from a spatial-temporal perspective.
Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space.
arXiv Detail & Related papers (2021-07-03T18:48:11Z)
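The clustering-based filtering described in the last entry can be sketched as: measure each client update's distance from a robust center in parameter space and exclude geometric outliers before averaging. The function name and the simple distance-to-median rule below are illustrative assumptions, not the paper's exact clustering method.

```python
import numpy as np

def filter_updates(updates, z_thresh=2.0):
    """Exclude updates whose distance to the coordinate-wise median
    deviates by more than z_thresh robust standard deviations
    (an illustrative geometric outlier rule), then average the rest."""
    U = np.asarray(updates)
    center = np.median(U, axis=0)              # robust center in parameter space
    dists = np.linalg.norm(U - center, axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    keep = np.abs(dists - np.median(dists)) <= z_thresh * 1.4826 * mad
    return U[keep].mean(axis=0), keep

rng = np.random.default_rng(1)
honest = [rng.normal(0.5, 0.05, size=4) for _ in range(8)]
byzantine = [np.full(4, 50.0)]  # a poisoned, out-of-distribution update
agg, kept = filter_updates(honest + byzantine)
assert not kept[-1]  # the poisoned update is geometrically excluded
```

The key design point shared with the paper is that malicious updates tend to be geometric outliers in the parameter space, so distance-based clustering can separate them from honest updates without inspecting training data.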
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.