Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization
- URL: http://arxiv.org/abs/2202.01971v1
- Date: Fri, 4 Feb 2022 05:03:46 GMT
- Title: Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization
- Authors: Yifeng Zheng and Shangqi Lai and Yi Liu and Xingliang Yuan and Xun Yi and Cong Wang
- Abstract summary: We present a system design which offers efficient protection of individual model updates throughout the learning procedure.
Our system achieves accuracy comparable to the baseline, with practical performance.
- Score: 22.61730495802799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has recently emerged as a paradigm promising the benefits of harnessing rich data from diverse sources to train high-quality models, with the salient feature that training datasets never leave local devices. Only model updates are locally computed and shared for aggregation to produce a global model. While federated learning greatly alleviates privacy concerns compared to learning with centralized data, sharing model updates still poses privacy risks. In this paper, we present a system design which offers efficient protection of individual model updates throughout the learning procedure, allowing clients to provide only obscured model updates while a cloud server can still perform the aggregation. Our federated learning system first departs from prior work by supporting lightweight encryption and aggregation, as well as resilience against dropout clients with no impact on their participation in future rounds. Meanwhile, prior work largely overlooks bandwidth-efficiency optimization in the ciphertext domain and support for security against an actively adversarial cloud server; we fully explore both in this paper and provide effective and efficient mechanisms. Extensive experiments over several benchmark datasets (MNIST, CIFAR-10, and CelebA) show our system achieves accuracy comparable to the plaintext baseline, with practical performance.
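The abstract does not spell out the obscuring scheme here, but the core idea of aggregating masked updates can be illustrated with the classical pairwise additive-masking construction used by standard secure aggregation protocols (e.g., Bonawitz et al., CCS 2017): each pair of clients derives a shared mask from a common seed, one adds it and the other subtracts it, so the masks cancel exactly in the server's sum. The sketch below is a minimal Python illustration of that idea, not this paper's protocol; the modulus, the out-of-band seed agreement, and all function names are assumptions, and real systems add quantization and dropout handling on top.

    import numpy as np

    FIELD = 2**31 - 1  # prime modulus for the masked arithmetic (illustrative choice)

    def masked_update(client_id, update, pair_seeds):
        """Obscure one client's integer-quantized update with pairwise masks.

        pair_seeds maps each other client's id to a PRG seed shared by the pair.
        The lower-id client adds the pair's mask and the higher-id client
        subtracts it, so all masks cancel when the server sums the updates.
        """
        masked = update.astype(np.int64) % FIELD
        for other_id, seed in pair_seeds.items():
            mask = np.random.default_rng(seed).integers(0, FIELD, size=update.shape)
            if client_id < other_id:
                masked = (masked + mask) % FIELD
            else:
                masked = (masked - mask) % FIELD
        return masked

    def aggregate(masked_updates):
        """Server-side aggregation: individual updates stay hidden, the sum is exact."""
        total = np.zeros_like(masked_updates[0])
        for m in masked_updates:
            total = (total + m) % FIELD
        return total

    # Demo: three clients with pairwise seeds agreed out of band (e.g., key exchange).
    seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
    updates = [np.array([1, 2]), np.array([3, 4]), np.array([5, 6])]
    masked = [masked_update(i, updates[i],
                            {j: seeds[tuple(sorted((i, j)))] for j in range(3) if j != i})
              for i in range(3)]
    assert np.array_equal(aggregate(masked), sum(updates) % FIELD)

No single masked vector reveals its underlying values, yet the server's modular sum equals the plaintext sum. Note that this construction is exactly what the paper improves upon: a dropped-out client here would leave uncancelled masks behind, which motivates the dropout resilience claimed in the abstract.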
Related papers
- Few-Shot Class-Incremental Learning with Non-IID Decentralized Data [12.472285188772544]
Few-shot class-incremental learning is crucial for developing scalable and adaptive intelligent systems.
This paper introduces federated few-shot class-incremental learning, a decentralized machine learning paradigm.
We present a synthetic data-driven framework that leverages replay buffer data to maintain existing knowledge and facilitate the acquisition of new knowledge.
arXiv Detail & Related papers (2024-09-18T02:48:36Z)
- When Swarm Learning meets energy series data: A decentralized collaborative learning design based on blockchain [10.099134773737939]
Machine learning models offer the capability to forecast future energy production or consumption.
However, legal and policy constraints within specific energy sectors present technical hurdles in utilizing data from diverse sources.
We propose adopting a Swarm Learning scheme, which replaces the centralized server with a blockchain-based distributed network.
arXiv Detail & Related papers (2024-06-07T08:42:26Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties of participant data, or even to reconstruct that data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
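To make the stated attack surface concrete, here is a hypothetical sketch of how such a property-inference attacker could be set up: a linear classifier (scikit-learn's LogisticRegression) trained on aggregated update vectors, each labeled by whether the target client's data exhibited the sensitive property in that round. The arrays below are random stand-ins rather than real data; the sketch only mirrors the attack's shape as described in the summary, not the paper's exact procedure.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Stand-in attacker dataset: one row per observed training round, each row
    # being the aggregated model update; the label records whether the target
    # client's local data exhibited the sensitive property in that round.
    rng = np.random.default_rng(0)
    agg_updates = rng.normal(size=(200, 512))    # placeholder aggregates
    has_property = rng.integers(0, 2, size=200)  # placeholder labels

    attacker = LogisticRegression(max_iter=1000).fit(agg_updates, has_property)
    # Against a fresh round's aggregate, the attacker predicts the property:
    print(attacker.predict(agg_updates[:1]))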
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
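The summary names a contrastive loss for online knowledge distillation without giving its form. A common instantiation is an InfoNCE-style objective over representations of shared (public) samples, where a client pulls its representation of a sample toward a peer's representation of the same sample and pushes it away from the other samples in the batch. The PyTorch sketch below shows that generic form, not necessarily the paper's exact loss; the function name and temperature value are assumptions.

    import torch
    import torch.nn.functional as F

    def contrastive_distillation_loss(local_reps, peer_reps, temperature=0.1):
        """InfoNCE-style distillation: for each shared sample, the peer's
        representation of the same sample is the positive and all other peer
        representations in the batch serve as negatives.

        local_reps, peer_reps: (batch, dim) feature tensors.
        """
        local = F.normalize(local_reps, dim=1)
        peer = F.normalize(peer_reps, dim=1)
        logits = local @ peer.T / temperature  # pairwise cosine similarities
        targets = torch.arange(local.size(0), device=logits.device)  # diagonal positives
        return F.cross_entropy(logits, targets)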
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FedGEMS: Federated Learning of Larger Server Models via Selective Knowledge Fusion [19.86388925556209]
Federated Learning (FL) has emerged as a viable solution to learn a global model while keeping data private.
In this work, we investigate a novel paradigm to take advantage of a powerful server model to break through model capacity in FL.
arXiv Detail & Related papers (2021-10-21T10:06:44Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates [6.758334200305236]
Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data locally on the client.
In this paper, we propose to mitigate Byzantine failures and attacks from a spatial-temporal perspective.
Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space.
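As a rough illustration of the spatial half of this idea (the temporal analysis is omitted), one can cluster the flattened client updates in parameter space and aggregate only the majority cluster, discarding geometric outliers. The sketch below uses KMeans from scikit-learn purely for illustration; the paper's actual clustering method and parameters may differ.

    import numpy as np
    from sklearn.cluster import KMeans

    def robust_aggregate(updates, n_clusters=2, seed=0):
        """Cluster flattened client updates in parameter space and average only
        the largest cluster, dropping suspected faulty or malicious updates."""
        X = np.stack([u.ravel() for u in updates])
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(X)
        majority = np.bincount(labels).argmax()
        return X[labels == majority].mean(axis=0).reshape(updates[0].shape)

A majority-cluster rule like this assumes honest clients outnumber Byzantine ones and produce geometrically similar updates; both assumptions are standard in this line of work.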
arXiv Detail & Related papers (2021-07-03T18:48:11Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.