Demystifying Swarm Learning: A New Paradigm of Blockchain-based
Decentralized Federated Learning
- URL: http://arxiv.org/abs/2201.05286v2
- Date: Mon, 17 Jan 2022 14:30:30 GMT
- Title: Demystifying Swarm Learning: A New Paradigm of Blockchain-based
Decentralized Federated Learning
- Authors: Jialiang Han, Yun Ma, Yudong Han
- Abstract summary: Federated learning (FL) is an emerging and promising privacy-preserving machine learning paradigm.
FL keeps users' private data on their devices and exchanges only the gradients of local models, which central custodians aggregate to cooperatively train a shared Deep Learning (DL) model.
Swarm Learning (SL) introduces a permissioned blockchain to securely onboard members and dynamically elect the leader.
- Score: 0.9638328197615845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is an emerging and promising privacy-preserving
machine learning paradigm that has attracted increasing attention from researchers
and developers. FL keeps users' private data on their devices and exchanges only the
gradients of local models, which central custodians aggregate to cooperatively train
a shared Deep Learning (DL) model. However, the security and fault tolerance of FL
have been increasingly discussed, because its central-custodian mechanism, or
star-shaped architecture, can be vulnerable to malicious attacks or software
failures. To address these problems, Swarm Learning (SL) introduces a permissioned
blockchain to securely onboard members and dynamically elect a leader, which allows
DL to be performed in a highly decentralized manner. Despite the tremendous
attention SL has received, there are few empirical studies of SL or of
blockchain-based decentralized FL that provide comprehensive knowledge of best
practices and precautions for deploying SL in real-world scenarios. Therefore, to
fill this knowledge gap between SL deployment and developers, we conduct what is, to
the best of our knowledge, the first comprehensive study of SL to date. In this
paper, we run experiments on 3 public datasets to answer 5 research questions,
present interesting findings, quantitatively analyze the reasons behind these
findings, and provide developers and researchers with practical suggestions. The
findings show that SL is suitable for most application scenarios, regardless of
whether the dataset is balanced, polluted, or biased over irrelevant features.
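To make the mechanisms described in the abstract concrete, here is a minimal Python sketch (ours, not the paper's code) that contrasts a star-shaped FL round, in which a central custodian averages the clients' local updates, with an SL-style round in which a dynamically elected member performs the aggregation for that round only. The helpers `local_update` and `average`, and the random leader election, are hypothetical stand-ins for real local training, the aggregation rule, and the permissioned blockchain's election protocol.

```python
import random
from typing import List

def local_update(weights: List[float], lr: float = 0.1) -> List[float]:
    """Stand-in for local training: nudge each weight along a fake gradient."""
    return [w - lr * random.uniform(-1.0, 1.0) for w in weights]

def average(models: List[List[float]]) -> List[float]:
    """Coordinate-wise averaging of model parameters (FedAvg-style aggregation)."""
    return [sum(ws) / len(ws) for ws in zip(*models)]

def centralized_fl_round(global_model: List[float], n_clients: int) -> List[float]:
    """Star-shaped FL: every client trains locally, a central custodian aggregates."""
    local_models = [local_update(global_model) for _ in range(n_clients)]
    return average(local_models)  # single point of aggregation (and of failure)

def swarm_learning_round(models: List[List[float]]) -> List[List[float]]:
    """SL-style round: a member is elected leader for this round only and merges
    the received models; there is no fixed central server."""
    leader = random.randrange(len(models))  # stand-in for blockchain leader election
    trained = [local_update(m) for m in models]
    merged = average(trained)               # aggregation performed by the elected leader
    print(f"round leader: member {leader}")
    return [list(merged) for _ in models]   # every member adopts the merged model

if __name__ == "__main__":
    random.seed(0)
    model = centralized_fl_round([0.0] * 4, n_clients=3)
    members = swarm_learning_round([list(model) for _ in range(3)])
    print("centralized FL:", model)
    print("swarm learning:", members[0])
```

The contrast is purely structural: in the swarm-style round there is no fixed aggregator that could become a single point of failure or attack.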
Related papers
- Privacy in Federated Learning [0.0]
Federated Learning (FL) represents a significant advancement in distributed machine learning.
This chapter delves into the core privacy concerns within FL, including the risks of data reconstruction, model inversion attacks, and membership inference.
It examines the trade-offs between model accuracy and privacy, emphasizing the importance of balancing these factors in practical implementations.
arXiv Detail & Related papers (2024-08-12T18:41:58Z) - What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find that CLIP pre-trained on such data exhibits notable robustness to the data imbalance compared with supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z) - Swarm Learning: A Survey of Concepts, Applications, and Trends [3.55026004901472]
Deep learning models have raised privacy and security concerns due to their reliance on large datasets on central servers.
Federated learning (FL) has introduced a novel approach to building a versatile, large-scale machine learning framework.
Swarm learning (SL) has been proposed in collaboration with Hewlett Packard Enterprise (HPE).
SL represents a decentralized machine learning framework that leverages blockchain technology for secure, scalable, and private data management.
arXiv Detail & Related papers (2024-05-01T14:59:24Z) - Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - A Survey on Decentralized Federated Learning [0.709016563801433]
In recent years, federated learning has become a popular paradigm for training distributed, large-scale, and privacy-preserving machine learning (ML) systems.
In a typical FL system, the central server acts only as an orchestrator; it iteratively gathers and aggregates all the local models trained by each client on its private data until convergence.
One of the most critical challenges is to overcome the centralized orchestration of the classical FL client-server architecture.
Decentralized FL solutions have emerged where all FL clients cooperate and communicate without a central server.
arXiv Detail & Related papers (2023-08-08T22:07:15Z) - Defending Against Poisoning Attacks in Federated Learning with
Blockchain [12.840821573271999]
We propose a secure and reliable federated learning system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors.
arXiv Detail & Related papers (2023-07-02T11:23:33Z) - Does Decentralized Learning with Non-IID Unlabeled Data Benefit from
Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z) - Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z) - Divergence-aware Federated Self-Supervised Learning [16.025681567222477]
We introduce a generalized FedSSL framework that embraces existing SSL methods based on Siamese networks.
We then propose a new approach for model updates, the Federated Divergence-aware Exponential Moving Average update (FedEMA).
arXiv Detail & Related papers (2022-04-09T04:15:02Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round (a rough sketch of this round structure is given after this list).
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
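As a rough illustration of the BLADE-FL round summarized above, the following hedged Python sketch follows the three steps described there: local training and broadcast, a competition to generate a block from the received models, and aggregation of the models recorded in that block. The names `Block`, `train_locally`, and `blade_fl_round` are ours, random selection stands in for the actual block-generation competition, and plain averaging stands in for the aggregation rule; this is not the paper's implementation.

```python
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Block:
    """A block recording the models broadcast in one round (heavily simplified)."""
    round_id: int
    miner: int
    models: Dict[int, List[float]]

def train_locally(weights: List[float], lr: float = 0.1) -> List[float]:
    """Stand-in for a client's local training step."""
    return [w - lr * random.uniform(-1.0, 1.0) for w in weights]

def blade_fl_round(round_id: int, client_models: Dict[int, List[float]]) -> Dict[int, List[float]]:
    # 1. Every client trains locally and broadcasts its model to the other clients.
    broadcast = {cid: train_locally(m) for cid, m in client_models.items()}
    # 2. Clients compete to generate a block from the received models; a random
    #    winner stands in for the mining/consensus competition.
    miner = random.choice(list(broadcast))
    block = Block(round_id, miner, broadcast)
    # 3. Each client aggregates the models recorded in the block before starting
    #    the next round of local training.
    merged = [sum(ws) / len(ws) for ws in zip(*block.models.values())]
    return {cid: list(merged) for cid in client_models}

if __name__ == "__main__":
    random.seed(0)
    models = {cid: [0.0] * 4 for cid in range(3)}
    for r in range(2):
        models = blade_fl_round(r, models)
    print("model after 2 rounds:", models[0])
```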
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.