DeFL: Decentralized Weight Aggregation for Cross-silo Federated Learning
- URL: http://arxiv.org/abs/2208.00848v1
- Date: Mon, 1 Aug 2022 13:36:49 GMT
- Title: DeFL: Decentralized Weight Aggregation for Cross-silo Federated Learning
- Authors: Jialiang Han, Yudong Han, Gang Huang, Yun Ma
- Abstract summary: Federated learning (FL) is an emerging promising paradigm of privacy-preserving machine learning (ML).
We propose DeFL, a novel decentralized weight aggregation framework for cross-silo FL.
DeFL eliminates the central server by aggregating weights on each participating node; only the weights of the current training round are maintained and synchronized among all nodes.
- Score: 2.43923223501858
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) is an emerging promising paradigm of
privacy-preserving machine learning (ML). An important type of FL is cross-silo
FL, which enables a small scale of organizations to cooperatively train a
shared model by keeping confidential data locally and aggregating weights on a
central parameter server. However, the central server may be vulnerable to
malicious attacks or software failures in practice. To address this issue, in
this paper, we propose DeFL, a novel decentralized weight aggregation framework
for cross-silo FL. DeFL eliminates the central server by aggregating weights on
each participating node; only the weights of the current training round are
maintained and synchronized among all nodes. We use Multi-Krum to enable
aggregating correct weights from honest nodes and use HotStuff to ensure the
consistency of the training round number and weights among all nodes. Besides,
we theoretically analyze the Byzantine fault tolerance, convergence, and
complexity of DeFL. We conduct extensive experiments over two widely-adopted
public datasets, i.e. CIFAR-10 and Sentiment140, to evaluate the performance of
DeFL. Results show that DeFL defends against common threat models with minimal
accuracy loss, and achieves up to 100x reduction in storage overhead and up to
12x reduction in network overhead, compared to state-of-the-art decentralized
FL approaches.
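The abstract names Multi-Krum (Blanchard et al., 2017) as the rule each node uses to aggregate correct weights from honest peers, with HotStuff keeping the round number and weight set consistent. As a minimal sketch of the aggregation rule only, assuming flattened NumPy weight vectors; the function name `multi_krum`, the parameters `f` and `m`, and the toy usage below are illustrative assumptions, not DeFL's actual code:

```python
import numpy as np

def multi_krum(updates, f, m):
    """Multi-Krum: score each update by the summed squared distance to its
    n - f - 2 nearest peers, then average the m lowest-scoring updates."""
    n = len(updates)
    assert n - f - 2 > 0, "Multi-Krum needs n > f + 2 participants"
    X = np.stack([np.asarray(u, dtype=np.float64).ravel() for u in updates])  # (n, d)
    # pairwise squared Euclidean distances between all weight vectors
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)                # (n, n)
    scores = np.empty(n)
    for i in range(n):
        others = np.delete(d2[i], i)                  # drop the zero self-distance
        scores[i] = np.sort(others)[: n - f - 2].sum()
    selected = np.argsort(scores)[:m]                 # the m most "central" updates
    return X[selected].mean(axis=0), selected

# Toy usage (hypothetical numbers): 7 honest updates clustered near 0 and
# 2 outlier updates far away; tolerate up to f = 2 Byzantine nodes.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(7)]
byzantine = [rng.normal(5.0, 0.1, size=10) for _ in range(2)]
aggregated, picked = multi_krum(honest + byzantine, f=2, m=4)
print(picked)  # with these toy numbers, the two outliers (indices 7, 8) are left out
```

In DeFL's setting, each silo would presumably run such a rule over the weights synchronized for the current round, with f bounded by the Byzantine fault tolerance analyzed in the paper; the HotStuff consensus layer is not sketched here.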
Related papers
- Decentralized Personalized Federated Learning based on a Conditional Sparse-to-Sparser Scheme [5.5058010121503]
Decentralized Federated Learning (DFL) has become popular due to its robustness and avoidance of centralized coordination.
We propose a novel sparse-to-sparser training scheme: DA-DPFL.
Our experiments showcase that DA-DPFL substantially outperforms DFL baselines in test accuracy, while achieving up to a 5x reduction in energy costs.
arXiv Detail & Related papers (2024-04-24T16:03:34Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Hierarchical Personalized Federated Learning Over Massive Mobile Edge
Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z) - Improving the Model Consistency of Decentralized Federated Learning [68.2795379609854]
Decentralized Federated Learning (DFL) discards the central server; each client only communicates with its neighbors in a decentralized communication network.
Existing DFL suffers from inconsistency among local clients, which results in inferior performance compared to centralized FL.
We propose DFedSAM-MGS, where $1-\lambda$ is the spectral gap of the gossip matrix and $Q$ is the number of gossip steps (MGS).
arXiv Detail & Related papers (2023-02-08T14:37:34Z) - How Much Does It Cost to Train a Machine Learning Model over Distributed
Data Sources? [4.222078489059043]
Federated learning allows devices to train a machine learning model without sharing their raw data.
Server-less FL approaches such as gossip federated learning (GFL) and blockchain-enabled federated learning (BFL) have been proposed to mitigate the issues of centralized FL (CFL).
GFL saves 18% of training time, 68% of energy, and 51% of the data to be shared with respect to the CFL solution, but it cannot reach the accuracy level of CFL.
BFL represents a viable solution for implementing decentralized learning with a higher level of security, at the cost of extra energy usage and data sharing.
arXiv Detail & Related papers (2022-09-15T08:13:40Z) - Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with
Lazy Clients [124.48732110742623]
We propose a novel framework, BLADE-FL, by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating.
arXiv Detail & Related papers (2020-12-02T12:18:27Z) - GFL: A Decentralized Federated Learning Framework Based On Blockchain [15.929643607462353]
We propose Galaxy Federated Learning Framework (GFL), a decentralized FL framework based on blockchain.
GFL introduces the consistent hashing algorithm to improve communication performance and proposes a novel ring decentralized FL algorithm (RDFL) to improve decentralized FL performance and bandwidth utilization.
Our experiments show that GFL improves communication performance and decentralized FL performance under data poisoning by malicious nodes and non-independent and identically distributed (Non-IID) datasets.
arXiv Detail & Related papers (2020-10-21T13:36:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.