DeFed: A Principled Decentralized and Privacy-Preserving Federated
Learning Algorithm
- URL: http://arxiv.org/abs/2107.07171v1
- Date: Thu, 15 Jul 2021 07:39:19 GMT
- Title: DeFed: A Principled Decentralized and Privacy-Preserving Federated
Learning Algorithm
- Authors: Ye Yuan, Ruijuan Chen, Chuan Sun, Maolin Wang, Feng Hua, Xinlei Yi,
Tao Yang and Jun Liu
- Abstract summary: Federated learning enables a large number of clients to participate in learning a shared model while keeping the training data stored on each client.
Here we propose a principled decentralized federated learning algorithm (DeFed), which removes the central client in the classical Federated Averaging (FedAvg) setting.
The proposed algorithm is proven to reach the global minimum with a convergence rate of $O(1/T)$ when the loss function is smooth and strongly convex, where $T$ is the number of iterations in gradient descent.
- Score: 10.487593244018933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables a large number of clients to participate in
learning a shared model while keeping the training data stored on each
client, which protects data privacy and security. To date, federated learning
frameworks have been built in a centralized way, in which a central client is
needed to collect information from, and distribute information to, every other
client. This not only creates high communication pressure at the central
client, but also renders it highly vulnerable to failure and attack. Here we
propose a principled decentralized federated learning algorithm (DeFed), which
removes the central client from the classical Federated Averaging (FedAvg)
setting and relies only on information transmission between clients and their
local neighbors. The proposed DeFed algorithm is proven to reach the global
minimum with a convergence rate of $O(1/T)$ when the loss function is smooth
and strongly convex, where $T$ is the number of iterations in gradient descent.
Finally, the proposed algorithm has been applied to a number of toy examples to
demonstrate its effectiveness.
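The abstract leaves the update rule abstract; one standard way to realize neighbor-only communication is decentralized gradient descent with a mixing matrix. Below is a minimal sketch under that assumption; the ring topology, quadratic toy losses, variable names, and step size are illustrative choices, not details taken from the paper.

    # Minimal sketch of decentralized gradient descent with neighbor averaging.
    # Illustrative assumptions (not from the paper): a ring topology encoded by
    # a doubly stochastic mixing matrix W, quadratic per-client losses
    # f_i(x) = 0.5 * ||x - b_i||^2, and a constant step size.
    import numpy as np

    n_clients, dim, T = 8, 5, 300
    rng = np.random.default_rng(0)
    b = rng.normal(size=(n_clients, dim))      # per-client optima

    W = np.zeros((n_clients, n_clients))       # ring: self + two neighbors
    for i in range(n_clients):
        W[i, i] = 0.5
        W[i, (i - 1) % n_clients] = 0.25
        W[i, (i + 1) % n_clients] = 0.25

    x = np.zeros((n_clients, dim))             # one local model per client
    lr = 0.1
    for _ in range(T):
        grads = x - b                          # local gradient of f_i
        x = W @ x - lr * grads                 # average with neighbors, descend

    # Local models cluster around the global minimum (the mean of the b_i);
    # a diminishing step size would give exact convergence, matching the
    # O(1/T) rate stated in the abstract.
    print(np.linalg.norm(x - b.mean(axis=0), axis=1))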
Related papers
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks [1.689369173057502]
Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients are able to corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of a client's model weights.
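The summary leaves the scoring step abstract; below is a generic, hypothetical sketch of likelihood-weighted aggregation in that spirit, in which each client's weights are scored under a Gaussian centered on the coordinate-wise median of all submissions, so outlying (potentially poisoned) updates receive little mass. This is an illustration, not the paper's exact method.

    # Hypothetical sketch of probability-weighted robust aggregation.
    # Assumption (not from the paper): score each client's weights under a
    # Gaussian centered on the coordinate-wise median of all submissions.
    import numpy as np

    def robust_aggregate(client_weights):
        # client_weights: (n_clients, dim) flattened model weights
        center = np.median(client_weights, axis=0)
        spread = np.std(client_weights, axis=0) + 1e-8
        # log-likelihood of each client's weights under N(center, spread^2)
        log_lik = -0.5 * (((client_weights - center) / spread) ** 2).sum(axis=1)
        scores = np.exp(log_lik - log_lik.max())   # stabilize, then normalize
        scores /= scores.sum()
        return scores @ client_weights             # probability-weighted average

    rng = np.random.default_rng(1)
    honest = rng.normal(0.0, 0.1, size=(5, 4))
    malicious = 10.0 * np.ones((1, 4))             # inflated poisoned update
    print(robust_aggregate(np.vstack([malicious, honest])))  # near honest mean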
arXiv Detail & Related papers (2023-12-04T21:37:50Z)
- Re-Weighted Softmax Cross-Entropy to Control Forgetting in Federated Learning [14.196701066823499]
In Federated Learning, a global model is learned by aggregating model updates computed at a set of independent client nodes.
We show that individual client models experience catastrophic forgetting with respect to data from other clients.
We propose an efficient approach that modifies the cross-entropy objective on a per-client basis by re-weighting the softmax logits prior to computing the loss.
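As a concrete illustration of per-client logit re-weighting, here is a hedged sketch: weighting the softmax by a client's label frequencies is equivalent to adding log-frequencies to the logits before the standard cross-entropy. The paper's exact weighting scheme may differ; class_freq and the epsilon are illustrative.

    # Sketch of per-client re-weighted softmax cross-entropy. One plausible
    # instantiation (not necessarily the paper's exact formula): weight the
    # softmax by the client's label frequencies, i.e. shift the logits by
    # their logs.
    import torch
    import torch.nn.functional as F

    def reweighted_ce(logits, targets, class_freq):
        # logits: (batch, classes); class_freq: this client's label distribution
        return F.cross_entropy(logits + torch.log(class_freq + 1e-8), targets)

    logits = torch.randn(4, 3)
    targets = torch.tensor([0, 1, 2, 0])
    freq = torch.tensor([0.7, 0.2, 0.1])   # heavily skewed local labels
    print(reweighted_ce(logits, targets, freq))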
arXiv Detail & Related papers (2023-04-11T14:51:55Z)
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing FL algorithms essentially try to group together clients whose data distributions are similar, but prior methods estimate these similarities only indirectly during training.
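The title's key quantity, the principal angles between client data subspaces, can be computed from an SVD of the product of the subspaces' orthonormal bases; a small illustrative sketch follows. Taking each client's subspace to be the span of the top singular vectors of its data matrix is an assumption made here for illustration.

    # Sketch: principal angles between two clients' data subspaces.
    # Illustrative assumption: each subspace is spanned by the top-p left
    # singular vectors of the client's (features x samples) data matrix.
    import numpy as np

    def principal_angles(X1, X2, p=3):
        U1 = np.linalg.svd(X1, full_matrices=False)[0][:, :p]
        U2 = np.linalg.svd(X2, full_matrices=False)[0][:, :p]
        # Singular values of U1^T U2 are the cosines of the principal angles.
        cosines = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), -1.0, 1.0)
        return np.arccos(cosines)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 100))
    print(principal_angles(A, A + 0.01 * rng.normal(size=A.shape)))  # near zero
    print(principal_angles(A, rng.normal(size=(20, 100))))           # larger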
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Game of Gradients: Mitigating Irrelevant Clients in Federated Learning [3.2095659532757916]
Federated learning (FL) deals with multiple clients participating in collaborative training of a machine learning model under the orchestration of a central server.
In this setup, each client's data is private to itself and is not transferable to other clients or the server.
Some clients may possess data irrelevant to the model being trained; we refer to the problems of detecting and selecting relevant clients as Federated Relevant Client Selection (FRCS).
arXiv Detail & Related papers (2021-10-23T16:34:42Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively trains a shared global model by aggregating contributions from multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)
- Decentralized Federated Averaging [17.63112147669365]
Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training with an enormous number of clients.
We study the decentralized FedAvg with momentum (DFedAvgM), which is implemented on clients that are connected by an undirected graph.
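A compact sketch of what one decentralized-FedAvg-with-momentum round could look like is given below; the exact recursion in the paper may differ, and all names here are illustrative.

    # Illustrative sketch of a DFedAvgM-style round (not the paper's exact
    # recursion): each client takes a momentum-SGD step on its local loss,
    # then averages models with its graph neighbors via a mixing matrix W.
    import numpy as np

    def dfedavgm_round(x, m, grads, W, lr=0.1, beta=0.9):
        # x, m, grads: (n_clients, dim); W: (n_clients, n_clients) mixing matrix
        m = beta * m + grads      # update each client's momentum buffer
        x = x - lr * m            # local momentum-SGD step
        return W @ x, m           # gossip-average the models with neighbors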
arXiv Detail & Related papers (2021-04-23T02:01:30Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
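The round structure described above can be sketched directly; the stand-in below abstracts away the actual block-generation competition (a random winner replaces real mining/consensus, an explicit assumption), keeping only broadcast, block packing, and aggregation.

    # Highly simplified sketch of one BLADE-FL round as summarized above.
    # Assumption: a random winner stands in for the block competition.
    import numpy as np

    def blade_fl_round(local_models, rng):
        broadcast = list(local_models)                  # all-to-all broadcast
        winner = int(rng.integers(len(broadcast)))      # stand-in for mining
        block = {"miner": winner, "models": broadcast}  # winner packs a block
        aggregated = np.mean(block["models"], axis=0)   # aggregate from block
        return [aggregated.copy() for _ in local_models]  # next round's start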
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)